Virtual system and method for securing external network connectivity

Information

  • Patent Grant
  • Patent Number
    11,113,086
  • Date Filed
    Thursday, June 30, 2016
  • Date Issued
    Tuesday, September 7, 2021
Abstract
According to one embodiment, a computing device comprises one or more hardware processors and a memory coupled to the one or more processors. The memory comprises software that supports a virtualization software architecture including a first virtual machine operating under control of a first operating system. Responsive to determining that the first operating system has been compromised, a second operating system, which is stored in the memory in an inactive (dormant) state, becomes active and controls either the first virtual machine or a second virtual machine, different from the first virtual machine, which then provides external network connectivity.
Description
FIELD

Embodiments of the disclosure relate to the field of malware detection. More specifically, one embodiment of the disclosure relates to a hypervisor-based, malware detection architecture.


GENERAL BACKGROUND

In general, virtualization is a technique for hosting different guest operating systems concurrently on the same computing platform. With the emergence of hardware support for full virtualization in an increased number of hardware processor architectures, new virtualization software architectures have emerged. One such virtualization architecture involves adding a software abstraction layer, sometimes referred to as a virtualization layer, between the physical hardware and a virtual machine (referred to as “VM”).


A VM is a software abstraction that operates like a physical (real) computing device having a particular operating system. A VM typically features pass-through physical and/or emulated virtual system hardware, and guest system software. The virtual system hardware is implemented by software components in the host (e.g., virtual central processing unit “vCPU” or virtual network interface card “vNIC”) that are configured to operate in a similar manner as corresponding physical components (e.g., physical CPU or NIC). The guest system software comprises a “guest” OS and one or more “guest” applications. Controlling execution and allocation of virtual resources, the guest OS may include an independent instance of an operating system such as WINDOWS® OS, MAC® OS, LINUX® OS or the like. The guest application(s) may include any desired software application type such as a Portable Document Format (PDF) reader (e.g., ACROBAT®), a web browser (e.g., EXPLORER®), a word processing application (e.g., WORD®), or the like.


When the virtualization layer runs on an endpoint device, the guest OS is in control of the pass-through endpoint device hardware, notably the network interface card (NIC). A successful (malicious) attack on the guest OS may allow the attacker to take control of the guest OS and disable external network connectivity through it. For instance, if the guest OS crashes as a result of such an attack, the virtualization layer of the endpoint device would have neither the ability to communicate with other external devices nor the ability to provide an alert message advising an administrator of the occurrence of the malicious attack.


A mechanism is needed to ensure external network connectivity and communications even if the guest OS is compromised or no longer functioning correctly.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the disclosure are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:



FIG. 1A and FIG. 1B are exemplary block diagrams of a system network that may be utilized by a computing device configured to support virtualization with enhanced security.



FIG. 2 is an exemplary block diagram of a logical representation of the endpoint device of FIG. 1.



FIG. 3 is an exemplary embodiment of the virtualization software architecture of the endpoint device of FIG. 2 with compromised guest OS detection and OS recovery.



FIG. 4A is an exemplary flowchart of the operations associated with a first technique for guest OS evaluation and OS recovery.



FIG. 4B is an exemplary flowchart of the operations associated with a second technique for guest OS evaluation and OS recovery.



FIG. 5 is an exemplary flowchart of the operations for detecting loss of network connectivity caused by a non-functional guest OS and conducting an OS recovery response to re-establish network connectivity.





DETAILED DESCRIPTION

Various embodiments of the disclosure are directed to added functionality of the virtualization layer to transition from a first virtual machine with a first (guest) operating system to a second virtual machine with a second (recovery) operating system (OS) in response to the virtualization layer determining that the guest OS is “compromised,” namely that the guest OS is not functioning properly due to malicious operations conducted by malware. More specifically, the virtualization layer is configured to determine that the guest OS is “compromised” upon detecting (i) an attempt to disable, or an actual loss of, external network connectivity, or (ii) that the guest OS is no longer working (non-functional). For example, where the guest OS kernel is responsible for the attempted disablement or loss of external network connectivity (and network connectivity cannot be restored after repeated retries), the virtualization layer considers the guest OS kernel compromised, as being hijacked or infected with malware. As another example, where the guest OS kernel is non-functional, which may be due to a number of factors including malware crashing the kernel, the virtualization layer considers the guest OS compromised. As yet another example, where a guest OS application is no longer working (or no longer working properly), especially if that application is crucial to network connectivity (e.g., a network daemon) or to issuing alerts (e.g., an agent), the virtualization layer considers the guest OS compromised.
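The compromise determination described above can be summarized as a simple decision procedure. The following Python sketch is purely illustrative and not part of the disclosed implementation; the state fields, the retry budget (`MAX_RETRIES`), and the helper names are all assumptions made for this example, and a real virtualization layer would observe these conditions through hypervisor events rather than polled flags.

```python
# Hypothetical sketch of the virtualization layer's compromise check. All
# names are invented for illustration; they are not from the disclosure.
from dataclasses import dataclass

MAX_RETRIES = 3  # assumed retry budget before declaring the kernel compromised

@dataclass
class GuestState:
    network_up: bool             # (i) external network connectivity
    kernel_responsive: bool      # (ii) guest OS kernel still functional
    critical_apps_running: bool  # e.g., network daemon, alert agent

def try_restore_network(state: GuestState) -> bool:
    """Placeholder for one connectivity-restore attempt via the guest OS."""
    return state.network_up

def guest_os_compromised(state: GuestState) -> bool:
    # (ii) guest OS kernel is non-functional (e.g., crashed by malware)
    if not state.kernel_responsive:
        return True
    # (i) loss of connectivity that repeated retries cannot restore
    if not state.network_up:
        if not any(try_restore_network(state) for _ in range(MAX_RETRIES)):
            return True
    # a crucial guest application (network daemon, alert agent) stopped working
    if not state.critical_apps_running:
        return True
    return False
```

The three branches mirror the three examples given above; any one of them is sufficient for the layer to treat the guest OS as compromised.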


Upon determining that the guest OS is compromised, the first virtual machine is stopped by halting operations of one or more virtual processors (vCPUs) within the first virtual machine. As an optional operation, the state of the first virtual machine (e.g., a snapshot of stored content) may be captured by the virtualization layer. Also, normally retained in a dormant state as an OS image within a memory (e.g., a particular non-transitory storage medium such as main memory or on disk), the recovery OS may be accessed once a decision is made to bootstrap the recovery OS.


According to one embodiment of the disclosure, a second virtual machine may be created by bootstrapping a second OS, namely the recovery OS, which includes the recovery OS kernel and one or more guest OS applications. Thereafter, the recovery OS is assigned network device resources that were previously assigned to the guest OS. The recovery OS may be a different type of OS than the guest OS. For instance, the guest OS may be a WINDOWS® OS while the recovery OS may be a LINUX® OS with a minimal memory footprint, stored either on disk or in memory.


According to one embodiment of the disclosure, after the network device resources have been reassigned to the recovery OS, which is responsive to the virtualization layer detecting that the guest OS is compromised, the second virtual machine undergoes a boot process. The purpose of transitioning from the first virtual machine to the second virtual machine is to provide a clean, uninfected and trustworthy platform environment, given that the second virtual machine was dormant (e.g., in a pre-boot state and not running) when the malicious attack occurred. After completion of the boot process, the second virtual machine is capable of driving a physical pass-through network adapter (e.g., a physical NIC or software-emulated NIC) to establish a network connection to another computing device for reporting one or more detected malicious events that occurred while the first virtual machine was executing. This reporting may include the transmission of an alert in a message format (e.g., a Short Message Service “SMS” message, Extended Message Service “EMS” message, Multimedia Messaging Service “MMS” message, email, etc.) or any other prescribed wired or wireless transmission format. As part of this reporting, a partial state or an entire state of the compromised guest OS (or a portion thereof) may be stored and subsequently provided for offline forensic analysis.
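The recovery sequence described above (halt the vCPUs, optionally capture a snapshot, reassign network resources, boot the recovery OS, then report) can be sketched as an ordered workflow. Every class and method name below is an invented stand-in for illustration only:

```python
# Illustrative ordering of the recovery steps; each step just logs its action
# so the sequence can be inspected. Not the patented implementation.
class RecoveryOrchestrator:
    def __init__(self):
        self.log = []

    def halt_vcpus(self, vm):
        self.log.append(f"halt:{vm}")  # stop the compromised VM's vCPUs

    def snapshot(self, vm):
        self.log.append(f"snapshot:{vm}")  # optional: keep state for forensics

    def reassign_network(self, src_os, dst_os):
        self.log.append(f"reassign:{src_os}->{dst_os}")

    def boot(self, vm, os_name):
        self.log.append(f"boot:{vm}:{os_name}")

    def send_alert(self, message):
        self.log.append(f"alert:{message}")  # e.g., SMS/EMS/MMS/email report

    def recover(self, capture_state=True):
        self.halt_vcpus("VM1")
        if capture_state:
            self.snapshot("VM1")
        self.reassign_network("guest_os", "recovery_os")
        self.boot("VM2", "recovery_os")
        self.send_alert("guest OS compromised; forensic state captured")
        return self.log
```

The ordering matters: the network resources are handed over before the second virtual machine boots, matching the embodiment above.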


Alternatively, in lieu of deploying another virtual machine, the recovery OS and its corresponding guest OS application(s) may be installed into the first virtual machine along with removal of the guest OS (guest OS kernel and its corresponding guest OS applications). Although the first virtual machine is reused, for discussion herein, the reconfigured first virtual machine is referred to as a “second” virtual machine. However, in accordance with this embodiment, the state of the first virtual machine prior to installation of the recovery OS should be captured as described above. Otherwise, any previous state of the guest OS would be lost upon installation of the recovery OS.
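This reuse-the-same-VM alternative can be illustrated with a minimal sketch. The ordering constraint is the key point: the snapshot must be captured before the OS image is swapped, or the guest state is lost. Everything here is invented scaffolding, not the patented implementation:

```python
# Toy model of the in-place substitution embodiment: capture the guest state,
# then replace the OS inside the same (reused) virtual machine.
class VirtualMachine:
    def __init__(self, os_name):
        self.os_name = os_name

    def snapshot(self):
        # Stand-in for capturing the VM state for offline forensic analysis.
        return {"os": self.os_name}

def substitute_os(vm, recovery_os):
    saved = vm.snapshot()    # must happen first, or the guest state is lost
    vm.os_name = recovery_os # remove guest OS, install recovery OS
    return saved

vm1 = VirtualMachine("guest_os")
forensic_state = substitute_os(vm1, "recovery_os")
```

After the swap, the same VM object plays the role of the “second” virtual machine referred to in the discussion above.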


Herein, the virtualization layer is a logical representation of at least a portion of a host environment of the virtualization architecture for the computing device. The host environment features a light-weight hypervisor (sometimes referred to herein as a “micro-hypervisor”) operating at a high privilege level (e.g., ring “0”). In general, the micro-hypervisor operates similar to a host kernel, where the micro-hypervisor at least partially controls the behavior of a virtual machine (VM). Examples of different types of VM behaviors may include the allocation of resources for the VM, scheduling for the VM, which events cause VM exits, or the like. The host environment further features a plurality of software components, generally operating as user-level virtual machine monitors (VMMs), which provide host functionality and operate at a lower privilege level (e.g., ring “3”) than the micro-hypervisor.


In summary, a first virtual machine under control of a guest OS is in operation while an OS image of a recovery OS is dormant and resides in a particular location of a non-transitory storage medium. In response to a decision by the virtualization layer to bootstrap the recovery OS, normally upon the occurrence of a prescribed event (e.g., loss of network connectivity caused by a compromised or malfunctioning guest OS kernel) as described, a second virtual machine is created under control of the recovery OS. Alternatively, in response to a decision by the virtualization layer to substitute the recovery OS for the guest OS within the first virtual machine, the reconfigured virtual machine, also referred to as the “second virtual machine,” is created under control of the recovery OS. However, the state of the first virtual machine, notably the guest OS, should be captured prior to substitution of the recovery OS to avoid loss of the state of the guest OS at the time it was compromised.


I. Terminology

In the following description, certain terminology is used to describe features of the invention. For example, in certain situations, the terms “component” and “logic” are representative of hardware, firmware, software or a running process that is configured to perform one or more functions. As hardware, a component (or logic) may include circuitry having data processing or storage functionality. Examples of such circuitry may include, but are not limited or restricted to a hardware processor (e.g., microprocessor with one or more processor cores, a digital signal processor, a programmable gate array, a microcontroller, an application specific integrated circuit “ASIC”, etc.), a semiconductor memory, or combinatorial elements.


A component (or logic) may be software in the form of one or more software modules, such as executable code in the form of an executable application, an API, a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, object code, a shared library/dynamic load library, or one or more instructions. Each or any of these software components may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); or persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device. As firmware, the executable code may be stored in persistent storage.


The term “object” generally refers to a collection of data, whether in transit (e.g., over a network) or at rest (e.g., stored), often having a logical structure or organization that enables it to be classified for purposes of analysis for malware. During analysis, for example, the object may exhibit certain expected characteristics (e.g., expected internal content such as bit patterns, data structures, etc.) and, during processing, a set of expected behaviors. The object may also exhibit unexpected characteristics and a set of unexpected behaviors that may offer evidence of the presence of malware and potentially allow the object to be classified as part of a malicious attack.


Examples of objects may include one or more flows or a self-contained element within a flow itself. A “flow” generally refers to related packets that are received, transmitted, or exchanged within a communication session. For convenience, a packet is broadly referred to as a series of bits or bytes having a prescribed format, which may, according to one embodiment, include packets, frames, or cells. Further, an “object” may also refer to an individual packet or a number of packets carrying related payloads, e.g., a single webpage received over a network. Moreover, an object may be a file retrieved from a storage location over an interconnect.


As a self-contained element, the object may be an executable (e.g., an application, program, segment of code, dynamically link library “DLL”, etc.) or a non-executable. Examples of non-executables may include a document (e.g., a Portable Document Format “PDF” document, Microsoft® Office® document, Microsoft® Excel® spreadsheet, etc.), an electronic mail (email), downloaded web page, or the like.


The term “event” should be generally construed as an activity that is conducted by a software component running on the computing device. An event may occur that causes an undesired action, such as overwriting a buffer, disabling a certain protective feature in the guest environment, or a guest OS anomaly such as the guest OS kernel trying to execute from a user page. Generically, an object or event may be referred to as “data under analysis”.


The term “computing device” should be construed as electronics with data processing capability and/or a capability of connecting to any type of network, such as a public network (e.g., Internet), a private network (e.g., a wireless data telecommunication network, a local area network “LAN”, etc.), or a combination of networks. Examples of a computing device may include, but are not limited or restricted to, the following: an endpoint device (e.g., a laptop, a smartphone, a tablet, a desktop computer, a netbook, a medical device, or any general-purpose or special-purpose, user-controlled electronic device configured to support virtualization); a server; a mainframe; a router; or a security appliance that includes any system or subsystem configured to perform functions associated with malware detection and may be communicatively coupled to a network to intercept data routed to or from an endpoint device.


The term “malware” may be broadly construed as information, in the form of software, data, or one or more commands, that is intended to cause an undesired behavior upon execution, where the behavior is deemed to be “undesired” based on customer-specific rules, manufacturer-based rules, and any other type of rules formulated by public opinion or a particular governmental or commercial entity. This undesired behavior may include a communication-based anomaly or an execution-based anomaly that would (1) alter the functionality of an electronic device executing application software in a malicious manner; (2) alter the functionality of an electronic device executing that application software without any malicious intent; and/or (3) provide an unwanted functionality that is generally acceptable in another context.


The term “interconnect” may be construed as a physical or logical communication path between two or more computing platforms. For instance, the communication path may include wired and/or wireless transmission mediums. Examples of wired and/or wireless transmission mediums may include electrical wiring, optical fiber, cable, bus trace, a radio unit that supports radio frequency (RF) signaling, or any other wired/wireless signal transfer mechanism.


The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware. Also, the term “agent” should be interpreted as a software component that instantiates a process running in a virtual machine. The agent may be instrumented into part of an operating system (e.g., guest OS) or part of an application (e.g., guest software application). The agent is configured to provide metadata to a portion of the virtualization layer, namely software that virtualizes certain functionality supported by the computing device.


Lastly, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. Therefore, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.


II. General Architecture

Referring to FIG. 1A, an exemplary block diagram of a system network 100 that may be utilized by a computing device configured to support virtualization with enhanced security is described herein. The system network 100 may be organized as a plurality of networks, such as a public network 110 and/or a private network 120 (e.g., an organization or enterprise network). According to this embodiment of system network 100, the public network 110 and the private network 120 are communicatively coupled via network interconnects 130 and intermediary computing devices 140₁, such as network switches, routers and/or one or more malware detection system (MDS) appliances (e.g., intermediary computing device 140₂), as described in co-pending U.S. patent application entitled “Microvisor-Based Malware Detection Appliance Architecture” (U.S. patent application Ser. No. 14/962,497), the entire contents of which are incorporated herein by reference. The network interconnects 130 and intermediary computing devices 140₁, inter alia, provide connectivity between the private network 120 and a computing device 140₃, which may be operating as an endpoint device, for example.


The computing devices 140ᵢ (i=1, 2, 3) illustratively communicate by exchanging messages (e.g., packets or other data in a prescribed format) according to a predefined set of protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP). However, it should be noted that other protocols, such as the HyperText Transfer Protocol Secure (HTTPS) for example, may be advantageously used with the inventive aspects described herein. In the case of private network 120, the intermediary computing device 140₁ may include a firewall or other computing device configured to limit or block certain network traffic in an attempt to protect the endpoint devices 140₃ from unauthorized users.


As illustrated in greater detail in FIG. 1B, the endpoint device 140₃ supports a virtualization software architecture 150 that comprises a guest environment 160 and a host environment 180. As shown, the guest environment 160 comprises one or more virtual machines. A first virtual machine (VM1) 170 comprises a guest operating system (OS) 300, which includes a guest OS kernel. Residing within the first virtual machine 170 (e.g., within the guest OS 300 or another software component within the first virtual machine 170 or the guest environment 160), a certain component, sometimes referred to as a “guest agent” 172, may be configured to monitor and store metadata (e.g., state information, memory accesses, process names, etc.) and subsequently provide the metadata to a virtualization layer 185 within the host environment 180.


A second virtual machine (VM2) 175 comprises a recovery OS 310, which includes a recovery OS kernel 311 and one or more guest OS applications 312 (e.g., applications to configure or bring up the network interface such as a DHCP client, applications for copying files from one machine to another, etc.). The second virtual machine 175 resides in a dormant (non-boot) state until its boot process is initiated by the virtualization layer 185. Although the virtualization layer provides memory isolation between software components, in order to further mitigate a spread of infection of malware already infecting a network device, it is contemplated that the recovery OS 310 may be deployed with an OS type different than the OS type of the guest OS 300. In particular, if the guest OS 300 has been compromised due to an exploitable software vulnerability, it is not desired to provide the recovery OS 310 with the same vulnerability. For example, where the guest OS 300 within the first virtual machine 170 is a WINDOWS® operating system or an IOS® operating system, the recovery OS 310 within the second virtual machine 175 (as shown) may be configured as a version of the LINUX® operating system or the ANDROID® operating system. It is contemplated that the second virtual machine 175 may be shown as a logical representation, in that the second virtual machine 175 may in fact be the first virtual machine 170 with the guest OS 300 (guest OS kernel and corresponding guest applications, not shown) replaced by the recovery OS 310.


The virtualization layer 185 features (i) a micro-hypervisor 360 (shown in FIG. 3) with access to physical hardware 190 and (ii) one or more host applications running in the user space (not shown). Both the micro-hypervisor and the host applications operate in concert to provide additional functionality by controlling configuration of the second virtual machine 175, including activation or deactivation of the first virtual machine 170, in response to detection of events associated with anomalous behaviors indicating that the guest OS 300 has been compromised. This additional functionality ensures that external network connectivity remains available to the endpoint device 140₃ even when the guest OS 300 is non-functional or potentially hijacked by malware.


Referring now to FIG. 2, an exemplary block diagram of a representation of the endpoint device 140₃ is shown. Herein, the endpoint device 140₃ illustratively includes at least one hardware processor 210, a memory 220, one or more network interfaces (referred to as “network interface(s)”) 230, and one or more network devices (referred to as “network device(s)”) 240 communicatively coupled by a system interconnect 260, such as a bus. These components are at least partially encased in a housing 200, which is made entirely or partially of a rigid material (e.g., hardened plastic, metal, glass, composite, or any combination thereof) that protects these components from atmospheric conditions.


It is contemplated that some or all of the network device(s) 240 may be coupled to the system interconnect 260 via an Input/Output Memory Management Unit (IOMMU) 250. The IOMMU 250 provides direct memory access (DMA) management capabilities for direct access of data within the memory 220. According to one embodiment of the disclosure, in response to signaling from a component within the virtualization layer 185 (e.g., micro-hypervisor 360 or one of the hyper-processes 370), the IOMMU 250 may be configured to assign (or re-assign) network devices to a particular OS. For instance, when re-configured by the virtualization layer 185, the IOMMU 250 may re-assign some or all of the network device(s) 240 from the guest OS 300 to the recovery OS 310, and one of the hyper-processes 370, namely the guest monitor component 376, may reassign all PCI device (e.g., memory-mapped input/output “MMIO” or I/O) resources of a network adapter from one virtual machine to another, as described below.
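As a rough mental model of this re-assignment, one can picture the IOMMU as a table mapping each DMA-capable device to the OS domain permitted to use it. The toy model below is an illustration only; a real IOMMU programs hardware DMA translation tables, and the device and domain names are assumptions:

```python
# Toy model of IOMMU device-to-domain assignment. Purely illustrative.
class ToyIOMMU:
    def __init__(self):
        self.assignment = {}  # device id -> OS domain allowed to DMA through it

    def assign(self, device, domain):
        self.assignment[device] = domain

    def reassign_all(self, old_domain, new_domain):
        # Move every device currently owned by old_domain to new_domain.
        for device, domain in self.assignment.items():
            if domain == old_domain:
                self.assignment[device] = new_domain

iommu = ToyIOMMU()
iommu.assign("nic0", "guest_os")   # hypothetical wired NIC
iommu.assign("wifi0", "guest_os")  # hypothetical wireless adapter
# Virtualization layer detects compromise; hand the network path to the
# recovery OS so external connectivity survives the attack.
iommu.reassign_all("guest_os", "recovery_os")
```

After the re-assignment, DMA from the network devices reaches memory owned by the recovery OS rather than the compromised guest OS.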


The hardware processor 210 is a multipurpose, programmable device that accepts digital data as input, processes the input data according to instructions stored in its memory, and provides results as output. One example of the hardware processor 210 may include an Intel® x86 central processing unit (CPU) with an instruction set architecture. Alternatively, the hardware processor 210 may include another type of CPU, a digital signal processor (DSP), an ASIC, or the like.


The network device(s) 240 may include various input/output (I/O) or peripheral devices, such as a storage device for example. One type of storage device may include a solid state drive (SSD) embodied as a flash storage device or other non-volatile, solid-state electronic device (e.g., drives based on storage class memory components). Another type of storage device may include a hard disk drive (HDD).


Each network interface 230 may include one or more network ports containing the mechanical, electrical and/or signaling circuitry needed to connect the endpoint device 140₃ to the network 120 of FIG. 1, thereby facilitating communications over the system network 110. To that end, the network interface(s) 230 may be configured to transmit and/or receive messages using a variety of communication protocols including, inter alia, TCP/IP and HTTPS.


The memory 220 may include a plurality of locations that are addressable by the hardware processor 210 and the network interface(s) 230 for storing software (including software applications) and data structures associated with such software. The hardware processor 210 is adapted to manipulate the stored data structures as well as execute the stored software, which includes the guest OS 300, the recovery OS 310, user mode processes 320, a micro-hypervisor 360 and hyper-processes 370.


Herein, the hyper-processes 370 may include instances of software program code (e.g., user-space applications operating as user-level VMMs) that are isolated from each other and run on separate address spaces. In communication with the micro-hypervisor 360, the hyper-processes 370 are responsible for controlling operability of the endpoint device 140₃, including policy and resource allocation decisions, maintaining logs of monitored events for subsequent analysis, managing virtual machine (VM) execution, and managing malware detection and classification.


The micro-hypervisor 360 is disposed or layered beneath the guest OS kernel 301 and/or the recovery OS kernel 311 of the endpoint device 140₃ and is the only component that runs in the most privileged processor mode (host mode, ring-0). As part of a trusted computing base of most components in the computing platform, the micro-hypervisor 360 is configured as a light-weight hypervisor (e.g., less than 10K lines of code), thereby avoiding inclusion of potentially exploitable virtualization code in an operating system (e.g., x86 virtualization code).


The micro-hypervisor 360 generally operates as a host kernel that is devoid of policy enforcement; rather, the micro-hypervisor 360 provides a plurality of mechanisms that may be used by the hyper-processes 370 for controlling operability of the virtualization. These mechanisms may be configured to control communications between separate protection domains (e.g., between two different hyper-processes 370), coordinate thread processing within the hyper-processes 370 and virtual CPU (vCPU) processing within the first virtual machine (VM1) 170 or the second virtual machine (VM2) 175, delegate and/or revoke hardware resources, and control interrupt delivery and DMA, as described below.
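The mechanism-versus-policy split described above can be sketched as follows: the micro-hypervisor exposes only primitives (delegate or revoke a resource), while a user-level hyper-process holds the policy deciding when to invoke them. All class, method, and resource names below are invented for this example and do not correspond to the actual components:

```python
# Illustrative sketch of the mechanism-vs-policy split. Not the real design.
class MicroHypervisor:
    """Mechanisms only: no policy decisions live here."""

    def __init__(self):
        self.grants = set()  # (domain, resource) pairs currently delegated

    def delegate(self, domain, resource):
        self.grants.add((domain, resource))

    def revoke(self, domain, resource):
        self.grants.discard((domain, resource))

    def has(self, domain, resource):
        return (domain, resource) in self.grants


class GuestMonitor:
    """Policy lives in a user-level hyper-process (lower privilege level)."""

    def __init__(self, mh):
        self.mh = mh

    def on_compromise(self):
        # Policy decision: move the NIC from the guest to the recovery domain.
        self.mh.revoke("guest_os", "nic0")
        self.mh.delegate("recovery_os", "nic0")


mh = MicroHypervisor()
mh.delegate("guest_os", "nic0")   # normal operation: guest OS owns the NIC
GuestMonitor(mh).on_compromise()  # compromise detected: policy reassigns it
```

Keeping the policy out of the most privileged component is what allows the micro-hypervisor to stay small (the "less than 10K lines of code" noted above).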


The guest OS 300, portions of which are resident in memory 220 and executed by the hardware processor 210, functionally organizes the endpoint device 140₃ by, inter alia, invoking operations that support guest applications executing on the endpoint device 140₃. An exemplary guest OS 300 may include a version of the WINDOWS® operating system, a version of the MAC OS® or IOS® series of operating systems, a version of the LINUX® operating system or a version of the ANDROID® operating system, among others.


The recovery OS 310, portions of which are resident in memory 220 and executed by the hardware processor 210, functionally organizes the endpoint device 140₃ by, inter alia, invoking operations to at least drive one or more network adapters that provide external network connectivity by establishing one or more external communication channels with one or more other computing devices. Examples of a network adapter may include, but are not limited or restricted to, physical or software-emulated data transfer devices such as a network interface card (NIC), a modem, a wireless chipset that supports radio (e.g., radio frequency “RF” signals such as IEEE 802.11 based communications) or cellular transmissions, or a light-emitting device that produces light pulses for communications. It is contemplated that credentials for wireless network access may either be pre-configured in the recovery OS 310, or the guest agent 172 in the guest OS 300 (which is virtualization aware) may send those credentials via the virtualization layer to the recovery OS 310. For wireless network connectivity, the credentials may include a Service Set Identifier (SSID), one or more pre-shared keys, or the like.
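The credential hand-off described above can be modeled as a small mailbox maintained by the virtualization layer: the guest agent deposits the wireless credentials, and the recovery OS retrieves them after boot. This sketch, including the field names (`ssid`, `psk`), is an assumption for illustration only:

```python
# Toy mailbox for passing wireless credentials from the guest agent to the
# recovery OS through the virtualization layer. Names are illustrative.
class VirtualizationLayerMailbox:
    def __init__(self):
        self._mailbox = {}

    def store_credentials(self, creds):
        # Guest agent in the (virtualization-aware) guest OS pushes these down.
        self._mailbox["wifi"] = dict(creds)

    def fetch_credentials(self):
        # Recovery OS pulls them after its boot process completes.
        return self._mailbox.get("wifi", {})

layer = VirtualizationLayerMailbox()
layer.store_credentials({"ssid": "corp-net", "psk": "example-key"})
recovered = layer.fetch_credentials()
```

Because the mailbox lives in the virtualization layer rather than in guest memory, the credentials remain reachable even after the guest OS is halted.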


Herein, in order to prevent malware from a compromised guest OS 300 of the first virtual machine 170 from potentially infecting the recovery OS 310 within the second virtual machine 175, the recovery OS 310 may feature an operating system type different from the guest OS 300 so as not to suffer the same vulnerabilities that could be exploited. Also, the recovery OS 310 of the second virtual machine 175 may be configured to support lesser functionality than the guest OS 300; for example, the recovery OS 310 may be configured to drive only a small subset (e.g., fewer than ten) of the network devices supported by the guest OS 300. Although the second virtual machine 175 is shown, for illustrative purposes, as being separate from the first virtual machine 170, it is contemplated that the second virtual machine 175 may be either a different virtual machine or a reconfiguration of the first virtual machine 170 with the recovery OS 310 in lieu of the guest OS 300. For the latter, the first virtual machine 170 is reused by deleting the compromised OS and substituting the recovery OS 310 in its place.


Running on top of the guest OS kernel 301, some of the user mode processes 320 constitute instances of guest OS applications 302 and/or guest applications 322, each running in its own separate address space. As an example, one of the guest application processes 322 running on top of the guest OS kernel 301 may include ADOBE® READER® from Adobe Systems Inc. of San Jose, Calif. or MICROSOFT® WORD® from Microsoft Corporation of Redmond, Wash. Events (monitored behaviors) of an object that is processed by one of the user mode processes 320 are monitored by a guest agent process 172, which provides metadata to at least one of the hyper-processes 370 and the micro-hypervisor 360 for use in malware detection. Hence, as shown, the object and associated events may be analyzed for the presence of malware; however, it is contemplated that the analytical functionality provided by the different malware detection processes could instead be provided by malware detection modules/drivers (not shown) in the guest OS kernel 301. For such a deployment, a guest OS anomaly may be detected.


III. Virtualization Software Architecture


Referring now to FIG. 3, an exemplary embodiment of the virtualization software architecture 150 of the endpoint device 140₃ with compromised guest OS detection and OS recovery is shown. The virtualization software architecture 150 comprises guest environment 160 and host environment 180, both of which may be configured in accordance with a protection ring architecture as shown. While the protection ring architecture is shown for illustrative purposes, it is contemplated that other architectures that establish hierarchical privilege levels for virtualization software components may be utilized.


A. Guest Environment


As shown, the guest environment 160 comprises the first virtual machine 170, which is adapted to analyze an object 335 and/or events produced during execution of the first virtual machine 170 (hereinafter generally referred to as "data for analysis") for the presence of malware. As shown, the first virtual machine 170 features a guest OS 300 having a guest OS kernel 301 running in the most privileged level (Ring-0 305) along with one or more processes 320, which may include one or more instances of guest OS applications 302 and/or one or more instances of software applications 322 (hereinafter "guest application process(es)"). Running in a lesser privileged level (Ring-3 325), the guest application process(es) 322 may be based on the same software application, different versions of the same software application, or even different software applications, provided that the guest application process(es) 322 are controlled by the same guest OS kernel 301 (e.g., WINDOWS® OS kernel).


It is contemplated that malware detection on the endpoint device 140₃ may be conducted by one or more processes embodied as software components (e.g., guest OS application(s)) running within the first virtual machine 170. These processes include a static analysis process 330, a heuristics process 332 and a dynamic analysis process 334, which collectively operate to detect suspicious and/or malicious behaviors by the object 335 that occur during execution within the first virtual machine 170. Notably, the endpoint device 140₃ may perform (implement) malware detection as background processing (i.e., minor use of endpoint resources), with data processing implemented as its primary processing (e.g., in the foreground, having majority use of endpoint resources).


As used herein, the object 335 may include, for example, a web page, email, email attachment, file or uniform resource locator (URL). Static analysis may conduct a brief examination of characteristics (internal content) of the object 335 to determine whether it is suspicious, while dynamic analysis may analyze behaviors associated with events that occur during virtual execution of the object 335, especially characteristics involving a network adapter such as a physical pass-through network interface card (NIC) (hereinafter "network adapter") 304. A loss of network connectivity can be determined in a number of ways. For instance, the guest agent 172 may initiate keepalive network packets, and the failure to receive responses to these packets may denote loss of network connectivity. Additionally or in the alternative, the virtualization layer may detect that the network adapter is not working based on a lack of network interrupts, or based on statistical registers in the network adapter that identify that the number of bytes sent/received is below a prescribed threshold.
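By way of illustration only (not part of the claimed embodiments), the connectivity-loss checks described above may be sketched as follows; the function name, counters, and threshold values are hypothetical assumptions:

```python
# Illustrative sketch only: infer a loss of external network connectivity
# from missed keepalive responses and from the adapter's statistical
# registers, as described above. All names and thresholds are hypothetical.

def connectivity_lost(keepalives_sent, responses_received,
                      bytes_transferred, byte_threshold=1024,
                      max_missed=3):
    """Return True when the observed adapter state suggests a loss of
    external network connectivity."""
    missed = keepalives_sent - responses_received
    if missed >= max_missed:                # no replies to keepalive packets
        return True
    if bytes_transferred < byte_threshold:  # statistical registers too low
        return True
    return False
```

For example, five keepalives with only one response (`connectivity_lost(5, 1, 50000)`) would be flagged as a loss of connectivity, while normal response and traffic levels would not.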


According to one embodiment of the disclosure, the static analysis process 330 and the heuristics process 332 may conduct a first examination of the object 335 to determine whether any characteristics of the object are suspicious and/or malicious. A finding of “suspicious” denotes that the characteristics signify a first probability range of the analyzed object 335 being malicious while a finding of “malicious” denotes that the characteristics signify a higher, second probability of the analyzed object 335 being malicious.


The static analysis process 330 and the heuristics process 332 may employ statistical analysis techniques, including the use of vulnerability/exploit signatures and heuristics, to perform non-behavioral analysis in order to detect anomalous characteristics (i.e., suspiciousness and/or maliciousness) without execution of the object 335 (i.e., without monitoring run-time behavior). For example, the static analysis process 330 may employ signatures (referred to as vulnerability or exploit "indicators") to match content (e.g., bit patterns) of the object 335 with patterns of the indicators in order to gather information that may be indicative of suspiciousness and/or malware. The heuristics process 332 may apply rules and/or policies to detect anomalous characteristics of the object 335 in order to identify whether the object 335 is suspect and deserving of further analysis or whether it is non-suspect (i.e., benign) and not in need of further analysis. These statistical analysis techniques may produce static analysis results (e.g., identification of communication protocol anomalies and/or suspect source addresses of known malicious servers) that may be provided to a reporting module 336.


More specifically, the static analysis process 330 may be configured to compare a bit pattern of the object 335 content with a “blacklist” of suspicious exploit indicator patterns. For example, a simple indicator check (e.g., hash) against the hashes of the blacklist (i.e., exploit indicators of objects deemed suspicious) may reveal a match, where a score may be subsequently generated (based on the content) by the threat protection component 376 to identify that the object may include malware. In addition to or in the alternative of a blacklist of suspicious objects, bit patterns of the object 335 may be compared with a “whitelist” of permitted bit patterns.
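The simple indicator (hash) check against a blacklist may be sketched as follows; this is an illustrative assumption rather than the disclosed implementation, and the hash algorithm and blacklist contents are hypothetical:

```python
import hashlib

# Illustrative sketch only: hash the object's content and compare it against
# a "blacklist" of exploit indicators; a match marks the object as suspicious.
# The use of SHA-256 and the blacklist entry are hypothetical choices.

BLACKLIST = {hashlib.sha256(b"malicious-pattern").hexdigest()}

def matches_blacklist(content: bytes) -> bool:
    """Return True when the object's content hash appears on the blacklist."""
    return hashlib.sha256(content).hexdigest() in BLACKLIST
```

A match would then be scored by the threat protection component; a whitelist check would work the same way with the membership test inverted.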


The dynamic analysis process 334 may conduct an analysis of the object 335 during its processing, where the guest agent process 172 monitors the run-time behaviors of the object 335 and captures certain types of events that occur during run time. The events are stored within a ring buffer 340 of the guest agent 172 for possible subsequent analysis by the threat protection component 376, as described below. In an embodiment, the dynamic analysis process 334 normally operates concurrently (e.g., at least partially at the same time) with the static analysis process 330 and/or the heuristics process 332. During processing of the object 335, particular events may be hooked to trigger signaling (and the transfer of data) to the host environment 180 for further analysis by the threat protection component 376 and/or master controller component 372.


For instance, the dynamic analysis process 334 may examine whether any behaviors associated with a detected event that occur during processing of the analyzed object 335 are suspicious and/or malicious. One of these detected events may pertain to activities with the network adapter 304 or any activities that are directed to altering a current operating state of the network adapter 304. A finding of “suspicious” denotes that the behaviors signify a first probability range of the analyzed object 335 being associated with malware while a finding of “malicious” denotes that the behaviors signify a higher second probability of the analyzed object 335 being associated with malware. The dynamic analysis results (and/or events caused by the processing of the object 335 and/or object itself) may also be provided to reporting module 336.


Based on the static analysis results and/or the dynamic analysis results, the reporting module 336 may be configured to generate a report (result data in a particular format) or an alert (a message advising of the detection of suspicious or malicious events) for transmission via the network adapter 314 to a remotely located computing device, such as the MDS 140₂ or another type of computing device.


In addition to or in lieu of analysis of the object 335, it is contemplated that the presence of a guest OS anomaly, which may be detected by malware detection processes 302 or malware detection modules/drivers 345 in the guest OS kernel 301, may be reported to the host environment 180 (e.g., guest monitor component 374 and/or threat protection component 376) and/or the reporting module 336.


1. Guest OS


In general, the guest OS 300 manages certain operability of the first virtual machine 170, where some of these operations are directed to the execution and allocation of virtual resources involving network connectivity, memory translation, or the driving of one or more network devices including a network adapter. More specifically, the guest OS 300 may receive an input/output (I/O) request from the object 335 being processed by one or more guest application process(es) 322 and, in some cases, translates the I/O request into instructions. These instructions may be used, at least in part, by virtual system hardware (e.g., vCPU 303), to drive the network adapter 304 for establishing network communications with other network devices. Upon establishing connectivity with the private network 120 and/or the public network 110 of FIG. 1 and in response to detection that the object 335 is malicious, the endpoint device 140₃ may initiate an alert message via the reporting module 336 and the network adapter 304. Alternatively, with network connectivity, the guest OS 300 may receive software updates from administrators via the private network 120 of FIG. 1 or from a third party provider via the public network 110 of FIG. 1.


2. Guest Agent


According to one embodiment of the disclosure, the guest agent 172 is a software component configured to provide the virtualization layer 185 with metadata that may assist in the handling of malware detection. Instrumented into either a guest software application 320 (as shown), a portion of the guest OS 300 or operating as a separate module, the guest agent 172 is configured to provide metadata to the virtualization layer 185 in response to at least one selected event.


Herein, the guest agent 172 comprises one or more ring buffers 340 (e.g., queue, FIFO, shared memory, and/or registers), which record certain events that may be considered of interest for malware detection. Examples of these events may include information associated with a newly created process (e.g., process identifier, time of creation, originating source for creation of the new process, etc.), information associated with an access to a certain restricted port or memory address, or the like. The retrieval of the information associated with the stored events may occur through a "pull" or "push" scheme, where the guest agent 172 may be configured to deliver the metadata periodically or aperiodically (e.g., when the ring buffer 340 exceeds a certain storage level or in response to a request). The request may originate from the threat protection component 376 and be generated by the guest monitor component 374.
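The ring buffer and its "push"/"pull" delivery may be sketched as follows; this is a minimal illustrative sketch, and the class name, capacity, and push level are assumptions not found in the disclosure:

```python
from collections import deque

# Illustrative sketch only: the guest agent records events of interest in a
# bounded ring buffer; metadata is delivered when a storage level is exceeded
# ("push") or when the virtualization layer requests it ("pull").

class EventRingBuffer:
    def __init__(self, capacity=8, push_level=6):
        self.events = deque(maxlen=capacity)  # oldest events are overwritten
        self.push_level = push_level

    def record(self, event):
        """Record one event; return True when the push level is reached."""
        self.events.append(event)
        return len(self.events) >= self.push_level

    def drain(self):
        """Pull scheme: hand all stored events to the requester and reset."""
        events = list(self.events)
        self.events.clear()
        return events
```

The boolean returned by `record` models the "push" trigger, while `drain` models servicing a request that originates from the threat protection component.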


3. Recovery OS


When dormant, the recovery OS 310 is stored as an OS image. When active, the recovery OS 310 manages operability of the second virtual machine 175, most notably its network connectivity. More specifically, the second virtual machine 175 with the recovery OS 310 transitions from its normal dormant (inactive) state into an active state in response to the virtualization layer 185 determining that the guest OS 300 of the first virtual machine 170 has been compromised. Prior to, contemporaneously with, or after activation of the second virtual machine 175, the first virtual machine 170 transitions from an active state to an inactive state when the second virtual machine 175 is deployed as a separate virtual machine. When the recovery OS 310 is substituted for the guest OS 300 within the first virtual machine 170, the second virtual machine 175 is effectively the reconfigured first virtual machine 170.


The above-described transitions are conducted to provide the endpoint device 140₃ with external network connectivity that includes one or more external communication channels (via the network adapter 314) to a remotely located (external) computing device. The external communication channel allows for transmission of an alert message from the reporting module 336 to an external computing device to denote the detection of a malicious attack. Alternatively, with network connectivity, the recovery OS 310 may receive software updates (e.g., patches, an updated version, etc.) from an administrator via the private network 120 of FIG. 1 or from a third party provider via the public network 110 of FIG. 1. The recovery OS 310 may be used for all forms of investigative analysis of the guest OS 300 as well as for remediation of any issues related to the (compromised) guest OS 300. As an example, the network connectivity of the recovery OS 310 may be used by a remote party to perform over-the-network forensic analysis of a host that has been reported as compromised or malfunctioning, or to send the state/image of the compromised VM 170 across the network for remote analysis.


B. Host Environment


As further shown in FIG. 3, the host environment 180 features a protection ring architecture that is arranged with a privilege hierarchy from the most privileged level 350 (Ring-0) to a lesser privilege level 352 (Ring-3). Positioned at the most privileged level 350 (Ring-0), the micro-hypervisor 360 is configured to directly interact with the physical hardware, such as hardware processor 210 or memory 220 of FIG. 2.


Running on top of the micro-hypervisor 360 in Ring-3 352, a plurality of processes being instances of host applications (referred to as "hyper-processes" 370) communicate with the micro-hypervisor 360. Some of these hyper-processes 370 may include the master controller component 372, guest monitor component 374 and threat protection component 376. Each of these hyper-processes 372, 374 and 376 represents a separate software instance with different functionality and runs in a separate address space. As these hyper-processes 370 are isolated from each other (i.e., not in the same binary), inter-process communications between the hyper-processes 370 are handled by the micro-hypervisor 360, but regulated through policy protection by the master controller component 372.


1. Micro-Hypervisor


The micro-hypervisor 360 may be configured as a light-weight hypervisor (e.g., less than 10K lines of code) that operates as a "host" OS kernel. The micro-hypervisor 360 features logic (mechanisms) for controlling operability of the computing device, such as the endpoint device 140₃ as shown. The mechanisms include inter-process communication (IPC) logic 362, resource allocation logic 364 and scheduling logic 366, where all of these mechanisms are based, at least in part, on a plurality of kernel features, namely protection domains, execution contexts, scheduling contexts, portals, and semaphores (hereinafter collectively "kernel features 368"), as partially described in a co-pending U.S. patent application entitled "Microvisor-Based Malware Detection Endpoint Architecture" (U.S. patent application Ser. No. 14/929,821), the entire contents of which are incorporated herein by reference.


More specifically, a first kernel feature is referred to as “protection domains,” which correspond to containers where certain resources for the hyper-processes 370 can be assigned, such as various data structures (e.g., execution contexts, scheduling contexts, etc.). Given that each hyper-process 370 corresponds to a different protection domain, a first hyper-process (e.g., master controller component 372) is spatially isolated from a second (different) hyper-process (e.g., guest monitor component 374). Furthermore, the first hyper-process would be spatially isolated (within the address space) from the first and second virtual machines 170 and 175 as well.


A second kernel feature is referred to as an “execution context,” which features thread level activities within one of the hyper-processes (e.g., master controller component 372). These activities may include, inter alia, (i) contents of hardware registers, (ii) pointers/values on a stack, (iii) a program counter, and/or (iv) allocation of memory via, e.g., memory pages. The execution context is thus a static view of the state of a thread of execution.


Accordingly, the thread executes within a protection domain associated with that hyper-process of which the thread is a part. For the thread to execute on a hardware processor 210, its execution context may be tightly linked to a scheduling context (third kernel feature), which may be configured to provide information for scheduling the execution context for execution on the hardware processor 210. Illustratively, the scheduling context may include a priority and a quantum time for execution of its linked execution context on the hardware processor 210.


Hence, besides the spatial isolation provided by protection domains, the micro-hypervisor 360 enforces temporal separation through the scheduling context, which is used for scheduling the processing of the execution context as described above. Such scheduling by the micro-hypervisor 360 may involve defining which hardware processor may process the execution context (in a multi-processor environment), what priority is assigned to the execution context, and the duration of such execution.


Communications between protection domains are governed by portals, which represent a fourth kernel feature that is relied upon for generation of the IPC logic 362. Each portal represents a dedicated entry point into a corresponding protection domain. As a result, if one protection domain creates the portal, another protection domain may be configured to call the portal and establish a cross-domain communication channel.


Lastly, of the kernel features, semaphores facilitate synchronization between execution contexts on the same or on different hardware processors. The micro-hypervisor 360 uses the semaphores to signal the occurrence of hardware interrupts to the user applications.


The micro-hypervisor 360 utilizes one or more of these kernel features to formulate mechanisms for controlling operability of the endpoint device 200. One of these mechanisms is the IPC logic 362, which supports communications between separate protection domains (e.g., between two different hyper-processes 370). Thus, under the control of the IPC logic 362, in order for a first software component to communicate with another software component, the first software component needs to route a message to the micro-hypervisor 360. In response, the micro-hypervisor 360 switches from a first protection domain (e.g., first hyper-process 372) to a second protection domain (e.g., second hyper-process 374) and copies the message from an address space associated with the first hyper-process 372 to a different address space associated with the second hyper-process 374.
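The copy-based, hypervisor-mediated message passing described above may be sketched as follows; this is an illustrative model only, and the class, domain names, and message contents are hypothetical assumptions:

```python
# Illustrative sketch only: the micro-hypervisor copies a message from the
# sender's address space into the receiver's address space, so the two
# protection domains never share memory directly. Names are hypothetical.

class MicroHypervisorIPC:
    def __init__(self):
        self.address_spaces = {}  # protection domain name -> private memory

    def create_domain(self, name):
        self.address_spaces[name] = {}

    def send(self, src, dst, key, message):
        # `src` identifies the caller's protection domain (unused here).
        # The message is copied, not shared: mutating the sender's buffer
        # afterward cannot affect the receiver's copy.
        self.address_spaces[dst][key] = bytearray(message)
```

Because the receiver holds an independent copy, a compromised sender cannot retroactively alter a message already delivered to another protection domain.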


Another mechanism provided by the micro-hypervisor 360 is resource allocation logic 364. The resource allocation logic 364 enables a first software component to share one or more memory pages with a second software component under the control of the micro-hypervisor 360. Being aware of the location of one or more memory pages, the micro-hypervisor 360 provides the protection domain associated with the second software component access to the memory location(s) associated with the one or more memory pages.


Also, the micro-hypervisor 360 contains scheduling logic 366 that, when invoked, selects the highest-priority scheduling context and dispatches the execution context associated with that scheduling context. As a result, the scheduling logic 366 ensures that, at some point in time, all of the software components can run on the hardware processor 210 as defined by their scheduling contexts. The scheduling logic 366 also ensures that no component can monopolize the hardware processor 210 for longer than defined by its scheduling context.
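The selection step of the scheduling logic may be sketched as follows; a minimal illustration, assuming a scheduling context is reducible to a (priority, quantum, execution context) tuple, which is a simplification of the kernel features described above:

```python
# Illustrative sketch only: pick the runnable scheduling context with the
# highest priority and dispatch its linked execution context for at most its
# quantum, so no component monopolizes the processor. Tuples are assumed.

def dispatch(scheduling_contexts):
    """Each scheduling context is (priority, quantum_ms, execution_context).
    Return the execution context to run and its time quantum."""
    priority, quantum_ms, execution_context = max(
        scheduling_contexts, key=lambda sc: sc[0])
    return execution_context, quantum_ms
```

The bounded quantum returned alongside the execution context models the temporal separation that the micro-hypervisor enforces.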


2. Master Controller


Referring still to FIG. 3, generally operating as a root task, the master controller component 372 is responsible for enforcing policy rules directed to operations of the virtualization software architecture 150. This responsibility is in contrast to the micro-hypervisor 360, which provides mechanisms for inter-process communications and resource allocation, but does not dictate how and when such functions occur.


Herein, the master controller component 372 may be configured with a policy engine 380 to conduct a number of policy decisions, including some or all of the following: (1) memory allocation (e.g., distinct physical address space assigned to different software components); (2) execution time allotment (e.g., scheduling and duration of execution time allotted on a selected process basis); (3) virtual machine creation (e.g., number of VMs, OS type, etc.); (4) inter-process communications (e.g., which processes are permitted to communicate with which processes, etc.); and/or (5) network device reallocation to the second virtual machine 175 with the recovery OS 310 in response to detecting that the current guest OS 300 has been compromised.
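The separation of policy (master controller) from mechanism (micro-hypervisor) can be illustrated by expressing policy decisions as data that is consulted before any mechanism is invoked; the rule names and values below are hypothetical, not taken from the disclosure:

```python
# Illustrative sketch only: policy decisions expressed as a table that the
# master controller consults before acting. Rule names/values are hypothetical.

POLICY = {
    # Which hyper-processes may exchange IPC messages (illustrative pair).
    "ipc_allowed": {("guest_monitor", "threat_protection")},
    # Reassign network devices to the recovery OS upon compromise.
    "reassign_devices_on_compromise": True,
}

def ipc_permitted(src, dst):
    """Policy decision (4): is this inter-process communication allowed?"""
    return (src, dst) in POLICY["ipc_allowed"]

def should_reassign_devices(guest_os_compromised):
    """Policy decision (5): reassign network devices to the recovery OS?"""
    return guest_os_compromised and POLICY["reassign_devices_on_compromise"]
```

The micro-hypervisor would then perform the actual message copy or IOMMU reconfiguration only when the corresponding policy check succeeds.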


Additionally, the master controller component 372 is responsible for the allocation of resources. Initially, the master controller component 372 receives access to most of the physical resources, except for access to security-critical resources that should be driven by high privileged (Ring-0) components, not user space (Ring-3) software components such as the hyper-processes 370. For instance, while precluded from access to the memory management unit (MMU) or the interrupt controller, the master controller component 372 may be configured with OS evaluation logic 385, which is adapted to control the selection of which software components are responsible for driving which network devices. For instance, the master controller component 372 may reconfigure the IOMMU 250 of FIG. 2 so that (i) the vCPU 303 of the first virtual machine 170 is halted, (ii) some or all of the network devices 240 that were communicatively coupled to (and driven by) the guest OS kernel 301 are now under control of the recovery OS 310, including the network adapter 314 assigned to the recovery OS 310, and (iii) the vCPU 313 of the second virtual machine 175 is activated from a previously dormant state, where the operability of the second virtual machine 175 is controlled by the recovery OS 310.


The master controller component 372 is platform agnostic. Thus, the master controller component 372 may be configured to enumerate what hardware is available to a particular process (or software component) and to configure the state of the hardware (e.g., activate, place into sleep state, etc.).


By separating the master controller component 372 from the micro-hypervisor 360, a number of benefits are achieved. One inherent benefit is increased security. When the functionality is placed into a single binary, which is running in host mode, any vulnerability may place the entire computing device at risk. In contrast, each of the software components within the host mode is running in its own separate address space.


3. Guest Monitor


Referring still to FIG. 3, the guest monitor component 374 is an instance of a user space application that is responsible for managing the execution of the first virtual machine 170 and/or the second virtual machine 175. Such management includes operating in concert with the threat protection component 376 to determine whether or not certain events, detected by the guest monitor component 374 during processing of the object 335 within the VM 170, are malicious.


In response to receiving one or more events from the guest agent 172 that are directed to the network adapter 304, the guest monitor component 374 determines whether any events are directed to disabling or disrupting operations of the network adapter 304. Data associated with the events is forwarded to the threat protection component 376. Based on this data, the threat protection component 376 may determine if the events denote that the guest OS 300 is compromised and the events further suggest that malware is within the guest OS 300 and is attempting to disrupt or has disabled external network communications for the endpoint device 1403.


Although the IOMMU 250 of FIG. 2 may be responsible for reassigning control of network devices among different OSes, notably control of the network adapter, according to one embodiment of the disclosure, it is contemplated that the policy engine 380 of the guest monitor component 374 may be configured to handle reassignment of network device controls in addition to containing lateral movement of the malware, such as by halting operability of the compromised guest OS 300 and initiating activation of the second virtual machine 175 with the recovery OS 310. Of course, other components within the virtualization layer 185 may be configured to handle (or assist in) the shift of operability from the first virtual machine 170 with the guest OS 300 to the second virtual machine 175 with the recovery OS 310.


4. Threat Protection Component


As described above and shown in FIG. 3, detection of a suspicious and/or malicious object 335 may be performed by static and dynamic analysis of the object 335 within the first virtual machine 170. Events associated with this processing are monitored and stored by the guest agent process 172. Operating in concert with the guest agent process 172, the threat protection component 376 is responsible for further malware detection on the endpoint device 140₃ based on an analysis of events received from the guest agent process 172 running in the first virtual machine 170. It is contemplated, however, that detection of suspicious/malicious activity may also be conducted completely outside the guest environment 160, such as solely within the threat protection component 376 of the host environment 180. The threat protection component 376 relies on an interaction with the guest agent process 172 when it needs to receive semantic information from inside the guest OS that the host environment 180 could not otherwise obtain itself.


After analysis, the detected events are correlated and classified as benign (i.e., determination of the analyzed object 335 being malicious is less than a first level of probability); suspicious (i.e., determination of the analyzed object 335 being malicious is between the first level and a second level of probability); or malicious (i.e., determination of the analyzed object 335 being malicious is greater than the second level of probability). The correlation and classification operations may be accomplished by a behavioral analysis logic 390 and a classifier 395. The behavioral analysis logic 390 and classifier 395 may cooperate to analyze and classify certain observed behaviors of the object (based on events) as indicative of malware.
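The three-way classification against the two probability levels described above may be sketched as follows; the threshold values are hypothetical assumptions, as the disclosure does not specify numeric levels:

```python
# Illustrative sketch only: scores below a first probability level are
# classified benign, between the two levels suspicious, and above the second
# level malicious. The threshold values 0.3 and 0.7 are hypothetical.

def classify(score, first_level=0.3, second_level=0.7):
    """Classify a correlation score from the behavioral analysis logic."""
    if score < first_level:
        return "benign"
    if score <= second_level:
        return "suspicious"
    return "malicious"
```

In this model, the behavioral analysis logic 390 would supply `score` as correlation information and the classifier 395 would render the decision.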


In particular, the observed run-time behaviors by the guest agent 172 are provided to the behavioral analysis logic 390 as dynamic analysis results. These events may include commands that may be construed as disrupting or disabling operability of the network adapter, which may be hooked (intercepted) for handling by the virtualization layer 185. As a result, the guest monitor component 374 receives data associated with the events from the guest agent 172 and routes the same to the threat protection component 376.


At this time, the static analysis results and dynamic analysis results may be stored in memory 220, along with any additional data from the guest agent 172. These results may be provided via coordinated IPC-based communications to the behavioral analysis logic 390, which may provide correlation information to the classifier 395. Additionally, or in the alternative, the results and/or events may be provided or attempted to be reported via a network device initiated by the guest OS kernel to the MDS 140₂ for correlation. The behavioral analysis logic 390 may be embodied as a rules-based correlation engine illustratively executing as an isolated process (software component) that communicates with the guest environment 160 via the guest monitor component 374.


In an embodiment, the behavioral analysis logic 390 may be configured to operate on correlation rules that define, among other things, patterns (e.g., sequences) of known malicious events (if-then statements with respect to, e.g., attempts by a process to change memory in a certain way that is known to be malicious) and/or known non-malicious events. The events may collectively correlate to malicious behavior. The rules of the behavioral analysis logic 390 may then be correlated against those dynamic analysis results, as well as static analysis results, to generate correlation information pertaining to, e.g., a level of risk or a numerical score used to arrive at a decision of maliciousness.


The classifier 395 may be configured to use the correlation information provided by behavioral analysis logic 390 to render a decision as to whether the object 335 is malicious. Illustratively, the classifier 395 may be configured to classify the correlation information, including monitored behaviors (expected and unexpected/anomalous) and access violations, of the object 335 relative to those of known malware and benign content.


Periodically or aperiodically, rules may be pushed from the MDS 140₂ to the endpoint device 140₃ to update the behavioral analysis logic 390, wherein the rules may be applied to different monitored behaviors. For example, the correlation rules pushed to the behavioral analysis logic 390 may include rules that specify a level of probability of maliciousness, requests to close certain network ports that are ordinarily used by an application program, and/or attempts to disable certain functions performed by the network adapter. Alternatively, the correlation rules may be pulled based on a request from the endpoint device 140₃ to determine whether new rules are available, and in response, the new rules are downloaded.


Illustratively, the behavioral analysis logic 390 and classifier 395 may be implemented as separate modules although, in the alternative, the behavioral analysis logic 390 and classifier 395 may be implemented as a single module disposed over (i.e., running on top of) the micro-hypervisor 360. The behavioral analysis logic 390 may be configured to correlate observed behaviors (e.g., results of static and dynamic analysis) with known malware and/or benign objects (embodied as defined rules) and generate an output (e.g., a level of risk or a numerical score associated with an object) that is provided to and used by the classifier 395 to render a decision of malware based on the risk level or score exceeding a probability threshold. The reporting module 336, which executes as a user mode process in the guest OS 300, is configured to generate an alert for transmission external to the endpoint device 140₃ (e.g., to one or more other endpoint devices, a management appliance, or the MDS 140₂) in accordance with "post-solution" activity.


IV. Compromised Guest OS Kernel Detection and OS Recovery

According to one embodiment of the disclosure, the virtualization layer provides enhanced detection of a compromised software component (e.g., guest OS 300) operating within a virtual machine. The guest OS 300 is considered "compromised" when, due to a malicious attack, the functionality of the guest OS kernel 301 has been altered to disrupt or completely disable external network connectivity for the endpoint device. Also, the guest OS 300 may be considered "compromised" when an attacker has managed to take control of the guest OS kernel 301 and alter its functionality (e.g., by disabling network connectivity, etc.).


After detection, the virtualization layer 185 is configured to halt operability of the compromised (active) guest OS 300 and reconfigure the IOMMU 250 to assign some or all of the network devices, formerly driven by the guest OS 300 of the first virtual machine 170, to now be driven by the recovery OS 310 of the second virtual machine 175. Thereafter, the second virtual machine 175 undergoes a boot process, which initializes this virtual platform and places all of the network devices into a trustworthy state. Now, the external network connectivity for the endpoint device, as driven by the recovery OS 310 of the second virtual machine 175, is in operation. The first virtual machine 170 may undergo a graceful handoff (takeover) that allows the first virtual machine 170 to complete its analysis and to save its state upon such completion, which may be used in forensic analysis to determine when and how the guest OS 300 was compromised.


There may be a variety of techniques for detecting the change in functionality of the guest OS 300 that constitutes an attempted disruption or a disabling of external network connectivity from the endpoint device. In response, the virtualization layer alters an operating state of the second virtual machine 175 with the recovery OS 310.


As shown in FIG. 4A, a first technique involves the OS evaluation logic of the master controller component 372 transmitting a message, destined for the network adapter 304, via the guest agent (not shown) to acquire state information from the guest OS 300 (operations 400-403). The state information may include, but is not limited or restricted to, the current operating state of the network adapter, such as the presence or absence of keepalive network packets, the presence or absence of network interrupts, or information from statistical registers in the network adapter, as described above. Upon receipt of the state information, the master controller component 372 determines, in accordance with the policy rules governing operability of the network adapter 304, whether the guest OS 300 has been compromised (operation 405).
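A policy check over such state information might look like the following sketch. The field names (keepalive_seen, interrupts_seen, tx_packets_delta) and the specific rule are illustrative assumptions; the patent does not prescribe a particular rule encoding.

```python
# Hypothetical policy rule evaluating acquired adapter state information:
# flag the guest OS as likely compromised when keepalive traffic and network
# interrupts are absent and the adapter's transmit counter has stalled.
def guest_os_compromised(state):
    no_keepalive = not state.get("keepalive_seen", False)
    no_interrupts = not state.get("interrupts_seen", False)
    stalled = state.get("tx_packets_delta", 0) == 0  # from statistical registers
    return no_keepalive and no_interrupts and stalled
```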


When the state information indicates that there is a high likelihood that the guest OS 300 has been compromised, the master controller component 372 may be configured to signal the guest monitor component 374 to halt operations of the first virtual machine 170 (operations 410-411). Additionally, the guest monitor component 374 may secure a copy of the actual state of the first virtual machine as a snapshot (operation 412). The master controller component 372, which is responsible for policy decisions as to device resources, may request the micro-hypervisor 360 to reconfigure the IOMMU 250 (operation 415).


Thereafter, the micro-hypervisor 360 reassigns the network devices and the device resources (e.g., device registers, memory registers, etc.) to the recovery OS 310 (operation 420). As described, the recovery OS 310 may be deployed in a different virtual machine than the guest OS 300 or may simply be substituted for the guest OS 300 and corresponding guest application(s). After such reassignment, the virtualization layer (e.g., guest monitor component 374 and/or master controller component 372 and/or micro-hypervisor 360) boots the virtual machine under control of the recovery OS 310 to subsequently establish network connectivity through one or more external communication channels with a computing device remotely located from the endpoint device 1403 (operations 425-426).


Referring now to FIG. 4B, a second technique for detecting that the first guest OS is compromised with subsequent OS recovery is shown. Herein, the master controller component 372 prompts the guest monitor component 374 to obtain state information from the network adapter 304 that drives the physical network adapter (operations 430-432). Responsive to receipt of the state information, the guest monitor component 374 transmits at least a portion of the state information to the threat protection component 376, which analyzes the state information to determine whether the state information suggests that the first guest OS is compromised (operations 435-437).


Upon receipt of the results of the analysis by the threat protection component 376, if the results identify that there is a high likelihood that the guest OS 300 has been compromised, the master controller component 372 may be configured to signal the guest monitor component 374 to halt operations of the first virtual machine and obtain a copy of the actual state of the first virtual machine as a snapshot (operations 440-442). Additionally, the master controller component 372 may request the micro-hypervisor 360 to reconfigure the IOMMU 250 (operation 445).


After receiving a request from the master controller component 372, the micro-hypervisor 360 reconfigures the IOMMU 250, which reassigns the network devices and the device resources (e.g., device registers, memory registers, etc.) to the recovery OS 310 (operation 450). After such reassignment, the virtualization layer boots the virtual machine under control of the recovery OS 310, which subsequently establishes network connectivity through one or more external communication channels with a computing device remotely located from the endpoint device 1403 (operations 455-457).


Referring to FIG. 5, an exemplary embodiment of operations for detecting loss of network connectivity caused by a compromised guest OS and conducting an OS recovery response to re-establish network connectivity is shown. Herein, a first determination is made by the virtualization layer whether external network connectivity for the computing device has been disabled (item 500). This determination may be accomplished by monitoring a state of the network adapter through periodic (heartbeat) messages or by accessing certain statistical registers associated with the network adapter, for example.
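The heartbeat-based determination (item 500) can be sketched as a simple watchdog. The function name, parameters, and the three-missed-intervals threshold are illustrative assumptions; in practice the timestamps would come from a clock source in the virtualization layer.

```python
# Hypothetical heartbeat watchdog: declare external connectivity lost when
# more than `misses` heartbeat intervals elapse without a message.
def connectivity_lost(last_heartbeat, now, interval, misses=3):
    return (now - last_heartbeat) > misses * interval
```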


If external network connectivity for a computing device has been disabled, a second determination may be conducted as to why the external network connectivity has been disabled (item 510). This determination may involve an analysis of one or more events, as captured by the guest agent process, that led up to the loss of external network connectivity, in order to confirm that the external network connectivity was disabled due to operations conducted by the guest OS. Otherwise, if loss of the external network connectivity is due to a hardware failure or activities that are unrelated to the guest OS, the analysis discontinues.
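The second determination (item 510) amounts to classifying captured events by cause. A minimal sketch, assuming hypothetical event records with "source" and "action" fields (these field names are not from the patent):

```python
# Hypothetical cause classification over events captured by the guest agent:
# confirm the loss only when the guest OS itself disabled the adapter;
# hardware failures or unrelated activity end the analysis.
def disabled_by_guest_os(events):
    return any(
        ev.get("source") == "guest_os" and ev.get("action") == "disable_adapter"
        for ev in events
    )
```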


Upon determining that external network connectivity has been disabled due to operations conducted by the guest OS (perhaps after attempts to re-enable the external network connectivity), the virtualization layer concludes that the guest OS is compromised. Hence, state information (data associated with the operating state of the guest OS) may be captured and the operations of the first virtual machine (with the guest OS) are halted (items 520 and 530).


Thereafter, a dormant recovery OS that is resident in a non-transitory storage medium as an OS image may be fetched and installed into a selected virtual machine (item 540). The selected virtual machine may be the first virtual machine (where the recovery OS is substituted for the guest OS) or a second virtual machine different from the first virtual machine. Next, the network device resources (and the network devices that are currently driven by the guest OS kernel of the first virtual machine) are reassigned to the recovery OS (item 550). Thereafter, the (second) virtual machine is booted, which causes the recovery OS to run and configure its network adapter to establish external network connectivity so that the endpoint device may electronically communicate with other computing devices located remotely from the endpoint device (items 560 and 570). This allows for the transmission of reports and/or alert messages over a network, which may identify one or more malicious events detected during virtual processing of an object under test.


In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. For instance, the guest OS and the recovery OS may be deployed on the same virtual machine, where the recovery OS remains dormant as a standby OS unless the guest OS is compromised. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims.

Claims
  • 1. A computing device comprising: one or more hardware processors; and a memory coupled to the one or more hardware processors, the memory comprises one or more software components that, when executed by the one or more hardware processors, operate as (i) a virtualization layer deployed in a host environment of a virtualization software architecture and (ii) a plurality of virtual machines deployed within a guest environment of the virtualization software architecture, the plurality of virtual machines comprises (a) a first virtual machine that is operating under control of a first operating system and including an agent collecting runtime state information of a network adapter and (b) a second virtual machine that is separate from the first virtual machine and is operating under control of a second operating system in response to determining that the first operating system has been compromised, the second virtual machine being configured to drive the network adapter, wherein after receipt of the state information by the virtualization layer, transmitting at least a portion of the state information to a threat protection component being deployed within the virtualization layer, analyzing, by the threat protection component, the state information to determine whether the first operating system is compromised by at least determining whether (i) an external network connection through the network adapter has been disabled or (ii) a kernel of the first operating system is attempting to disable the external network connection through the network adapter, and upon receipt of the results of the analyzing by the threat protection component that the first operating system is compromised, signaling, by the virtualization layer, to halt operations of the first virtual machine, installing, by the virtualization layer, a second operating system image retained within the memory of the computing device into the second virtual machine, reassigning, by the virtualization layer, the network adapter and adapter resources to the second operating system, the second virtual machine configured to drive the network adapter, and booting the second virtual machine subsequent to the reassignment of the network adapter and the adapter resources from the first operating system to the second operating system.
  • 2. The computing device of claim 1, wherein the network adapter is configured to establish an external network connection to another computing device.
  • 3. The computing device of claim 1, wherein the memory comprises software, including the one or more software components that, when executed by the one or more hardware processors, operates as the virtualization software architecture that comprises the guest environment including the first virtual machine and the host environment including the virtualization layer that analyzes data provided from the first virtual machine to determine whether the first operating system has been compromised.
  • 4. The computing device of claim 3, wherein the virtualization layer in the host environment comprises (1) a guest monitor component that determines whether an event, received from a process running on the first virtual machine that is configured to monitor operability of the network adapter, is directed to disabling or disrupting functionality of the network adapter and (2) a threat protection component that determines that the first operating system is compromised if the event is classified as malicious.
  • 5. The computing device of claim 4, wherein an event of the one or more events is classified as malicious upon determining that the event represents that an external network connection via the network adapter has been disabled.
  • 6. The computing device of claim 4, wherein the event is classified as malicious upon determining that a kernel of the first operating system is attempting to disable the external network connection via the network adapter.
  • 7. The computing device of claim 3, wherein the virtualization layer in the host environment comprises a threat protection component that determines that the first operating system is compromised when the one or more events is classified as malicious upon determining that the first operating system is non-functional.
  • 8. The computing device of claim 3, wherein the virtualization layer in the host environment comprises a threat protection component that determines that the first operating system (OS) is compromised when the one or more events is classified as malicious upon determining that a guest OS application of the first operating system is inoperable.
  • 9. The computing device of claim 1, wherein the second virtual machine is configured by removal of a first operating system (OS) kernel and one or more guest OS applications of the first operating system and installation of a second OS kernel and one or more guest OS applications of the second operating system.
  • 10. The computing device of claim 1, wherein the first virtual machine transitions from an active state to an inactive state when the first operating system is determined to be compromised.
  • 11. The computing device of claim 1, wherein the first operating system is a different type of operating system than the second operating system.
  • 12. The computing device of claim 1, wherein the network adapter corresponds to a software-emulated data transfer device.
  • 13. A non-transitory storage medium that includes software that is executable by one or more processors and, upon execution, operates a virtualization software architecture, the non-transitory storage medium comprising: one or more software components that, when executed by the one or more processors, operate as a network adapter; one or more software components that, when executed by the one or more processors, operate as a virtualization layer; one or more software components that, when executed by the one or more processors, operate as a first virtual machine being part of the virtualization software architecture, the first virtual machine operating under control of a first operating system and including an agent collecting runtime state information of a network adapter; and one or more software components that, when executed by the one or more processors, operate as a second virtual machine being part of the virtualization software architecture, the second virtual machine operating under control of a second operating system in response to determining that the first operating system has been compromised in which functionality of the first operating system is determined to have been altered or network connectivity by the first virtual machine has been disabled, wherein after receipt of the state information by the virtualization layer, transmitting at least a portion of the state information to a threat protection component being deployed within the virtualization layer, analyzing, by the threat protection component, the state information to determine whether the first operating system is compromised by at least determining whether (i) an external network connection through the network adapter has been disabled or (ii) a kernel of the first operating system is attempting to disable the external network connection through the network adapter, and upon receipt of the results of the analyzing by the threat protection component that the first operating system is compromised, signaling, by the virtualization layer, to halt operations of the first virtual machine, installing, by the virtualization layer, a second operating system image retained within the memory of the computing device into the second virtual machine, reassigning, by the virtualization layer, the network adapter and adapter resources to the second operating system, the second virtual machine configured to drive the network adapter, and booting the second virtual machine subsequent to the reassignment of the network adapter and the adapter resources from the first operating system to the second operating system.
  • 14. The non-transitory storage medium of claim 13, wherein the virtualization layer analyzes data provided from the first virtual machine to determine whether the first operating system has been compromised.
  • 15. The non-transitory storage medium of claim 14, wherein the virtualization layer determines that the first operating system has been compromised based on a state of functionality of the network adapter in communications with the first operating system of the first virtual machine.
  • 16. The non-transitory storage medium of claim 14, wherein the virtualization layer comprises (1) a guest monitor component that determines whether one or more events, which are received from a process running on the first virtual machine that is configured to monitor operability of a network adapter in communications with the first operating system of the first virtual machine, is malicious as being directed to disabling or disrupting functionality of the network adapter and (2) the threat protection component that determines that the first operating system is compromised if the one or more events are classified as malicious.
  • 17. The non-transitory storage medium of claim 16, wherein the threat protection component classifies the one or more events as malicious upon determining that the one or more events represent that external network connection via the network adapter has been disabled.
  • 18. The non-transitory storage medium of claim 16, wherein the threat protection component classifies the one or more events as malicious upon determining that either (i) the first operating system is non-functional or (ii) an operability of a guest OS application of the first operating system has ceased.
  • 19. The non-transitory storage medium of claim 16, wherein the threat protection component classifies the one or more events as malicious upon determining that a kernel of the first operating system is attempting to disable the external network connection via the network adapter.
  • 20. The non-transitory storage medium of claim 13, wherein the second virtual machine is configured by removal of a first operating system (OS) kernel and one or more guest OS applications of the first operating system and installation of a second OS kernel and one or more guest OS applications of the second operating system.
  • 21. The non-transitory storage medium of claim 13, wherein the first virtual machine is independent from the second virtual machine.
  • 22. The non-transitory storage medium of claim 13, wherein the second virtual machine is a reconfiguration of the first virtual machine.
  • 23. A computerized method for protecting connectivity of a computing device to an external network in response to a virtualization layer of the computing device detecting that a guest operating system of the computing device has been compromised by a potential malicious attack through malware, the method comprising: operating a first virtual machine under control of a first operating system, the first virtual machine in communication with a network adapter and an agent collecting runtime state information of a network adapter; responsive to receipt of the state information by the virtualization layer, transmitting at least a portion of the state information to a threat protection component being deployed within the virtualization layer operating within a host environment of the computing device; analyzing, by the threat protection component, the state information to determine whether the first operating system is compromised by at least determining whether (i) an external network connection through the network adapter has been disabled or (ii) a kernel of the first operating system is attempting to disable the external network connection through the network adapter; and upon receipt of the results of the analyzing by the threat protection component that the first operating system is compromised, signaling, by the virtualization layer, to halt operations of the first virtual machine, installing, by the virtualization layer, a second operating system image retained within memory of the computing device into a second virtual machine, the second virtual machine being separate from the first virtual machine and allocated to a different address space than allocated to the first virtual machine, reassigning, by the virtualization layer, the network adapter and adapter resources to the second operating system, the second virtual machine configured to drive the reassigned network adapter, and booting the second virtual machine subsequent to the reassignment of the network adapter and the adapter resources from the first operating system to the second operating system.
  • 24. The computerized method of claim 23 further comprising: conducting a boot process on the second virtual machine so that external network connectivity, as driven by the second operating system of the second virtual machine, is in operation.
  • 25. The computerized method of claim 23, wherein the virtualization layer determines that the first operating system has been compromised based on a state of functionality of the network adapter in communications with the first operating system of the first virtual machine.
  • 26. The computerized method of claim 23, wherein the virtualization layer comprises (1) a guest monitor component that determines whether one or more events corresponding to the state information being received from the agent configured to monitor operability of the network adapter, are being directed to disabling or disrupting functionality of the network adapter and (2) the threat protection component that determines that the first operating system is compromised if the one or more events are classified as malicious.
  • 27. The computerized method of claim 26, wherein the threat protection component classifies the one or more events as malicious upon determining that the one or more events represent that external network connection via the network adapter has been disabled.
  • 28. The computerized method of claim 27, wherein the threat protection component classifies the one or more events as malicious upon determining that either (i) the first operating system is non-functional or (ii) an operability of a guest operating system (OS) application of the first operating system has ceased.
  • 29. The computerized method of claim 26, wherein the threat protection component classifies the one or more events as malicious upon determining that the kernel of the first operating system is attempting to disable the external network connection via the network adapter.
  • 30. The computerized method of claim 23, wherein the second virtual machine is configured by removal of at least the kernel of the first operating system and installation of a kernel of the second operating system.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is based upon and claims the benefit of priority from U.S. Provisional Patent Application No. 62/187,115 filed Jun. 30, 2015, the entire contents of which are incorporated herein by reference.

US Referenced Citations (883)
Number Name Date Kind
4292580 Ott et al. Sep 1981 A
5175732 Hendel et al. Dec 1992 A
5319776 Hile et al. Jun 1994 A
5440723 Arnold et al. Aug 1995 A
5490249 Miller Feb 1996 A
5657473 Killean et al. Aug 1997 A
5802277 Cowlard Sep 1998 A
5842002 Schnurer et al. Nov 1998 A
5878560 Johnson Mar 1999 A
5960170 Chen et al. Sep 1999 A
5978917 Chi Nov 1999 A
5983348 Ji Nov 1999 A
6013455 Bandman et al. Jan 2000 A
6088803 Tso et al. Jul 2000 A
6092194 Touboul Jul 2000 A
6094677 Capek et al. Jul 2000 A
6108799 Boulay et al. Aug 2000 A
6154844 Touboul et al. Nov 2000 A
6269330 Cidon et al. Jul 2001 B1
6272641 Ji Aug 2001 B1
6279113 Vaidya Aug 2001 B1
6298445 Shostack et al. Oct 2001 B1
6357008 Nachenberg Mar 2002 B1
6424627 Sorhaug et al. Jul 2002 B1
6442696 Wray et al. Aug 2002 B1
6484315 Ziese Nov 2002 B1
6487666 Shanklin et al. Nov 2002 B1
6493756 O'Brien et al. Dec 2002 B1
6550012 Villa et al. Apr 2003 B1
6775657 Baker Aug 2004 B1
6831893 Ben Nun et al. Dec 2004 B1
6832367 Choi et al. Dec 2004 B1
6895550 Kanchirayappa et al. May 2005 B2
6898632 Gordy et al. May 2005 B2
6907396 Muttik et al. Jun 2005 B1
6941348 Petry et al. Sep 2005 B2
6971097 Wallman Nov 2005 B1
6981279 Arnold et al. Dec 2005 B1
7007107 Lychenko et al. Feb 2006 B1
7028179 Anderson et al. Apr 2006 B2
7043757 Hoefelmeyer et al. May 2006 B2
7058791 Hughes et al. Jun 2006 B1
7058822 Edery et al. Jun 2006 B2
7069316 Gryaznov Jun 2006 B1
7080407 Zhao et al. Jul 2006 B1
7080408 Pak et al. Jul 2006 B1
7093002 Wolff et al. Aug 2006 B2
7093239 van der Made Aug 2006 B1
7096498 Judge Aug 2006 B2
7100201 Izatt Aug 2006 B2
7107617 Hursey et al. Sep 2006 B2
7159149 Spiegel et al. Jan 2007 B2
7213260 Judge May 2007 B2
7231667 Jordan Jun 2007 B2
7240364 Branscomb et al. Jul 2007 B1
7240368 Roesch et al. Jul 2007 B1
7243371 Kasper et al. Jul 2007 B1
7249175 Donaldson Jul 2007 B1
7287278 Liang Oct 2007 B2
7308716 Danford et al. Dec 2007 B2
7328453 Merkle, Jr. et al. Feb 2008 B2
7346486 Ivancic et al. Mar 2008 B2
7356736 Natvig Apr 2008 B2
7386888 Liang et al. Jun 2008 B2
7392542 Bucher Jun 2008 B2
7409719 Armstrong et al. Aug 2008 B2
7418729 Szor Aug 2008 B2
7424745 Cheston et al. Sep 2008 B2
7428300 Drew et al. Sep 2008 B1
7441272 Durham et al. Oct 2008 B2
7448084 Apap et al. Nov 2008 B1
7458098 Judge et al. Nov 2008 B2
7464404 Carpenter et al. Dec 2008 B2
7464407 Nakae et al. Dec 2008 B2
7467408 O'Toole, Jr. Dec 2008 B1
7478428 Thomlinson Jan 2009 B1
7480773 Reed Jan 2009 B1
7487543 Arnold et al. Feb 2009 B2
7496960 Chen et al. Feb 2009 B1
7496961 Zimmer et al. Feb 2009 B2
7519990 Xie Apr 2009 B1
7523493 Liang et al. Apr 2009 B2
7530104 Thrower et al. May 2009 B1
7540025 Tzadikario May 2009 B2
7546638 Anderson et al. Jun 2009 B2
7565550 Liang et al. Jul 2009 B2
7568233 Szor et al. Jul 2009 B1
7584455 Ball Sep 2009 B2
7603715 Costa et al. Oct 2009 B2
7607171 Marsden et al. Oct 2009 B1
7639714 Stolfo et al. Dec 2009 B2
7644441 Schmid et al. Jan 2010 B2
7657419 van der Made Feb 2010 B2
7676841 Sobchuk et al. Mar 2010 B2
7698548 Shelest et al. Apr 2010 B2
7707633 Danford et al. Apr 2010 B2
7712136 Sprosts et al. May 2010 B2
7730011 Deninger et al. Jun 2010 B1
7739740 Nachenberg et al. Jun 2010 B1
7779463 Stolfo et al. Aug 2010 B2
7784097 Stolfo et al. Aug 2010 B1
7832008 Kraemer Nov 2010 B1
7836502 Zhao et al. Nov 2010 B1
7849506 Dansey et al. Dec 2010 B1
7854007 Sprosts et al. Dec 2010 B2
7869073 Oshima Jan 2011 B2
7877803 Enstone et al. Jan 2011 B2
7904959 Sidiroglou et al. Mar 2011 B2
7908660 Bahl Mar 2011 B2
7930738 Petersen Apr 2011 B1
7937387 Frazier et al. May 2011 B2
7937761 Bennett May 2011 B1
7949849 Lowe et al. May 2011 B2
7958558 Leake et al. Jun 2011 B1
7996556 Raghavan et al. Aug 2011 B2
7996836 McCorkendale et al. Aug 2011 B1
7996904 Chiueh et al. Aug 2011 B1
7996905 Arnold et al. Aug 2011 B2
8006305 Aziz Aug 2011 B2
8010667 Zhang et al. Aug 2011 B2
8020206 Hubbard et al. Sep 2011 B2
8028338 Schneider et al. Sep 2011 B1
8042184 Batenin Oct 2011 B1
8045094 Teragawa Oct 2011 B2
8045458 Alperovitch et al. Oct 2011 B2
8069484 McMillan et al. Nov 2011 B2
8087086 Lai et al. Dec 2011 B1
8151263 Venkitachalam et al. Apr 2012 B1
8171553 Aziz et al. May 2012 B2
8176049 Deninger et al. May 2012 B2
8176480 Spertus May 2012 B1
8201169 Venkitachalam Jun 2012 B2
8201246 Wu et al. Jun 2012 B1
8204984 Aziz et al. Jun 2012 B1
8214905 Doukhvalov et al. Jul 2012 B1
8220055 Kennedy Jul 2012 B1
8225288 Miller et al. Jul 2012 B2
8225373 Kraemer Jul 2012 B2
8233882 Rogel Jul 2012 B2
8234640 Fitzgerald et al. Jul 2012 B1
8234709 Viljoen et al. Jul 2012 B2
8239944 Nachenberg et al. Aug 2012 B1
8260914 Ranjan Sep 2012 B1
8266091 Gubin et al. Sep 2012 B1
8266395 Li Sep 2012 B2
8271978 Bennett et al. Sep 2012 B2
8286251 Eker et al. Oct 2012 B2
8290912 Searls Oct 2012 B1
8291499 Aziz et al. Oct 2012 B2
8307435 Mann et al. Nov 2012 B1
8307443 Wang et al. Nov 2012 B2
8312545 Tuvell et al. Nov 2012 B2
8321936 Green et al. Nov 2012 B1
8321941 Tuvell et al. Nov 2012 B2
8332571 Edwards, Sr. Dec 2012 B1
8347380 Satish et al. Jan 2013 B1
8353031 Rajan et al. Jan 2013 B1
8365286 Poston Jan 2013 B2
8365297 Parshin et al. Jan 2013 B1
8370938 Daswani et al. Feb 2013 B1
8370939 Zaitsev et al. Feb 2013 B2
8375444 Aziz et al. Feb 2013 B2
8381299 Stolfo et al. Feb 2013 B2
8387046 Montague et al. Feb 2013 B1
8397306 Tormasov Mar 2013 B1
8402529 Green et al. Mar 2013 B1
8418230 Cornelius et al. Apr 2013 B1
8464340 Ahn et al. Jun 2013 B2
8479174 Chiriac Jul 2013 B2
8479276 Vaystikh et al. Jul 2013 B1
8479291 Bodke Jul 2013 B1
8479294 Li et al. Jul 2013 B1
8510827 Leake et al. Aug 2013 B1
8510828 Guo et al. Aug 2013 B1
8510842 Amit et al. Aug 2013 B2
8516478 Edwards et al. Aug 2013 B1
8516590 Ranadive et al. Aug 2013 B1
8516593 Aziz Aug 2013 B2
8522236 Zimmer et al. Aug 2013 B2
8522348 Chen et al. Aug 2013 B2
8528086 Aziz Sep 2013 B1
8533824 Hutton et al. Sep 2013 B2
8539582 Aziz et al. Sep 2013 B1
8549638 Aziz Oct 2013 B2
8555391 Demir et al. Oct 2013 B1
8561177 Aziz et al. Oct 2013 B1
8566476 Shifter et al. Oct 2013 B2
8566946 Aziz et al. Oct 2013 B1
8584094 Dadhia et al. Nov 2013 B2
8584234 Sobel et al. Nov 2013 B1
8584239 Aziz et al. Nov 2013 B2
8595834 Xie et al. Nov 2013 B2
8612659 Serebrin et al. Dec 2013 B1
8627476 Satish et al. Jan 2014 B1
8635696 Aziz Jan 2014 B1
8682054 Xue et al. Mar 2014 B2
8682812 Ranjan Mar 2014 B1
8689333 Aziz Apr 2014 B2
8695096 Zhang Apr 2014 B1
8713631 Pavlyushchik Apr 2014 B1
8713681 Silberman et al. Apr 2014 B2
8726392 McCorkendale et al. May 2014 B1
8739280 Chess et al. May 2014 B2
8756696 Miller Jun 2014 B1
8775715 Tsirkin et al. Jul 2014 B2
8776180 Kumar et al. Jul 2014 B2
8776229 Aziz Jul 2014 B1
8782792 Bodke Jul 2014 B1
8789172 Stolfo et al. Jul 2014 B2
8789178 Kejriwal et al. Jul 2014 B2
8793278 Frazier et al. Jul 2014 B2
8793787 Ismael et al. Jul 2014 B2
8799997 Spiers et al. Aug 2014 B2
8805947 Kuzkin et al. Aug 2014 B1
8806647 Daswani et al. Aug 2014 B1
8832352 Tsirkin et al. Sep 2014 B2
8832829 Manni et al. Sep 2014 B2
8839245 Khajuria et al. Sep 2014 B1
8850060 Beloussov et al. Sep 2014 B1
8850570 Ramzan Sep 2014 B1
8850571 Staniford et al. Sep 2014 B2
8863279 McDougal et al. Oct 2014 B2
8875295 Lutas et al. Oct 2014 B2
8881234 Narasimhan et al. Nov 2014 B2
8881271 Butler, II Nov 2014 B2
8881282 Aziz et al. Nov 2014 B1
8898788 Aziz et al. Nov 2014 B1
8910238 Lukacs et al. Dec 2014 B2
8935779 Manni et al. Jan 2015 B2
8949257 Shifter et al. Feb 2015 B2
8984478 Epstein Mar 2015 B2
8984638 Aziz et al. Mar 2015 B1
8990939 Staniford et al. Mar 2015 B2
8990944 Singh et al. Mar 2015 B1
8997219 Staniford et al. Mar 2015 B2
9003402 Carbone et al. Apr 2015 B1
9009822 Ismael et al. Apr 2015 B1
9009823 Ismael et al. Apr 2015 B1
9027125 Kumar et al. May 2015 B2
9027135 Aziz May 2015 B1
9071638 Aziz et al. Jun 2015 B1
9087199 Sallam Jul 2015 B2
9092616 Kumar et al. Jul 2015 B2
9092625 Kashyap et al. Jul 2015 B1
9104867 Thioux et al. Aug 2015 B1
9106630 Frazier et al. Aug 2015 B2
9106694 Aziz et al. Aug 2015 B2
9117079 Huang et al. Aug 2015 B1
9118715 Staniford et al. Aug 2015 B2
9159035 Ismael et al. Oct 2015 B1
9171160 Vincent et al. Oct 2015 B2
9176843 Ismael et al. Nov 2015 B1
9189627 Islam Nov 2015 B1
9195829 Goradia et al. Nov 2015 B1
9197664 Aziz et al. Nov 2015 B1
9213651 Malyugin et al. Dec 2015 B2
9223972 Vincent et al. Dec 2015 B1
9225740 Ismael et al. Dec 2015 B1
9241010 Bennett et al. Jan 2016 B1
9251343 Vincent et al. Feb 2016 B1
9262635 Paithane et al. Feb 2016 B2
9268936 Butler Feb 2016 B2
9275229 LeMasters Mar 2016 B2
9282109 Aziz et al. Mar 2016 B1
9292686 Ismael et al. Mar 2016 B2
9294501 Mesdaq et al. Mar 2016 B2
9300686 Pidathala et al. Mar 2016 B2
9306960 Aziz Apr 2016 B1
9306974 Aziz et al. Apr 2016 B1
9311479 Manni et al. Apr 2016 B1
9355247 Thioux et al. May 2016 B1
9356944 Aziz May 2016 B1
9363280 Rivlin et al. Jun 2016 B1
9367681 Ismael et al. Jun 2016 B1
9398028 Karandikar et al. Jul 2016 B1
9413781 Cunningham et al. Aug 2016 B2
9426071 Caldejon et al. Aug 2016 B1
9430646 Mushtaq et al. Aug 2016 B1
9432389 Khalid et al. Aug 2016 B1
9438613 Paithane et al. Sep 2016 B1
9438622 Staniford et al. Sep 2016 B1
9438623 Thioux et al. Sep 2016 B1
9459901 Jung et al. Oct 2016 B2
9467460 Otvagin et al. Oct 2016 B1
9483644 Paithane et al. Nov 2016 B1
9495180 Ismael Nov 2016 B2
9497213 Thompson et al. Nov 2016 B2
9507935 Ismael et al. Nov 2016 B2
9516057 Aziz Dec 2016 B2
9519782 Aziz et al. Dec 2016 B2
9536091 Paithane et al. Jan 2017 B2
9537972 Edwards et al. Jan 2017 B1
9560059 Islam Jan 2017 B1
9563488 Fadel et al. Feb 2017 B2
9565202 Kindlund et al. Feb 2017 B1
9591015 Amin et al. Mar 2017 B1
9591020 Aziz Mar 2017 B1
9594904 Jain et al. Mar 2017 B1
9594905 Ismael et al. Mar 2017 B1
9594912 Thioux et al. Mar 2017 B1
9609007 Rivlin et al. Mar 2017 B1
9626509 Khalid et al. Apr 2017 B1
9628498 Aziz et al. Apr 2017 B1
9628507 Haq et al. Apr 2017 B2
9633134 Ross Apr 2017 B2
9635039 Islam et al. Apr 2017 B1
9641546 Manni et al. May 2017 B1
9654485 Neumann May 2017 B1
9661009 Karandikar et al. May 2017 B1
9661018 Aziz May 2017 B1
9674298 Edwards et al. Jun 2017 B1
9680862 Ismael et al. Jun 2017 B2
9690606 Ha et al. Jun 2017 B1
9690933 Singh et al. Jun 2017 B1
9690935 Shiffer et al. Jun 2017 B2
9690936 Malik et al. Jun 2017 B1
9736179 Ismael Aug 2017 B2
9740857 Ismael et al. Aug 2017 B2
9747446 Pidathala et al. Aug 2017 B1
9756074 Aziz et al. Sep 2017 B2
9773112 Rathor et al. Sep 2017 B1
9781144 Otvagin et al. Oct 2017 B1
9787700 Amin et al. Oct 2017 B1
9787706 Otvagin et al. Oct 2017 B1
9792196 Ismael et al. Oct 2017 B1
9824209 Ismael et al. Nov 2017 B1
9824211 Wilson Nov 2017 B2
9824216 Khalid et al. Nov 2017 B1
9825976 Gomez et al. Nov 2017 B1
9825989 Mehra et al. Nov 2017 B1
9838408 Karandikar et al. Dec 2017 B1
9838411 Aziz Dec 2017 B1
9838416 Aziz Dec 2017 B1
9838417 Khalid et al. Dec 2017 B1
9846776 Paithane et al. Dec 2017 B1
9876701 Caldejon et al. Jan 2018 B1
9888016 Amin et al. Feb 2018 B1
9888019 Pidathala et al. Feb 2018 B1
9910988 Vincent et al. Mar 2018 B1
9912644 Cunningham Mar 2018 B2
9912681 Ismael et al. Mar 2018 B1
9912684 Aziz et al. Mar 2018 B1
9912691 Mesdaq et al. Mar 2018 B2
9912698 Thioux et al. Mar 2018 B1
9916440 Paithane et al. Mar 2018 B1
9921978 Chan et al. Mar 2018 B1
9934376 Ismael Apr 2018 B1
9934381 Kindlund et al. Apr 2018 B1
9946568 Ismael et al. Apr 2018 B1
9954890 Staniford et al. Apr 2018 B1
9973531 Thioux May 2018 B1
10002252 Ismael et al. Jun 2018 B2
10019338 Goradia et al. Jul 2018 B1
10019573 Silberman et al. Jul 2018 B2
10025691 Ismael et al. Jul 2018 B1
10025927 Khalid et al. Jul 2018 B1
10027689 Rathor et al. Jul 2018 B1
10027690 Aziz et al. Jul 2018 B2
10027696 Rivlin et al. Jul 2018 B1
10033747 Paithane et al. Jul 2018 B1
10033748 Cunningham et al. Jul 2018 B1
10033753 Islam et al. Jul 2018 B1
10033759 Kabra et al. Jul 2018 B1
10050998 Singh Aug 2018 B1
10068091 Aziz et al. Sep 2018 B1
10075455 Zafar et al. Sep 2018 B2
10083302 Paithane et al. Sep 2018 B1
10084813 Eyada Sep 2018 B2
10089461 Ha et al. Oct 2018 B1
10097573 Aziz Oct 2018 B1
10104102 Neumann Oct 2018 B1
10108446 Steinberg et al. Oct 2018 B1
10121000 Rivlin et al. Nov 2018 B1
10122746 Manni et al. Nov 2018 B1
10133863 Bu et al. Nov 2018 B2
10133866 Kumar et al. Nov 2018 B1
10146810 Shiffer et al. Dec 2018 B2
10148693 Singh et al. Dec 2018 B2
10165000 Aziz et al. Dec 2018 B1
10169585 Pilipenko et al. Jan 2019 B1
10176095 Ferguson et al. Jan 2019 B2
10176321 Abbasi et al. Jan 2019 B2
10181029 Ismael et al. Jan 2019 B1
10191858 Tsirkin Jan 2019 B2
10191861 Steinberg et al. Jan 2019 B1
10192052 Singh et al. Jan 2019 B1
10198574 Thioux et al. Feb 2019 B1
10200384 Mushtaq et al. Feb 2019 B1
10210329 Malik et al. Feb 2019 B1
10216927 Steinberg Feb 2019 B1
10218740 Mesdaq et al. Feb 2019 B1
10242185 Goradia Mar 2019 B1
10726127 Steinberg Jul 2020 B1
20010005889 Albrecht Jun 2001 A1
20010047326 Broadbent et al. Nov 2001 A1
20020013802 Mori Jan 2002 A1
20020018903 Kokubo et al. Feb 2002 A1
20020038430 Edwards et al. Mar 2002 A1
20020091819 Melchione et al. Jul 2002 A1
20020095607 Lin-Hendel Jul 2002 A1
20020116627 Tarbotton et al. Aug 2002 A1
20020144156 Copeland Oct 2002 A1
20020162015 Tang Oct 2002 A1
20020166063 Lachman et al. Nov 2002 A1
20020169952 DiSanto et al. Nov 2002 A1
20020184528 Shevenell et al. Dec 2002 A1
20020188887 Largman et al. Dec 2002 A1
20020194490 Halperin et al. Dec 2002 A1
20030021728 Shame et al. Jan 2003 A1
20030074578 Ford et al. Apr 2003 A1
20030084318 Schertz May 2003 A1
20030101381 Mateev et al. May 2003 A1
20030115483 Liang Jun 2003 A1
20030188190 Aaron et al. Oct 2003 A1
20030191957 Hypponen et al. Oct 2003 A1
20030200460 Morota et al. Oct 2003 A1
20030212902 van der Made Nov 2003 A1
20030229801 Kouznetsov et al. Dec 2003 A1
20030237000 Denton et al. Dec 2003 A1
20040003323 Bennett et al. Jan 2004 A1
20040006473 Mills et al. Jan 2004 A1
20040015712 Szor Jan 2004 A1
20040019832 Arnold et al. Jan 2004 A1
20040025016 Focke et al. Feb 2004 A1
20040047356 Bauer Mar 2004 A1
20040083408 Spiegel et al. Apr 2004 A1
20040088581 Brawn et al. May 2004 A1
20040093513 Cantrell et al. May 2004 A1
20040111531 Staniford et al. Jun 2004 A1
20040117478 Triulzi et al. Jun 2004 A1
20040117624 Brandt et al. Jun 2004 A1
20040128355 Chao et al. Jul 2004 A1
20040165588 Pandya Aug 2004 A1
20040236963 Danford et al. Nov 2004 A1
20040243349 Greifeneder et al. Dec 2004 A1
20040249911 Alkhatib et al. Dec 2004 A1
20040255161 Cavanaugh Dec 2004 A1
20040268147 Wiederin et al. Dec 2004 A1
20050005159 Oliphant Jan 2005 A1
20050021740 Bar et al. Jan 2005 A1
20050033960 Vialen et al. Feb 2005 A1
20050033989 Poletto et al. Feb 2005 A1
20050050148 Mohammadioun et al. Mar 2005 A1
20050086523 Zimmer et al. Apr 2005 A1
20050091513 Mitomo et al. Apr 2005 A1
20050091533 Omote et al. Apr 2005 A1
20050091652 Ross et al. Apr 2005 A1
20050108562 Khazan et al. May 2005 A1
20050114663 Cornell et al. May 2005 A1
20050125195 Brendel Jun 2005 A1
20050149726 Joshi et al. Jul 2005 A1
20050157662 Bingham et al. Jul 2005 A1
20050183143 Anderholm et al. Aug 2005 A1
20050201297 Peikari Sep 2005 A1
20050210533 Copeland et al. Sep 2005 A1
20050238005 Chen et al. Oct 2005 A1
20050240781 Gassoway Oct 2005 A1
20050262562 Gassoway Nov 2005 A1
20050265331 Stolfo Dec 2005 A1
20050283839 Cowburn Dec 2005 A1
20060010495 Cohen et al. Jan 2006 A1
20060015416 Hoffman et al. Jan 2006 A1
20060015715 Anderson Jan 2006 A1
20060015747 Van de Ven Jan 2006 A1
20060021029 Brickell et al. Jan 2006 A1
20060021054 Costa et al. Jan 2006 A1
20060031476 Mathes et al. Feb 2006 A1
20060047665 Neil Mar 2006 A1
20060070130 Costea et al. Mar 2006 A1
20060075252 Kallahalla et al. Apr 2006 A1
20060075496 Carpenter et al. Apr 2006 A1
20060095968 Portolani et al. May 2006 A1
20060101516 Sudaharan et al. May 2006 A1
20060101517 Banzhof et al. May 2006 A1
20060112416 Ohta May 2006 A1
20060117385 Mester et al. Jun 2006 A1
20060123477 Raghavan et al. Jun 2006 A1
20060130060 Anderson et al. Jun 2006 A1
20060143709 Brooks et al. Jun 2006 A1
20060150249 Gassen et al. Jul 2006 A1
20060161983 Cothrell et al. Jul 2006 A1
20060161987 Levy-Yurista Jul 2006 A1
20060161989 Reshef et al. Jul 2006 A1
20060164199 Glide et al. Jul 2006 A1
20060173992 Weber et al. Aug 2006 A1
20060179147 Tran et al. Aug 2006 A1
20060184632 Marino et al. Aug 2006 A1
20060191010 Benjamin Aug 2006 A1
20060221956 Narayan et al. Oct 2006 A1
20060236127 Kurien et al. Oct 2006 A1
20060236393 Kramer et al. Oct 2006 A1
20060242709 Seinfeld et al. Oct 2006 A1
20060248519 Jaeger et al. Nov 2006 A1
20060248528 Oney et al. Nov 2006 A1
20060248582 Panjwani et al. Nov 2006 A1
20060251104 Koga Nov 2006 A1
20060288417 Bookbinder et al. Dec 2006 A1
20070006226 Hendel Jan 2007 A1
20070006288 Mayfield et al. Jan 2007 A1
20070006313 Porras et al. Jan 2007 A1
20070011174 Takaragi et al. Jan 2007 A1
20070016951 Piccard et al. Jan 2007 A1
20070019286 Kikuchi Jan 2007 A1
20070033645 Jones Feb 2007 A1
20070038943 FitzGerald et al. Feb 2007 A1
20070055837 Rajagopal et al. Mar 2007 A1
20070064689 Shin et al. Mar 2007 A1
20070074169 Chess et al. Mar 2007 A1
20070094676 Fresko et al. Apr 2007 A1
20070094730 Bhikkaji et al. Apr 2007 A1
20070101435 Konanka et al. May 2007 A1
20070128855 Cho et al. Jun 2007 A1
20070142030 Sinha et al. Jun 2007 A1
20070143565 Corrigan et al. Jun 2007 A1
20070143827 Nicodemus et al. Jun 2007 A1
20070156895 Vuong Jul 2007 A1
20070157180 Tillmann et al. Jul 2007 A1
20070157306 Elrod et al. Jul 2007 A1
20070168988 Eisner et al. Jul 2007 A1
20070171824 Ruello et al. Jul 2007 A1
20070174915 Gribble et al. Jul 2007 A1
20070192500 Lum Aug 2007 A1
20070192858 Lum Aug 2007 A1
20070198275 Malden et al. Aug 2007 A1
20070208822 Wang et al. Sep 2007 A1
20070220607 Sprosts et al. Sep 2007 A1
20070240218 Tuvell et al. Oct 2007 A1
20070240219 Tuvell et al. Oct 2007 A1
20070240220 Tuvell et al. Oct 2007 A1
20070240222 Tuvell et al. Oct 2007 A1
20070250930 Aziz et al. Oct 2007 A1
20070256132 Oliphant Nov 2007 A2
20070271446 Nakamura Nov 2007 A1
20070300227 Mall et al. Dec 2007 A1
20080005782 Aziz Jan 2008 A1
20080018122 Zierler et al. Jan 2008 A1
20080028463 Dagon et al. Jan 2008 A1
20080040710 Chiriac Feb 2008 A1
20080046781 Childs et al. Feb 2008 A1
20080065854 Schoenberg et al. Mar 2008 A1
20080066179 Liu Mar 2008 A1
20080072326 Danford et al. Mar 2008 A1
20080077793 Tan et al. Mar 2008 A1
20080080518 Hoeflin et al. Apr 2008 A1
20080086720 Lekel Apr 2008 A1
20080098476 Syversen Apr 2008 A1
20080120722 Sima et al. May 2008 A1
20080123676 Cummings et al. May 2008 A1
20080127348 Largman et al. May 2008 A1
20080134178 Fitzgerald et al. Jun 2008 A1
20080134334 Kim et al. Jun 2008 A1
20080141376 Clausen et al. Jun 2008 A1
20080184367 McMillan et al. Jul 2008 A1
20080184373 Traut et al. Jul 2008 A1
20080189787 Arnold et al. Aug 2008 A1
20080201778 Guo et al. Aug 2008 A1
20080209557 Herley et al. Aug 2008 A1
20080215742 Goldszmidt et al. Sep 2008 A1
20080222729 Chen et al. Sep 2008 A1
20080235793 Schunter et al. Sep 2008 A1
20080244569 Challener et al. Oct 2008 A1
20080263665 Ma et al. Oct 2008 A1
20080294808 Mahalingam et al. Nov 2008 A1
20080295172 Bohacek Nov 2008 A1
20080301810 Lehane et al. Dec 2008 A1
20080307524 Singh et al. Dec 2008 A1
20080313738 Enderby Dec 2008 A1
20080320594 Jiang Dec 2008 A1
20090003317 Kasralikar et al. Jan 2009 A1
20090007100 Field et al. Jan 2009 A1
20090013408 Schipka Jan 2009 A1
20090031423 Liu et al. Jan 2009 A1
20090036111 Danford et al. Feb 2009 A1
20090037835 Goldman Feb 2009 A1
20090044024 Oberheide et al. Feb 2009 A1
20090044274 Budko et al. Feb 2009 A1
20090064332 Porras et al. Mar 2009 A1
20090077666 Chen et al. Mar 2009 A1
20090083369 Marmor Mar 2009 A1
20090083855 Apap et al. Mar 2009 A1
20090089860 Forrester et al. Apr 2009 A1
20090089879 Wang et al. Apr 2009 A1
20090094697 Provos et al. Apr 2009 A1
20090106754 Liu et al. Apr 2009 A1
20090113425 Ports et al. Apr 2009 A1
20090125976 Wassermann et al. May 2009 A1
20090126015 Monastyrsky et al. May 2009 A1
20090126016 Sobko et al. May 2009 A1
20090133125 Choi et al. May 2009 A1
20090144823 Lamastra et al. Jun 2009 A1
20090158430 Borders Jun 2009 A1
20090158432 Zheng et al. Jun 2009 A1
20090172661 Zimmer et al. Jul 2009 A1
20090172815 Gu et al. Jul 2009 A1
20090187992 Poston Jul 2009 A1
20090193293 Stolfo et al. Jul 2009 A1
20090198651 Shiffer et al. Aug 2009 A1
20090198670 Shiffer et al. Aug 2009 A1
20090198689 Frazier et al. Aug 2009 A1
20090199274 Frazier et al. Aug 2009 A1
20090199296 Xie et al. Aug 2009 A1
20090204964 Foley et al. Aug 2009 A1
20090228233 Anderson et al. Sep 2009 A1
20090241187 Troyansky Sep 2009 A1
20090241190 Todd et al. Sep 2009 A1
20090254990 McGee Oct 2009 A1
20090265692 Godefroid et al. Oct 2009 A1
20090271867 Zhang Oct 2009 A1
20090276771 Nickolov et al. Nov 2009 A1
20090300415 Zhang et al. Dec 2009 A1
20090300761 Park et al. Dec 2009 A1
20090320011 Chow et al. Dec 2009 A1
20090328185 Berg et al. Dec 2009 A1
20090328221 Blumfield et al. Dec 2009 A1
20100005146 Drako et al. Jan 2010 A1
20100011205 McKenna Jan 2010 A1
20100017546 Poo et al. Jan 2010 A1
20100030996 Butler, II Feb 2010 A1
20100031353 Thomas et al. Feb 2010 A1
20100031360 Seshadri et al. Feb 2010 A1
20100037314 Perdisci et al. Feb 2010 A1
20100043073 Kuwamura Feb 2010 A1
20100054278 Stolfo et al. Mar 2010 A1
20100058474 Hicks Mar 2010 A1
20100064044 Nonoyama Mar 2010 A1
20100077481 Polyakov et al. Mar 2010 A1
20100083376 Pereira et al. Apr 2010 A1
20100100718 Srinivasan Apr 2010 A1
20100115621 Staniford et al. May 2010 A1
20100132038 Zaitsev May 2010 A1
20100154056 Smith et al. Jun 2010 A1
20100180344 Malyshev et al. Jul 2010 A1
20100191888 Serebrin et al. Jul 2010 A1
20100192223 Ismael et al. Jul 2010 A1
20100220863 Dupaquis et al. Sep 2010 A1
20100235647 Buer Sep 2010 A1
20100235831 Dittmer Sep 2010 A1
20100251104 Massand Sep 2010 A1
20100254622 Kamay et al. Oct 2010 A1
20100281102 Chinta et al. Nov 2010 A1
20100281541 Stolfo et al. Nov 2010 A1
20100281542 Stolfo et al. Nov 2010 A1
20100287260 Peterson et al. Nov 2010 A1
20100299665 Adams Nov 2010 A1
20100299754 Amit et al. Nov 2010 A1
20100306173 Frank Dec 2010 A1
20100306560 Bozek et al. Dec 2010 A1
20110004737 Greenebaum Jan 2011 A1
20110004935 Moffie et al. Jan 2011 A1
20110022695 Dalal et al. Jan 2011 A1
20110025504 Lyon et al. Feb 2011 A1
20110041179 Ståhlberg Feb 2011 A1
20110047542 Dang et al. Feb 2011 A1
20110047544 Yehuda et al. Feb 2011 A1
20110047594 Mahaffey et al. Feb 2011 A1
20110047620 Mahaffey et al. Feb 2011 A1
20110055907 Narasimhan et al. Mar 2011 A1
20110060947 Song Mar 2011 A1
20110078794 Manni et al. Mar 2011 A1
20110078797 Beachem Mar 2011 A1
20110082962 Horovitz et al. Apr 2011 A1
20110093951 Aziz Apr 2011 A1
20110099620 Stavrou et al. Apr 2011 A1
20110099633 Aziz Apr 2011 A1
20110099635 Silberman et al. Apr 2011 A1
20110113231 Kaminsky May 2011 A1
20110145918 Jung et al. Jun 2011 A1
20110145920 Mahaffey et al. Jun 2011 A1
20110145934 Abramovici et al. Jun 2011 A1
20110153909 Dong Jun 2011 A1
20110167422 Eom et al. Jul 2011 A1
20110167493 Song et al. Jul 2011 A1
20110167494 Bowen et al. Jul 2011 A1
20110173213 Frazier et al. Jul 2011 A1
20110173460 Ito et al. Jul 2011 A1
20110219449 St. Neitzel et al. Sep 2011 A1
20110219450 McDougal et al. Sep 2011 A1
20110225624 Sawhney et al. Sep 2011 A1
20110225655 Niemela et al. Sep 2011 A1
20110247072 Staniford et al. Oct 2011 A1
20110265182 Peinado et al. Oct 2011 A1
20110289582 Kejriwal et al. Nov 2011 A1
20110296412 Banga et al. Dec 2011 A1
20110296440 Launch et al. Dec 2011 A1
20110299413 Chatwani et al. Dec 2011 A1
20110302587 Nishikawa et al. Dec 2011 A1
20110307954 Melnik et al. Dec 2011 A1
20110307955 Kaplan et al. Dec 2011 A1
20110307956 Yermakov et al. Dec 2011 A1
20110314546 Aziz et al. Dec 2011 A1
20110321040 Sobel et al. Dec 2011 A1
20110321165 Capalik et al. Dec 2011 A1
20110321166 Capalik Dec 2011 A1
20120011508 Ahmad Jan 2012 A1
20120023593 Puder et al. Jan 2012 A1
20120047576 Do et al. Feb 2012 A1
20120054869 Yen et al. Mar 2012 A1
20120066698 Yanoo Mar 2012 A1
20120079596 Thomas et al. Mar 2012 A1
20120084859 Radinsky et al. Apr 2012 A1
20120096553 Srivastava et al. Apr 2012 A1
20120110667 Zubrilin et al. May 2012 A1
20120117652 Manni et al. May 2012 A1
20120121154 Xue et al. May 2012 A1
20120124426 Maybee et al. May 2012 A1
20120131156 Brandt et al. May 2012 A1
20120144489 Jarrett et al. Jun 2012 A1
20120159454 Barham et al. Jun 2012 A1
20120174186 Aziz et al. Jul 2012 A1
20120174196 Bhogavilli et al. Jul 2012 A1
20120174218 McCoy et al. Jul 2012 A1
20120198279 Schroeder Aug 2012 A1
20120198514 McCune et al. Aug 2012 A1
20120210423 Friedrichs et al. Aug 2012 A1
20120216046 McDougal et al. Aug 2012 A1
20120216069 Bensinger Aug 2012 A1
20120222114 Shanbhogue Aug 2012 A1
20120222121 Staniford et al. Aug 2012 A1
20120254993 Sallam Oct 2012 A1
20120254995 Sallam Oct 2012 A1
20120255002 Sallam Oct 2012 A1
20120255003 Sallam Oct 2012 A1
20120255012 Sallam Oct 2012 A1
20120255015 Sahita et al. Oct 2012 A1
20120255016 Sallam Oct 2012 A1
20120255017 Sallam Oct 2012 A1
20120255021 Sallam Oct 2012 A1
20120260304 Morris et al. Oct 2012 A1
20120260342 Dube et al. Oct 2012 A1
20120260345 Quinn et al. Oct 2012 A1
20120265976 Spiers et al. Oct 2012 A1
20120266244 Green et al. Oct 2012 A1
20120278886 Luna Nov 2012 A1
20120291029 Kidambi et al. Nov 2012 A1
20120297057 Ghosh et al. Nov 2012 A1
20120297489 Dequevy Nov 2012 A1
20120311708 Agarwal et al. Dec 2012 A1
20120317566 Santos et al. Dec 2012 A1
20120330801 McDougal et al. Dec 2012 A1
20120331553 Aziz et al. Dec 2012 A1
20130007325 Sahita et al. Jan 2013 A1
20130014259 Gribble et al. Jan 2013 A1
20130036470 Zhu et al. Feb 2013 A1
20130036472 Aziz Feb 2013 A1
20130042153 McNeeney Feb 2013 A1
20130047257 Aziz Feb 2013 A1
20130055256 Banga et al. Feb 2013 A1
20130074185 McDougal et al. Mar 2013 A1
20130086235 Ferris Apr 2013 A1
20130086299 Epstein Apr 2013 A1
20130086684 Mohler Apr 2013 A1
20130091571 Lu Apr 2013 A1
20130097699 Balupari et al. Apr 2013 A1
20130097706 Titonis et al. Apr 2013 A1
20130111587 Goel et al. May 2013 A1
20130111593 Shankar et al. May 2013 A1
20130117741 Prabhakaran et al. May 2013 A1
20130117848 Golshan et al. May 2013 A1
20130117849 Golshan et al. May 2013 A1
20130117852 Stute May 2013 A1
20130117855 Kim et al. May 2013 A1
20130139264 Brinkley et al. May 2013 A1
20130159662 Iyigun et al. Jun 2013 A1
20130160125 Likhachev et al. Jun 2013 A1
20130160127 Jeong et al. Jun 2013 A1
20130160130 Mendelev et al. Jun 2013 A1
20130160131 Madou et al. Jun 2013 A1
20130167236 Sick Jun 2013 A1
20130174214 Duncan Jul 2013 A1
20130179971 Harrison Jul 2013 A1
20130185789 Hagiwara et al. Jul 2013 A1
20130185795 Winn et al. Jul 2013 A1
20130185798 Saunders et al. Jul 2013 A1
20130191915 Antonakakis et al. Jul 2013 A1
20130191924 Tedesco et al. Jul 2013 A1
20130196649 Paddon et al. Aug 2013 A1
20130227680 Pavlyushchik Aug 2013 A1
20130227691 Aziz et al. Aug 2013 A1
20130246370 Bartram et al. Sep 2013 A1
20130247186 LeMasters Sep 2013 A1
20130263260 Mahaffey et al. Oct 2013 A1
20130282776 Durrant et al. Oct 2013 A1
20130283370 Vipat et al. Oct 2013 A1
20130291109 Staniford et al. Oct 2013 A1
20130298243 Kumar et al. Nov 2013 A1
20130298244 Kumar et al. Nov 2013 A1
20130312098 Kapoor et al. Nov 2013 A1
20130312099 Edwards et al. Nov 2013 A1
20130318038 Shiffer et al. Nov 2013 A1
20130318073 Shiffer et al. Nov 2013 A1
20130325791 Shiffer et al. Dec 2013 A1
20130325792 Shiffer et al. Dec 2013 A1
20130325871 Shiffer et al. Dec 2013 A1
20130325872 Shiffer et al. Dec 2013 A1
20130326625 Anderson et al. Dec 2013 A1
20130333033 Khesin Dec 2013 A1
20130333040 Diehl et al. Dec 2013 A1
20130347131 Mooring et al. Dec 2013 A1
20140006734 Li et al. Jan 2014 A1
20140007097 Chin Jan 2014 A1
20140019963 Deng et al. Jan 2014 A1
20140032875 Butler Jan 2014 A1
20140053260 Gupta et al. Feb 2014 A1
20140053261 Gupta et al. Feb 2014 A1
20140075522 Paris et al. Mar 2014 A1
20140089266 Une Mar 2014 A1
20140096134 Barak Apr 2014 A1
20140115578 Cooper et al. Apr 2014 A1
20140115652 Kapoor et al. Apr 2014 A1
20140130158 Wang et al. May 2014 A1
20140137180 Lukacs et al. May 2014 A1
20140157407 Krishnan et al. Jun 2014 A1
20140169762 Ryu Jun 2014 A1
20140179360 Jackson et al. Jun 2014 A1
20140181131 Ross Jun 2014 A1
20140189687 Jung et al. Jul 2014 A1
20140189866 Shiffer et al. Jul 2014 A1
20140189882 Jung et al. Jul 2014 A1
20140208123 Roth et al. Jul 2014 A1
20140230024 Uehara et al. Aug 2014 A1
20140237600 Silberman et al. Aug 2014 A1
20140245423 Lee Aug 2014 A1
20140259169 Harrison Sep 2014 A1
20140280245 Wilson Sep 2014 A1
20140283037 Sikorski et al. Sep 2014 A1
20140283063 Thompson et al. Sep 2014 A1
20140289105 Sirota et al. Sep 2014 A1
20140304819 Ignatchenko et al. Oct 2014 A1
20140310810 Brueckner Oct 2014 A1
20140325644 Oberg et al. Oct 2014 A1
20140328204 Klotsche et al. Nov 2014 A1
20140337836 Ismael Nov 2014 A1
20140344926 Cunningham et al. Nov 2014 A1
20140351810 Pratt et al. Nov 2014 A1
20140351935 Shao et al. Nov 2014 A1
20140359239 Hiremane et al. Dec 2014 A1
20140380473 Bu et al. Dec 2014 A1
20140380474 Paithane et al. Dec 2014 A1
20150007312 Pidathala et al. Jan 2015 A1
20150013008 Lukacs et al. Jan 2015 A1
20150095661 Sell et al. Apr 2015 A1
20150096022 Vincent et al. Apr 2015 A1
20150096023 Mesdaq et al. Apr 2015 A1
20150096024 Haq et al. Apr 2015 A1
20150096025 Ismael Apr 2015 A1
20150121135 Pape Apr 2015 A1
20150128266 Tosa May 2015 A1
20150172300 Cochenour Jun 2015 A1
20150180886 Staniford et al. Jun 2015 A1
20150186645 Aziz et al. Jul 2015 A1
20150199513 Ismael et al. Jul 2015 A1
20150199514 Tosa et al. Jul 2015 A1
20150199531 Ismael et al. Jul 2015 A1
20150199532 Ismael et al. Jul 2015 A1
20150220735 Paithane et al. Aug 2015 A1
20150244732 Golshan et al. Aug 2015 A1
20150304716 Sanchez-Leighton Oct 2015 A1
20150317495 Rodgers et al. Nov 2015 A1
20150318986 Novak et al. Nov 2015 A1
20150372980 Eyada Dec 2015 A1
20160004869 Ismael et al. Jan 2016 A1
20160006756 Ismael et al. Jan 2016 A1
20160044000 Cunningham Feb 2016 A1
20160048680 Lutas et al. Feb 2016 A1
20160057123 Jiang et al. Feb 2016 A1
20160103698 Yang Apr 2016 A1
20160127393 Aziz et al. May 2016 A1
20160191547 Zafar et al. Jun 2016 A1
20160191550 Ismael et al. Jun 2016 A1
20160261612 Mesdaq et al. Sep 2016 A1
20160285914 Singh et al. Sep 2016 A1
20160301703 Aziz Oct 2016 A1
20160335110 Paithane et al. Nov 2016 A1
20160371105 Sieffert et al. Dec 2016 A1
20170083703 Abbasi et al. Mar 2017 A1
20170124326 Wailly et al. May 2017 A1
20170213030 Mooring et al. Jul 2017 A1
20170344496 Chen et al. Nov 2017 A1
20170364677 Soman Dec 2017 A1
20180013770 Ismael Jan 2018 A1
20180048660 Paithane et al. Feb 2018 A1
20180121316 Ismael et al. May 2018 A1
20180288077 Siddiqui et al. Oct 2018 A1
Foreign Referenced Citations (16)
Number Date Country
2439806 Jan 2008 GB
2490431 Oct 2012 GB
02006928 Jan 2002 WO
0223805 Mar 2002 WO
2007117636 Oct 2007 WO
2008041950 Apr 2008 WO
2011084431 Jul 2011 WO
2011112348 Sep 2011 WO
2012075336 Jun 2012 WO
2012145066 Oct 2012 WO
2012135192 Oct 2012 WO
2012154664 Nov 2012 WO
2012177464 Dec 2012 WO
2013067505 May 2013 WO
2013091221 Jun 2013 WO
2014004747 Jan 2014 WO
Non-Patent Literature Citations (81)
Entry
Garfinkel, Tal, and Mendel Rosenblum. "A Virtual Machine Introspection Based Architecture for Intrusion Detection." NDSS, vol. 3, 2003.
U.S. Appl. No. 15/199,876, filed Jun. 30, 2016 Non-Final Office Action dated Jan. 10, 2018.
U.S. Appl. No. 15/199,871, filed Jun. 30, 2016 Advisory Action dated Nov. 8, 2018.
U.S. Appl. No. 15/199,871, filed Jun. 30, 2016 Notice of Allowance dated Mar. 20, 2019.
U.S. Appl. No. 15/199,876, filed Jun. 30, 2016 Non-Final Office Action dated Mar. 28, 2019.
Venezia, Paul , “NetDetector Captures Intrusions”, InfoWorld Issue 27, (“Venezia”), (Jul. 14, 2003).
Vladimir Getov: “Security as a Service in Smart Clouds—Opportunities and Concerns”, Computer Software and Applications Conference (COMPSAC), 2012 IEEE 36th Annual, IEEE, Jul. 16, 2012 (Jul. 16, 2012).
Wahid et al., Characterising the Evolution in Scanning Activity of Suspicious Hosts, Oct. 2009, Third International conference on Network and System Security, pp. 344-350.
Whyte, et al., "DNS-Based Detection of Scanning Worms in an Enterprise Network", Proceedings of the 12th Annual Network and Distributed System Security Symposium, (Feb. 2005), 15 pages.
Williamson, Matthew M., “Throttling Viruses: Restricting Propagation to Defeat Malicious Mobile Code”, ACSAC conference, Las Vegas, NV, USA, (Dec. 2002), pp. 1-9.
Yuhei Kawakoya et al: "Memory behavior-based automatic malware unpacking in stealth debugging environment", Malicious and Unwanted Software (Malware), 2010 5th International Conference on, IEEE, Piscataway, NJ, USA, Oct. 19, 2010, pp. 39-46, XP031833827, ISBN: 978-1-4244-9353-1.
Zhang et al., The Effects of Threading, Infection Time, and Multiple-Attacker Collaboration on Malware Propagation, Sep. 2009, IEEE 28th International Symposium on Reliable Distributed Systems, pp. 73-82.
U.S. Appl. No. 15/197,634, filed Jun. 29, 2016 Notice of Allowance dated Apr. 18, 2018.
U.S. Appl. No. 15/199,871, filed Jun. 30, 2016 Final Office Action dated Aug. 16, 2018.
U.S. Appl. No. 15/199,871, filed Jun. 30, 2016 Non-Final Office Action dated Apr. 9, 2018.
U.S. Appl. No. 15/199,876, filed Jun. 30, 2016 Final Office Action dated Jul. 5, 2018.
U.S. Appl. No. 15/199,882, filed Jun. 30, 2016 Advisory Action dated Nov. 8, 2018.
U.S. Appl. No. 15/199,882, filed Jun. 30, 2016 Final Office Action dated Aug. 31, 2018.
U.S. Appl. No. 15/199,882, filed Jun. 30, 2016 Non-Final Office Action dated Apr. 5, 2018.
U.S. Appl. No. 15/199,882, filed Jun. 30, 2016 Non-Final Office Action dated Dec. 20, 2018.
U.S. Appl. No. 15/199,871, filed Jun. 30, 2016.
U.S. Appl. No. 15/199,873, filed Jun. 30, 2016.
U.S. Appl. No. 15/199,876, filed Jun. 30, 2016.
U.S. Appl. No. 15/199,882, filed Jun. 30, 2016.
"Mining Specification of Malicious Behavior", Jha et al., UCSB, Sep. 2007, https://www.cs.ucsb.edu/~chris/research/doc/esec07_mining.pdf.
“Network Security: NetDetector—Network Intrusion Forensic System (NIFS) Whitepaper”, (“NetDetector Whitepaper”), (2003).
"When Virtual is Better Than Real", IEEE Xplore Digital Library, available at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=990073, (Dec. 7, 2013).
Abdullah, et al., Visualizing Network Data for Intrusion Detection, 2005 IEEE Workshop on Information Assurance and Security, pp. 100-108.
Adetoye, Adedayo , et al., “Network Intrusion Detection & Response System”, (“Adetoye”), (Sep. 2003).
Apostolopoulos, George; Hassapis, Constantinos; "V-eM: A Cluster of Virtual Machines for Robust, Detailed, and High-Performance Network Emulation", 14th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, Sep. 11-14, 2006, pp. 117-126.
Aura, Tuomas, “Scanning electronic documents for personally identifiable information”, Proceedings of the 5th ACM workshop on Privacy in electronic society. ACM, 2006.
Baecher, "The Nepenthes Platform: An Efficient Approach to Collect Malware", Springer-Verlag Berlin Heidelberg, (2006), pp. 165-184.
Bayer, et al., “Dynamic Analysis of Malicious Code”, J Comput Virol, Springer-Verlag, France., (2006), pp. 67-77.
Boubalos, Chris , “extracting syslog data out of raw pcap dumps, seclists.org, Honeypots mailing list archives”, available at http://seclists.org/honeypots/2003/q2/319 (“Boubalos”), (Jun. 5, 2003).
Chaudet, C., et al., "Optimal Positioning of Active and Passive Monitoring Devices", International Conference on Emerging Networking Experiments and Technologies, Proceedings of the 2005 ACM Conference on Emerging Network Experiment and Technology, CoNEXT '05, Toulouse, France, (Oct. 2005), pp. 71-82.
Chen, P. M. and Noble, B. D., “When Virtual is Better Than Real, Department of Electrical Engineering and Computer Science”, University of Michigan (“Chen”) (2001).
Cisco “Intrusion Prevention for the Cisco ASA 5500-x Series” Data Sheet (2012).
Cohen, M.I. , “PyFlag—An advanced network forensic framework”, Digital investigation 5, Elsevier, (2008), pp. S112-S120.
Costa, M. , et al., “Vigilante: End-to-End Containment of Internet Worms”, SOSP '05, Association for Computing Machinery, Inc., Brighton U.K., (Oct. 23-26, 2005).
Didier Stevens, “Malicious PDF Documents Explained”, Security & Privacy, IEEE, IEEE Service Center, Los Alamitos, CA, US, vol. 9, No. 1, Jan. 1, 2011, pp. 80-82, XP011329453, ISSN: 1540-7993, DOI: 10.1109/MSP.2011.14.
Distler, “Malware Analysis: An Introduction”, SANS Institute InfoSec Reading Room, SANS Institute, (2007).
Dunlap, George W., et al., "ReVirt: Enabling Intrusion Analysis through Virtual-Machine Logging and Replay", Proceedings of the 5th Symposium on Operating Systems Design and Implementation, USENIX Association, ("Dunlap"), (Dec. 9, 2002).
FireEye Malware Analysis & Exchange Network, Malware Protection System, FireEye Inc., 2010.
FireEye Malware Analysis, Modern Malware Forensics, FireEye Inc., 2010.
FireEye v.6.0 Security Target, pp. 1-35, Version 1.1, FireEye Inc., May 2011.
Goel, et al., Reconstructing System State for Intrusion Analysis, Apr. 2008 SIGOPS Operating Systems Review, vol. 42 Issue 3, pp. 21-28.
Gregg Keizer: "Microsoft's HoneyMonkeys Show Patching Windows Works", Aug. 8, 2005, XP055143386, Retrieved from the Internet: URL:http://www.informationweek.com/microsofts-honeymonkeys-show-patching-windows-works/d/d-id/1035069? [retrieved on Jun. 1, 2016].
Heng Yin et al, Panorama: Capturing System-Wide Information Flow for Malware Detection and Analysis, Research Showcase © CMU, Carnegie Mellon University, 2007.
Hiroshi Shinotsuka, Malware Authors Using New Techniques to Evade Automated Threat Analysis Systems, Oct. 26, 2012, http://www.symantec.com/connect/blogs/, pp. 1-4.
Idika et al., A-Survey-of-Malware-Detection-Techniques, Feb. 2, 2007, Department of Computer Science, Purdue University.
Isohara, Takamasa, Keisuke Takemori, and Ayumu Kubota. "Kernel-based behavior analysis for android malware detection." Computational Intelligence and Security (CIS), 2011 Seventh International Conference on. IEEE, 2011.
Kaeo, Merike , “Designing Network Security”, (“Kaeo”), (Nov. 2003).
Kevin A Roundy et al: “Hybrid Analysis and Control of Malware”, Sep. 15, 2010, Recent Advances in Intrusion Detection, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 317-338, XP019150454 ISBN:978-3-642-15511-6.
Khaled Salah et al: “Using Cloud Computing to Implement a Security Overlay Network”, Security & Privacy, IEEE, IEEE Service Center, Los Alamitos, CA, US, vol. 11, No. 1, Jan. 1, 2013 (Jan. 1, 2013).
Kim, H. , et al., “Autograph: Toward Automated, Distributed Worm Signature Detection”, Proceedings of the 13th Usenix Security Symposium (Security 2004), San Diego, (Aug. 2004), pp. 271-286.
King, Samuel T., et al., “Operating System Support for Virtual Machines”, (“King”), (2003).
Kreibich, C., et al., "Honeycomb: Creating Intrusion Detection Signatures Using Honeypots", 2nd Workshop on Hot Topics in Networks (HotNets-II), Boston, USA, (2003).
Kristoff, J. , “Botnets, Detection and Mitigation: DNS-Based Techniques”, NU Security Day, (2005), 23 pages.
Lastline Labs, The Threat of Evasive Malware, Feb. 25, 2013, Lastline Labs, pp. 1-8.
Li et al., A VMM-Based System Call Interposition Framework for Program Monitoring, Dec. 2010, IEEE 16th International Conference on Parallel and Distributed Systems, pp. 706-711.
Lindorfer, Martina, Clemens Kolbitsch, and Paolo Milani Comparetti. “Detecting environment-sensitive malware.” Recent Advances in Intrusion Detection. Springer Berlin Heidelberg, 2011.
Marchette, David J., “Computer Intrusion Detection and Network Monitoring: a Statistical Viewpoint”, (“Marchette”), (2001).
Moore, D. , et al., “Internet Quarantine: Requirements for Containing Self-Propagating Code”, INFOCOM, vol. 3, (Mar. 30-Apr. 3, 2003), pp. 1901-1910.
Morales, Jose A., et al., "Analyzing and exploiting network behaviors of malware", Security and Privacy in Communication Networks, Springer Berlin Heidelberg, 2010, pp. 20-34.
Mori, Detecting Unknown Computer Viruses, 2004, Springer-Verlag Berlin Heidelberg.
Natvig, Kurt , “SANDBOXII: Internet”, Virus Bulletin Conference, (“Natvig”), (Sep. 2002).
NetBIOS Working Group. Protocol Standard for a NetBIOS Service on a TCP/UDP transport: Concepts and Methods. STD 19, RFC 1001, Mar. 1987.
Newsome, J. , et al., “Dynamic Taint Analysis for Automatic Detection, Analysis, and Signature Generation of Exploits on Commodity Software”, In Proceedings of the 12th Annual Network and Distributed System Security, Symposium (NDSS '05), (Feb. 2005).
Nojiri, D. , et al., “Cooperation Response Strategies for Large Scale Attack Mitigation”, DARPA Information Survivability Conference and Exposition, vol. 1, (Apr. 22-24, 2003), pp. 293-302.
Oberheide et al., "CloudAV: N-Version Antivirus in the Network Cloud", 17th USENIX Security Symposium (USENIX Security '08), Jul. 28-Aug. 1, 2008, San Jose, CA.
Reiner Sailer, Enriquillo Valdez, Trent Jaeger, Ronald Perez, Leendert van Doorn, John Linwood Griffin, Stefan Berger, "sHype: Secure Hypervisor Approach to Trusted Virtualized Systems" (Feb. 2, 2005) ("Sailer").
Silicon Defense, “Worm Containment in the Internal Network”, (Mar. 2003), pp. 1-25.
Singh, S. , et al., “Automated Worm Fingerprinting”, Proceedings of the ACM/USENIX Symposium on Operating System Design and Implementation, San Francisco, California, (Dec. 2004).
Thomas H. Ptacek, and Timothy N. Newsham , “Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection”, Secure Networks, (“Ptacek”), (Jan. 1998).
U.S. Appl. No. 15/199,876, filed Jun. 30, 2016 Notice of Allowance dated Sep. 9, 2019.
U.S. Appl. No. 15/199,879, filed Jun. 30, 2016 Non-Final Office Action dated Apr. 27, 2018.
U.S. Appl. No. 15/199,879, filed Jun. 30, 2016 Notice of Allowance dated Oct. 4, 2018.
U.S. Appl. No. 15/199,882, filed Jun. 30, 2016 Advisory Action dated Sep. 30, 2019.
U.S. Appl. No. 15/199,882, filed Jun. 30, 2016 Final Office Action dated Jun. 11, 2019.
U.S. Appl. No. 15/199,882, filed Jun. 30, 2016 Non-Final Office Action dated Nov. 1, 2019.
U.S. Appl. No. 15/199,882, filed Jun. 30, 2016 Notice of Allowance dated Mar. 19, 2020.
Provisional Applications (1)
Number Date Country
62187115 Jun 2015 US