System and method of threat detection under hypervisor control

Information

  • Patent Grant
  • Patent Number
    10,033,759
  • Date Filed
    Wednesday, June 29, 2016
  • Date Issued
    Tuesday, July 24, 2018
Abstract
A computing device is described that comprises one or more hardware processors and a memory communicatively coupled to the one or more hardware processors. The memory comprises software that, when executed by the processors, operates as (i) a virtual machine and (ii) a hypervisor. The virtual machine includes a guest kernel that facilitates communications between a guest application being processed within the virtual machine and one or more virtual resources. The hypervisor configures a portion of the guest kernel to intercept a system call from the guest application and redirect information associated with the system call to the hypervisor. The hypervisor enables logic within the guest kernel to analyze information associated with the system call to determine whether the system call is associated with a malicious attack in response to the system call being initiated during a memory page execution cycle. Alternatively, the hypervisor operates to obfuscate interception of the system call in response to the system call being initiated during a memory page read cycle.
Description
FIELD

Embodiments of the disclosure generally relate to the field of malware detection (e.g., exploit detection) through the hooking of system calls under hypervisor control.


GENERAL BACKGROUND

In general, virtualization is a technique that provides an ability to host two or more operating systems concurrently on the same computing device. Currently, virtualization architectures feature hardware virtualization capabilities that operate in accordance with two processor modes: host mode and guest mode. A virtual machine runs in guest mode, where a processor of the computing device switches from host mode to guest mode for execution of software components associated with the virtual machine (VM entry). Similarly, the processor may switch from guest mode to host mode when operations of the virtual machine are paused or stopped (VM exit).


A virtual machine (VM) is a software abstraction that operates like a physical (real) computing device having a particular guest operating system (OS). Each VM includes an operating system (OS) kernel (sometimes referred to as a “guest kernel”) that operates in a most privileged guest mode (guest kernel mode, ring-0). A guest software application executes in a lesser privileged operating mode (guest user mode, ring-3).


As the guest software application executes and requires certain resources, the guest software application accesses an Application Programming Interface (API), which invokes a system call function (sometimes referred to as a “syscall”) operating within the guest kernel. In response to the syscall, the guest kernel operates as a service provider by facilitating communications between the guest software application and one or more resources associated with the syscall. Examples of the resources may include, but are not limited or restricted to, virtual resources, such as a particular virtual driver or certain virtual system hardware implemented by software components in a host space (e.g., virtual central processing unit “vCPU” or virtual disk) that are configured to operate in a similar manner as corresponding physical components (e.g., physical CPU or hard disk) or directly mapped to a physical resource. As a result, the guest system software, when executed, controls execution and allocation of virtual resources so that the VM operates in a manner consistent with operations of the physical computing device.


With the emergence of hardware support for full virtualization in an increased number of hardware processor architectures, new virtualization (software) architectures have emerged. One such virtualization architecture involves adding a software abstraction layer, sometimes referred to as a “virtualization layer,” between the physical hardware and the virtual machine. The virtualization layer runs in host mode. It consists of an OS kernel (sometimes referred to as a “hypervisor”) and multiple host applications that operate under the control of the OS kernel.


Conventionally, the detection of an exploit usually involves monitoring the usage of System APIs accessed by the guest software application operating within the virtual machine. This is achieved by intercepting (sometimes referred to as “hooking”) an API call in the guest user mode prior to entry into the guest kernel mode. The analysis as to whether the API call is associated with a malicious attack may be conducted in the guest user mode, and thereafter, control may be transferred back to service the API function being called. However, there are exploits designed to detect an API hook and, in response, attempt to bypass the API hook by advancing a memory pointer a few bytes past the “hooked” instruction associated with the API call. The bypass attempt, referred to as “hook hopping,” is an attempt by the exploit to execute the original API function.


Previously, to avoid hook hopping, efforts have been made to migrate exploit detection, one type of malware detection, into the guest kernel through a technique known as “kernel hooking”. Kernel hooking is the process of modifying the guest OS kernel to alter its behavior or capture certain events. Previously, security vendors relied on kernel hooking to implement antivirus services, protecting the OS and its applications by intercepting and blocking potentially malicious actions or processes. However, in an attempt to strengthen protection of the guest kernel and combat the increased presence of rootkit attacks, some software companies modified their operating systems to monitor kernel code as well as system resources used by the guest kernel and initiate an automatic shutdown of the computing device upon detecting unauthorized kernel patching. One example of this modification is PatchGuard™ for Microsoft® Windows® operating systems.


While the prevention of kernel patching may have reduced some rootkit infections, it precludes security providers from providing more robust protection against malicious attacks to the guest kernel by handling exploit detection checks in the guest kernel.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the disclosure are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:



FIG. 1 is an exemplary block diagram of a network featuring a computing device that supports virtualization and is configured with a hypervisor-controlled threat detection system.



FIG. 2 is an exemplary block diagram of a logical representation of an endpoint device of FIG. 1 that is implemented with the hypervisor-controlled threat detection system.



FIG. 3 is an exemplary embodiment of the software virtualization architecture of the endpoint device of FIG. 2 with the hypervisor-controlled threat detection system.



FIG. 4 is an exemplary block diagram of hypervisor-controlled syscall hooking conducted by the hypervisor-controlled threat detection system.



FIG. 5 is an exemplary block diagram of hypervisor-controlled syscall breakpoint handling by the hypervisor-controlled threat detection system.



FIG. 6 is an exemplary block diagram of exploit detection and reporting by the hypervisor-controlled threat detection system.



FIG. 7 is a flowchart of exemplary operations by the hypervisor-controlled threat detection system of FIGS. 4-6.





DETAILED DESCRIPTION

Various embodiments of the disclosure relate to logic operating in cooperation with a hypervisor to intercept and analyze a system call within a guest operating system (OS) kernel when the computing device is operating in a first (selected) operating state and to permit apparently uninterrupted access to a memory page normally accessed in response to that system call when the computing device is operating in a second (selected) operating state. The first selected operating state may be a guest “execute” cycle (e.g., instruction decode) and the second selected operating state may be a guest “read” cycle (e.g., instruction fetch).


More specifically, embodiments of the disclosure are directed to a hypervisor-controlled threat detection system that is deployed within a software virtualization architecture. The hypervisor-controlled threat detection system is configured, during run-time (e.g., a guest “execute” cycle) of a virtual machine operating in guest mode, to intercept a particular system call (sometimes referred to as a “hooked syscall”) within a guest operating system (OS) kernel (hereinafter “guest kernel”) and provide control to a hypervisor operating in host mode. According to one embodiment of the disclosure, the hooked syscall may be intercepted by inserting a breakpoint (e.g., a single byte instruction such as a HALT instruction) as the first instruction in code associated with the hooked syscall. The single byte instruction ensures that the instruction pointers for all threads during multi-thread operations being conducted on the computing device (e.g., endpoint device) will either be before or after the embedded instruction, not in the middle of the instruction, as may occur with a multi-byte instruction. The code is maintained within a guest memory page, which may be referenced by use of a pointer corresponding to the hooked syscall that is placed within an entry of a service dispatch (syscall) table. The service dispatch table resides in the guest kernel.
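
By way of illustration only, the hook-installation step described above may be sketched in C as follows. The structure, the guest-memory access helpers, and the placement of the 0xF4 (HLT) opcode are assumptions made for this example and are not drawn from any particular embodiment described herein.

    #include <stdint.h>

    #define X86_OP_HLT 0xF4  /* single-byte HALT opcode used as the breakpoint */

    /* Hypothetical record kept for each hooked syscall. */
    struct hooked_syscall {
        uint64_t entry_gva;  /* guest virtual address of the syscall's first instruction */
        uint8_t  orig_byte;  /* original first byte, preserved for later read emulation */
        int      installed;
    };

    /* Hypothetical helpers that read/write one byte of guest memory. */
    extern uint8_t guest_read_byte(uint64_t gva);
    extern void    guest_write_byte(uint64_t gva, uint8_t value);

    /* Install the breakpoint: save the original first byte of the syscall code
     * referenced by the service dispatch (syscall) table entry, then overwrite
     * it with the single-byte HLT so any invocation traps to the hypervisor. */
    static void install_syscall_hook(struct hooked_syscall *hook, uint64_t syscall_entry_gva)
    {
        hook->entry_gva = syscall_entry_gva;
        hook->orig_byte = guest_read_byte(syscall_entry_gva);
        guest_write_byte(syscall_entry_gva, X86_OP_HLT);
        hook->installed = 1;
    }

Because the breakpoint occupies a single byte, any concurrently running thread observes an instruction boundary either before or after it, never within it, which is the property relied upon above.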


In response to an Application Programming Interface (API) call that invokes the hooked syscall, the breakpoint (e.g., HALT instruction) causes a trap to the hypervisor at a desired address (or the hyper-process component operating in conjunction with the hypervisor). Thereafter, in response to receipt of the trap during run-time (e.g., guest “execute” cycle), the hypervisor subsequently diverts control to exploit detection logic residing within the guest kernel. This redirection maintains the “context” of the application executing in the guest user mode, enabling the exploit detection logic to access the metadata that may include some or all of the context. The context is the original state of the application prior to the trap and is preserved during re-direction to the exploit detection logic. Some of the context may be accessed dependent on the exploit detection intelligence. The exploit detection logic analyzes metadata, inclusive of at least some of the context information associated with the hooked syscall, to determine if the computing device may be subject to a malicious attack (e.g., exploit attack, malware attack, etc.).


However, during a second guest (read) cycle and in response to a read access that invokes the hooked syscall, the hypervisor does not divert control to the exploit detection logic. Rather, the hypervisor emulates a read access to the memory page in an unaltered state. This may be accomplished by the hypervisor returning an original first instruction of the hooked syscall that is overwritten by the breakpoint. Hence, during a (guest) read cycle, any guest application or guest OS functionality is unable to detect the presence of the breakpoint (e.g., HALT instruction) in code within the guest kernel since the control flow of the syscall during the (guest) read cycle continues as expected.
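
A minimal sketch of this read-side obfuscation follows, under the assumption that the hypervisor intercepts guest read accesses to the hooked memory page and substitutes the preserved byte; the helper names are hypothetical.

    #include <stddef.h>
    #include <stdint.h>

    struct hooked_syscall {
        uint64_t entry_gva;  /* guest virtual address of the hooked first instruction */
        uint8_t  orig_byte;  /* byte that the breakpoint replaced */
    };

    /* Hypothetical helper that copies guest memory into a host-side buffer. */
    extern void guest_read_bytes(uint64_t gva, uint8_t *dst, size_t len);

    /* Emulate a guest read of [gva, gva+len): copy the memory as the guest sees
     * it, then patch the preserved original byte back over the breakpoint so the
     * read access observes the memory page in an apparently unaltered state. */
    static void emulate_hooked_read(const struct hooked_syscall *hook,
                                    uint64_t gva, uint8_t *dst, size_t len)
    {
        guest_read_bytes(gva, dst, len);
        if (hook->entry_gva >= gva && hook->entry_gva < gva + len)
            dst[hook->entry_gva - gva] = hook->orig_byte;
    }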


The “hooked” syscalls may be selected in accordance with exploit attack patterns that have been previously detected or syscalls that are anticipated to be accessed during a malicious attack. It is contemplated that the selection of the hooked syscalls may be dynamically set or may be static in nature. When dynamically set, the hooked syscalls may be tailored to monitor syscalls that may be more frequently experienced by the computing device.


I. Terminology

In the following description, certain terminology is used to describe features of the invention. For example, in certain situations, the terms “component” and “logic” are representative of hardware, firmware or software that is configured to perform one or more functions. As hardware, a component (or logic) may include circuitry having data processing or storage functionality. Examples of such circuitry may include, but are not limited or restricted to a hardware processor (e.g., microprocessor with one or more processor cores, a digital signal processor, a programmable gate array, a microcontroller, an application specific integrated circuit “ASIC”, etc.), a semiconductor memory, or combinatorial elements.


A component (or logic) may be software in the form of a process or one or more software modules, such as executable code in the form of an executable application, an API, a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, object code, a shared library/dynamic load library, or one or more instructions. These software modules may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); or persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device. As firmware, the executable code may be stored in persistent storage. Upon execution of an instance of a system component or a software module, a “process” performs operations as coded by the software component.


The term “object” generally refers to a collection of data, whether in transit (e.g., over a network) or at rest (e.g., stored), often having a logical structure or organization that enables it to be categorized for purposes of analysis for a presence of an exploit and/or malware. During analysis, for example, the object may exhibit certain expected characteristics (e.g., expected internal content such as bit patterns, data structures, etc.) and, during processing, a set of expected behaviors. The object may also exhibit unexpected characteristics and a set of unexpected behaviors that may offer evidence of the presence of malware and potentially allow the object to be classified as part of a malicious attack.


Examples of objects may include one or more flows or a self-contained element within a flow itself. A “flow” generally refers to related packets that are received, transmitted, or exchanged within a communication session. For convenience, a packet is broadly referred to as a series of bits or bytes having a prescribed format, which may, according to one embodiment, include packets, frames, or cells. Further, an “object” may also refer to individual or a number of packets carrying related payloads, e.g., a single webpage received over a network. Moreover, an object may be a file retrieved from a storage location over an interconnect.


As a self-contained element, the object may be an executable (e.g., an application, program, segment of code, dynamically link library “DLL”, etc.) or a non-executable. Examples of non-executables may include a document (e.g., a Portable Document Format “PDF” document, Microsoft® Office® document, Microsoft® Excel® spreadsheet, etc.), an electronic mail (email), downloaded web page, or the like.


The term “event” should be generally construed as an activity that is conducted by a software component process performed by the computing device. An event may cause an undesired action to occur, such as overwriting a buffer, disabling a certain protective feature in the guest environment, or a guest OS anomaly such as a guest OS kernel trying to execute from a user page.


According to one embodiment, the term “malware” may be construed broadly as any code or activity that initiates a malicious attack and/or operations associated with anomalous or unwanted behavior. For instance, malware may correspond to a type of malicious computer code that executes an exploit to take advantage of a vulnerability, for example, to harm or co-opt operation of a network device or misappropriate, modify or delete data. Malware may also correspond to an exploit, namely information (e.g., executable code, data, command(s), etc.) that attempts to take advantage of a vulnerability in software and/or an action by a person gaining unauthorized access to one or more areas of a network device to cause the network device to experience undesirable or anomalous behaviors. The undesirable or anomalous behaviors may include a communication-based anomaly or an execution-based anomaly, which, for example, could (1) alter the functionality of a network device executing application software in an atypical manner (e.g., a file is opened by a first process where the file is configured to be opened by a second process and not the first process); (2) alter the functionality of the network device executing that application software without any malicious intent; and/or (3) provide unwanted functionality which may be generally acceptable in another context.


The term “computing device” should be generally construed as electronics with the data processing capability and/or a capability of connecting to any type of network, such as a public network (e.g., Internet), a private network (e.g., a wireless data telecommunication network, a local area network “LAN”, etc.), or a combination of networks. Examples of a computing device may include, but are not limited or restricted to, the following: an endpoint device (e.g., a laptop, a smartphone, a tablet, a desktop computer, a netbook, a medical device, or any general-purpose or special-purpose, user-controlled electronic device configured to support virtualization); a server; a mainframe; a router; or a security appliance that includes any system or subsystem configured to perform functions associated with malware detection and may be communicatively coupled to a network to intercept data routed to or from an endpoint device.


The term “message” generally refers to information transmitted in a prescribed format, where each message may be in the form of one or more packets or frames, a Hypertext Transfer Protocol (HTTP) based transmission, or any other series of bits having the prescribed format. For instance, a message may include an electronic message such as an electronic mail (email) message; a text message in accordance with a SMS-based or non-SMS based format; an instant message in accordance with Session Initiation Protocol (SIP); or a series of bits in accordance with another messaging protocol exchanged between software components or processes associated with these software components.


The term “trap,” often also known as an exception or a fault, typically refers to a type of interrupt caused by an exceptional condition (e.g., a breakpoint), resulting in a switch in control to the operating system to allow it to perform an action before returning control to the originating process. In some contexts, the trap refers specifically to an interrupt intended to initiate a context switch to a monitor program.


The term “interconnect” may be construed as a physical or logical communication path between two or more computing devices. For instance, the communication path may include wired and/or wireless transmission mediums. Examples of wired and/or wireless transmission mediums may include electrical wiring, optical fiber, cable, bus trace, a radio unit that supports radio frequency (RF) signaling, or any other wired/wireless signal transfer mechanism.


The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware. Also, the term “agent” should be interpreted as a software component that instantiates a process running in a virtual machine. The agent may be instrumented into part of an operating system (e.g., guest OS) or part of an application (e.g., guest software application). The agent is configured to provide metadata to a portion of the virtualization layer, namely software that virtualizes certain functionality supported by the computing device.


Lastly, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. Therefore, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.


II. General Network Architecture

Referring to FIG. 1, an exemplary block diagram of a network 100 is illustrated, which features a computing device that supports virtualization and is configured with a hypervisor-controlled threat detection system. The network 100 may be organized as a plurality of networks, such as a public network 110 and/or a private network 120 (e.g., an organization or enterprise network). According to this embodiment of network 100, the public network 110 and the private network 120 are communicatively coupled via network interconnects 130 and intermediary computing devices 1401, such as network switches, routers and/or one or more malware detection system (MDS) appliances (e.g., intermediary computing device 1402) as described in co-pending U.S. Patent Application entitled “Virtual System and Method For Securing External Network Connectivity” (U.S. Patent Application No. 62/187,108), the entire contents of which are incorporated herein by reference. The network interconnects 130 and the intermediary computing devices 1401 and/or 1402, inter alia, provide connectivity between the private network 120 and a computing device 1403, which may be operating as an endpoint device for example. According to one embodiment, the hypervisor-controlled threat detection system (described below) may be deployed within the computing device 1403. Alternatively, the hypervisor-controlled threat detection system may be deployed within the intermediary computing device 1401 as a component of the firewall, within the intermediary computing device 1402 operating as a security appliance, or as part of cloud services 150.


The computing devices 140i (i=1, 2, 3) illustratively communicate by exchanging messages (e.g., packets or data in a prescribed format) according to a predefined set of protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP). However, it should be noted that other protocols, such as the HyperText Transfer Protocol Secure (HTTPS) for example, may be used with the inventive aspects described herein. In the case of private network 120, the intermediary computing device 1401 may include a firewall or other computing device configured to limit or block certain network traffic in an attempt to protect the endpoint devices 1403 from unauthorized users.


III. General Endpoint Architecture

Referring now to FIG. 2, an exemplary block diagram of a logical representation of the endpoint device 1403 implemented with the hypervisor-controlled threat detection system is shown. Herein, the endpoint device 1403 illustratively includes one or more hardware processors 210, a memory 220, one or more network interfaces (referred to as “network interface(s)”) 230, and one or more network devices (referred to as “network device(s)”) 240 connected by a system interconnect 250, such as a bus. These components are at least partially encased in a housing 200, which is made entirely or partially of a rigid material (e.g., hardened plastic, metal, glass, composite, or any combination thereof) that protects these components from atmospheric conditions.


The hardware processor 210 is a multipurpose, programmable device that accepts digital data as input, processes the input data according to instructions stored in its memory, and provides results as output. One example of the hardware processor 210 may include an Intel® central processing unit (CPU) with an x86 instruction set architecture. Alternatively, the hardware processor 210 may include another type of CPU, a digital signal processor (DSP), an ASIC, a field programmable gate array (FPGA), or the like.


According to one implementation, the hardware processor 210 may include one or more control registers 212, including a “CR3” control register in accordance with x86 processor architectures. Herein, the control register 212 may be context-switched between host mode and guest mode, where a “guest read cycle” corresponds to a read operation that occurs when the hardware processor 210 is in guest mode and the “guest execute cycle” corresponds to an execute operation that occurs when the hardware processor 210 is in guest mode. Hence, when the hardware processor 210 is executing in guest mode, a pointer value within the control register 212 identifies an address location for guest memory page tables, namely memory page tables associated with a currently running process that is under control of the guest OS (e.g., WINDOWS®-based process). The address location may be for syscall function calls via the System Services Dispatch Table.
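
The role of the guest CR3 value in associating a trap with the currently running guest process can be illustrated with the following hedged sketch; the vmcs_read_guest_cr3 helper and the process table are assumptions made for this example, not elements of the described embodiments.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical accessor for the guest CR3 value saved at VM exit. */
    extern uint64_t vmcs_read_guest_cr3(void);

    /* Hypothetical table of guest processes reported by the guest agent. */
    struct guest_process {
        uint64_t    cr3;   /* root of this process's guest memory page tables */
        const char *name;  /* name of the monitored process */
    };
    extern struct guest_process g_processes[];
    extern unsigned             g_process_count;

    /* On a VM exit, the captured guest CR3 identifies which process's page
     * tables were active and therefore which guest process issued the trap. */
    static const struct guest_process *current_guest_process(void)
    {
        uint64_t cr3 = vmcs_read_guest_cr3();
        for (unsigned i = 0; i < g_process_count; i++) {
            if (g_processes[i].cr3 == cr3)
                return &g_processes[i];
        }
        return NULL;  /* unknown or unmonitored process */
    }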


The network device(s) 240 may include various input/output (I/O) or peripheral devices, such as a storage device for example. One type of storage device may include a solid state drive (SSD) embodied as a flash storage device or other non-volatile, solid-state electronic device (e.g., drives based on storage class memory components). Another type of storage device may include a hard disk drive (HDD). Each network interface 230 may include a modem or one or more network ports containing the mechanical, electrical and/or signaling circuitry needed to connect the endpoint device 1403 to the private network 120 to thereby facilitate communications over the network 110. To that end, the network interface(s) 230 may be configured to transmit and/or receive messages using a variety of communication protocols including, inter alia, TCP/IP and HTTPS.


The memory 220 may include a plurality of locations that are addressable by the hardware processor 210 and the network interface(s) 230 for storing software (including software applications) and data structures associated with such software. Examples of the stored software include guest software 260 and host software 270, as shown in FIG. 2. Collectively, these components may be operable within one or more virtual machines that are generated and configured by virtual machine (VM) configuration logic 280.


In general, the guest software 260 includes at least one guest application 310, a guest agent 320, and a guest OS 330, all of which may be operating as part of a virtual machine in a guest environment. Resident in memory 220 and executed by a virtual processor “vCPU” (which is, in effect, executed by processor(s) 210), the guest OS 330 functionally organizes the endpoint device 1403 by, inter alia, invoking operations in support of guest applications 310 executing on the endpoint device 1403. An exemplary guest OS 330 may include, but is not limited or restricted to, the following: (1) a version of a WINDOWS® series of operating system; (2) a version of a MAC OS® or an IOS® series of operating system; (3) a version of a LINUX® operating system; or (4) a version of an ANDROID® operating system, among others.


As described below in detail, the guest OS 330 further includes a guest OS kernel 335 that operates in cooperation with instance(s) of one or more guest applications 310, perhaps running in their own separate guest address spaces, and/or one or more instances of a guest agent 320 that may be instrumented as part of or for operation in conjunction with the guest OS 330 or as one of the separate guest applications 310. The guest agent 320 is adapted to monitor what processes are running in the guest user mode and provide information for selecting the “hooked” syscalls, namely the syscalls that are to be intercepted and monitored by the hypervisor. Additionally, the guest agent 320 is adapted to receive from the guest OS kernel 335 information associated with detected events that suggest a malicious attack, such as the presence of an exploit or malware in the computing device 1403. Examples of these guest applications 310 may include a web browser (e.g., EXPLORER®, CHROME®, FIREFOX®, etc.), a document application such as a Portable Document Format (PDF) reader (e.g., ADOBE® READER®), or a data processing application such as the MICROSOFT® WORD® program, each of which may process objects from untrusted source(s).


Herein, the host software 270 may include hyper-process components 360, namely instances of user-space applications operating as user-level virtual machine monitors and in cooperation with a hypervisor 370 (described below). When executed, the hyper-process components 360 produce processes running in the host user mode. The hyper-process components 360 may be isolated from each other and run in separate (host) address spaces. In communication with a hypervisor 370 (described below), the hyper-process components 360 are responsible for controlling operability of the endpoint device 1403, including policy and resource allocation decisions, maintaining logs of monitored events for subsequent analysis, managing virtual machine (VM) execution, and managing exploit detection and classification. The management of exploit detection and classification may be accomplished through certain hyper-process components 360 (e.g., guest monitor and threat protection logic described below).


The hypervisor 370 is disposed or layered beneath the guest OS kernel 335 of the endpoint device 1403 and is the only component that runs in the most privileged processor mode (host mode, ring-0). In some embodiments, as part of a trusted computing base of most components in the computing device, the hypervisor 370 is configured as a light-weight hypervisor (e.g., less than 10K lines of code), thereby avoiding inclusion of potentially exploitable x86 virtualization code.


The hypervisor 370 generally operates as the host kernel that is devoid of policy enforcement; rather, the hypervisor 370 provides a plurality of mechanisms that may be used by the hyper-processes, namely processes produced by execution of certain host software 270. These mechanisms may be configured to control communications between separate protection domains (e.g., between two different hyper-processes), coordinate thread processing within the hyper-processes and virtual CPU (vCPU) processing within a virtual machine, delegate and/or revoke hardware resources, and control interrupt delivery and Direct Memory Access (DMA), as described below.


IV. Virtualization Architecture—Threat Detection System

Referring now to FIG. 3, an exemplary embodiment of the software virtualization architecture of the endpoint device 1403 with the hypervisor-controlled threat detection system is shown. The software virtualization architecture includes a guest environment 300 and a host environment 350, both of which may be configured in accordance with a protection ring architecture as shown. While the protection ring architecture is shown for illustrative purposes, it is contemplated that other architectures that establish hierarchical privilege levels for virtualized software components may be utilized.


A. Guest Environment


The guest environment 300 includes a virtual machine 305, which includes software components that are configured to detect the presence of an exploit based on intercepting and analyzing syscalls within the guest kernel 335 while complying with OS functionality that precludes kernel hooking (e.g., PatchGuard® for Windows® OSes). Herein, as shown, the virtual machine 305 includes the guest OS 330 that features the guest OS kernel 335 running in the most privileged level (guest kernel mode, Ring-0 306) along with one or more instances of guest OS applications 310 running in a lesser privileged level (guest user mode, Ring-3 307).


1. Guest OS


In general, the guest OS 330 manages certain operability of the virtual machine 305, where some of these operations are directed to the execution and allocation of virtual resources, which may involve network connectivity, memory translation, interrupt service delivery and handling. More specifically, the guest OS 330 may receive electrical signaling from a process associated with the guest application 310 that requests a service from the guest kernel 335. The service may include hardware-related services (for example, accessing a hard disk drive), creation and execution of a new process, or the like. Application Programming Interfaces (APIs) provide an interface between the guest application 310 and the guest OS 330.


As an illustrative example, the request may include an API call, where a request for a service from the guest kernel 335 is routed to system service dispatch logic 340. Operating as a syscall table, the service dispatch logic 340 is configured to invoke a particular syscall that provides the service requested by the API call. For this embodiment, the service dispatch logic 340 includes a plurality of entries, where each entry includes an address pointer to a memory location (e.g., a memory page or a portion of a memory page) having code instructions for the corresponding syscall. It is contemplated that different syscall pointers may be directed to different memory pages, although two or more syscall pointers may be directed to the same memory page as the syscall pointers may be directed to different address regions of the same memory page. The portion of the memory page associated with the syscall includes instructions that may be used, at least in part, to cause virtual system hardware (e.g., vCPU 342) to provide the requested services, such as access to a virtual hard disk, for example.
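
Conceptually, the service dispatch logic 340 behaves like an indexed table of pointers to syscall code. The following sketch is illustrative only; the type and field names are hypothetical and do not reproduce any particular guest OS's System Services Dispatch Table layout.

    #include <stddef.h>

    /* Each entry points at the first instruction of a syscall's code, which may
     * reside anywhere within a guest memory page. */
    typedef long (*syscall_fn)(void *args);

    struct service_dispatch_table {
        syscall_fn *entries;  /* entries[n] -> code for syscall number n */
        size_t      count;
    };

    /* Route an API call's service request to the syscall function it names. */
    static long dispatch_syscall(const struct service_dispatch_table *sdt,
                                 size_t syscall_number, void *args)
    {
        if (syscall_number >= sdt->count || sdt->entries[syscall_number] == NULL)
            return -1;  /* unknown service */
        return sdt->entries[syscall_number](args);
    }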


According to one embodiment of the disclosure, a breakpoint (e.g., HALT instruction) is inserted as a first instruction at a starting address location for one or more selected syscalls (sometimes referred to as “hooked syscalls”). This starting address location is referenced by a syscall pointer associated with the entry in the service dispatch logic 340 (syscall table) pertaining to the hooked syscall. In response to the API call invoking a particular syscall, the breakpoint causes a trap to the hypervisor 370 (described below) at a desired address. Thereafter, in response to receipt of the trap during run-time, the hypervisor 370 subsequently diverts control flow to exploit detection logic 345 within the guest kernel 335. The exploit detection logic 345 is configured to analyze metadata or other context information associated with the hooked syscall to detect a malicious attack.


2. Guest Agent


According to one embodiment of the disclosure, the guest agent 320 is a software component configured to provide the exploit detection logic 345 and/or a portion of the hypervisor logic (e.g., threat protection logic 390) with metadata that assists in determining the monitored syscalls. Instrumented in guest OS 330 or operating as a separate software component in the guest user mode 307 as shown, the guest agent 320 may be configured to provide metadata to the exploit detection logic 345 that identifies what syscalls are to be “hooked” for the current process running in the virtual machine 305. Stated differently, for a process being launched, the guest agent 320 identifies which syscalls are selected for insertion of a breakpoint (e.g., special instruction such as a HALT instruction) to redirect control flow to the exploit detection logic 345 operating within the guest kernel 335.


Herein, the guest agent 320 includes one or more ring buffers 325 (e.g., queue, FIFO, buffer, shared memory, and/or registers) to record the metadata used for syscall selection as well as information associated with certain characteristics determined by the exploit detection logic 345 that signify the object is an exploit or includes malware. Examples of these characteristics may include information associated with (i) the type of syscall, (ii) what thread or process invoked the syscall, or (iii) whether the API call that invoked the syscall came from an allocated memory region, or the like. The recovery of the information associated with the characteristics may occur through a “pull” or “push” recovery scheme, where the guest agent 320 may be configured to download the characteristics periodically or aperiodically (e.g., when the ring buffer 325 exceeds a certain storage level or in response to a request from the exploit detection logic 345).
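
For illustration, the ring buffer 325 may be pictured as a small fixed-size circular queue of characteristic records, as in the hedged C sketch below; the record layout, field names and sizes are assumptions made for this example.

    #include <stdint.h>

    #define RING_SLOTS 256

    /* Hypothetical record of characteristics produced by the exploit detection logic. */
    struct syscall_event {
        uint32_t syscall_id;      /* which hooked syscall was invoked */
        uint32_t process_id;      /* thread/process that invoked it */
        uint8_t  from_alloc_mem;  /* nonzero if the API call came from allocated memory */
    };

    struct ring_buffer {
        struct syscall_event slots[RING_SLOTS];
        uint32_t head;  /* next slot to write */
        uint32_t tail;  /* next slot to drain ("pull" style recovery) */
    };

    /* Record one event; when full, the oldest entry is overwritten (a real
     * implementation might instead report an overflow to the guest agent). */
    static void ring_push(struct ring_buffer *rb, struct syscall_event ev)
    {
        rb->slots[rb->head % RING_SLOTS] = ev;
        rb->head++;
        if (rb->head - rb->tail > RING_SLOTS)
            rb->tail = rb->head - RING_SLOTS;
    }

    /* Drain one event; returns 0 when the buffer is empty. */
    static int ring_pop(struct ring_buffer *rb, struct syscall_event *out)
    {
        if (rb->tail == rb->head)
            return 0;
        *out = rb->slots[rb->tail % RING_SLOTS];
        rb->tail++;
        return 1;
    }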


B. Host Environment


As further shown in FIG. 3, the host environment 350 features a protection ring architecture that is arranged with a privilege hierarchy from the most privileged level 356 (host kernel mode, Ring-0) to a lesser privilege level 357 (host user mode, Ring-3). Positioned at the most privileged level 356 (Ring-0), the hypervisor 370 is configured to directly interact with the physical hardware platform and its resources, such as hardware processor 210 or memory 220 of FIG. 2.


Running on top of the hypervisor 370 in Ring-3 357, a plurality of instances of certain host software (referred to as “hyper-process components 360”) communicate with the hypervisor 370. Some of these hyper-process components 360 may include a master controller logic 380, a guest monitor logic 385 and a threat protection logic 390, each representing a separate software component with different functionality, where each corresponding process may run in a separate address space. As the software components associated with the hyper-process components 360 are isolated from each other (i.e., not in the same binary), inter-process communications between running instances of any of the hyper-process components 360 are handled by the hypervisor 370, but regulated through policy protection by the master controller logic 380.


1. Hypervisor


The hypervisor 370 may be configured as a light-weight hypervisor (e.g., less than 10K lines of code) that operates as a host OS kernel. The hypervisor 370 features logic (mechanisms) for controlling operability of the computing device, such as endpoint device 1403 as shown. The mechanisms include inter-process communication (IPC) logic 372, scheduling logic 374 and page control logic 376, as partially described in U.S. Provisional Patent Application Nos. 62/097,485 and 62/187,108, the entire contents of both of which are incorporated herein by reference.


The hypervisor 370 features IPC logic 372, which supports communications between separate hyper-processes. Thus, under the control of the IPC logic 372, in order for a first hyper-process (e.g., guest monitor 385) to communicate with another hyper-process, the first hyper-process needs to route a message to the hypervisor 370. In response, the hypervisor 370 switches from the first hyper-process to a second hyper-process (e.g., threat protection logic 390) and copies the message from an address space associated with the first hyper-process to a different address space associated with the second hyper-process.
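
A hedged sketch of the hypervisor-mediated message copy between two hyper-process address spaces is shown below; the address-space switch and cross-space copy helpers are hypothetical and merely illustrate that only the hypervisor performs the copy.

    #include <stddef.h>
    #include <stdint.h>

    struct hyper_process {
        uint64_t address_space_root;  /* page-table root of this hyper-process */
    };

    /* Hypothetical helpers: switch the active host address space and copy a
     * buffer between two address spaces through the hypervisor's own mapping. */
    extern void switch_address_space(uint64_t root);
    extern void copy_between_spaces(uint64_t dst_root, void *dst,
                                    uint64_t src_root, const void *src, size_t len);

    /* Deliver a message from one hyper-process to another. The sender cannot
     * touch the receiver's memory directly; the hypervisor performs the copy
     * and then switches to (resumes) the receiving hyper-process. */
    static void ipc_send(const struct hyper_process *sender, const void *msg, size_t len,
                         const struct hyper_process *receiver, void *receiver_buf)
    {
        copy_between_spaces(receiver->address_space_root, receiver_buf,
                            sender->address_space_root, msg, len);
        switch_address_space(receiver->address_space_root);
    }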


The hypervisor 370 also contains scheduling logic 374 that ensures that, at some point in time, all of the software components can run on the hardware processor 210 as defined by the scheduling context. The scheduling logic 374 further enforces that no software component can monopolize the hardware processor 210 longer than defined by the scheduling context.


Lastly, the hypervisor 370 contains page control logic 376 that, operating in combination with the threat protection logic 390 (described below), is configured to set permissions for different guest memory pages based on information provided by the exploit detection logic 345. The page permissions may be set in order to control which syscalls are being monitored for the presence of exploits (or malware) through insertion of one or more breakpoints (e.g., a special instruction such as a HALT instruction) into starting locations for the monitored syscalls (sometimes referred to as “hooked syscalls”). These starting locations may, at least in part, correspond to the syscall pointers in the service dispatch logic 340 (syscall table) that pertain to the hooked syscalls.


For instance, as an illustrative example as shown in FIGS. 3-4, for a given breakpoint at syscall_1( ) in the guest kernel 335, the hypervisor 370 makes the memory page containing syscall_1( ) execute-only and may assist in the creation of a copy of that memory page so that read accesses can be emulated against the copy. In the memory page with execute-only permission, the hypervisor 370 places a special instruction (e.g., HALT or another single byte instruction, a multi-byte instruction, etc.) that traps to the hypervisor 370 at a desired address. The original instruction byte that is replaced by the special instruction is stored for that address.
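
The page-permission step may be sketched as follows; the permission flags and the second-level mapping helper are assumptions for illustration, as the description above does not tie the mechanism to a specific processor feature.

    #include <stdint.h>

    /* Hypothetical permission bits for a hypervisor-controlled guest page mapping. */
    #define PAGE_PERM_READ    0x1u
    #define PAGE_PERM_WRITE   0x2u
    #define PAGE_PERM_EXECUTE 0x4u

    /* Hypothetical helper that updates the hypervisor-controlled mapping for
     * one guest-physical page frame. */
    extern void set_guest_page_permissions(uint64_t guest_pfn, uint32_t perms);

    /* Make the page holding a hooked syscall execute-only: execution proceeds
     * (and the embedded breakpoint traps), while any read access faults into
     * the hypervisor so it can be emulated against an unmodified copy. */
    static void protect_hooked_syscall_page(uint64_t guest_pfn)
    {
        set_guest_page_permissions(guest_pfn, PAGE_PERM_EXECUTE);
    }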


Where the guest application 310 executes an object 315 that issues an API call for services supported by syscall_1( ), the guest kernel 335 locates a function pointer within the service dispatch logic 340 to the syscall_1( ) function and invokes the syscall_1( ) function. A first instruction (HALT) in that function traps to the hypervisor 370. In response to detecting that the trap is a result of the first instruction, the hypervisor 370 diverts the control flow to the exploit detection logic 345 which performs the exploit detection checks. However, in response to an invocation of the syscall_1( ) function during a read access, the hypervisor 370 either directs the accessing source to the copy of the memory page or returns the original instruction that was overwritten by the breakpoint.


2. Master Controller


Referring still to FIG. 3, the master controller logic 380 is responsible for enforcing policy rules directed to operations of the software virtualization architecture. This responsibility is in contrast to the hypervisor 370, which provides mechanisms for inter-process communications and resource allocation, but has little or no responsibility in dictating how and when such functions occur. For instance, the master controller logic 380 may be configured to conduct a number of policy decisions, including some or all of the following: (1) memory allocation (e.g., distinct physical address space assigned to different software components); (2) execution time allotment (e.g., scheduling and duration of execution time allotted on a selected granular basis or process basis); (3) virtual machine creation (e.g., number of VMs, OS type, etc.); and/or (4) inter-process communications (e.g., which processes are permitted to communicate with which processes, etc.). Additionally, the master controller logic 380 is responsible for the allocation of resources, namely resources that are driven by hyper-process components 360.


3. Guest Monitor


Referring still to FIG. 3, the guest monitor logic 385 is a running instance of a host user space application that is responsible for managing the execution of the virtual machine 305, which includes operating in concert with the threat protection logic 390 to determine whether or not (i) certain breakpoints are to be applied to certain syscalls and (ii) control flow is to be transferred to the exploit detection logic 345. Herein, an occurrence of a VM Exit may prompt the guest monitor logic 385 to receive and forward metadata associated with a process scheduled to run on the virtual machine to the threat protection logic 390. Based on the metadata, which may identify the next running process and one or more syscalls for hooking, the threat protection logic 390 signals the guest monitor logic 385 whether some or all of these syscalls are to be hooked. Such signaling may be based on analysis conducted by the threat protection logic 390, the hypervisor 370 or a combination thereof. Likewise, in response to a VM Entry, the guest monitor logic 385 may signal the exploit detection logic 345 that it now has control flow and is to conduct the exploit detection checks to detect a malicious attack.


4. Threat Protection


As described above and shown in FIG. 3, the threat protection logic 390 operates in conjunction with the hypervisor 370 for selecting the “hooked” syscalls and diverting control flow to the exploit detection logic 345 operating within the guest kernel 335. More specifically, based on the type of process being launched (e.g., web browser version w.x, document reader version y.z), the threat protection logic 390 is responsible for selecting (or at least authorizing) the insertion of breakpoints within memory pages associated with the hooked syscall.


Thereafter, in response to a trap to the hypervisor 370 at a desired address, the threat protection logic 390 determines if the trap occurs during run-time (e.g., guest “execute” cycle) or during read-time (guest “read” cycle). In response to receipt of the trap during run-time, the threat protection logic 390, perhaps operating in conjunction with the hypervisor 370, determines that the trap is based on one of the inserted breakpoints and diverts control to exploit detection logic 345 residing within the guest kernel 335. The “hooked” syscall may be one of a plurality of syscalls selected in accordance with known or anticipated exploit attack patterns. It is contemplated that the selection of the hooked syscalls may be dynamically set or static in nature as described above.


V. Hypervisor-Controlled Threat Detection System

Referring now to FIG. 4, an exemplary block diagram of hypervisor-controlled syscall hooking by a hypervisor-controlled threat detection system 400 is shown. Herein, the hypervisor-controlled threat detection system 400 includes the exploit detection logic 345 residing within the guest kernel 335 and operating in concert with the hypervisor 370, the guest monitor logic 385, and the threat protection logic 390. When operating, the hypervisor-controlled threat detection system 400 resides in one of multiple operating states: (1) hypervisor-controlled syscall hooking (FIG. 4); (2) hypervisor-controlled syscall breakpoint handling (FIG. 5); and (3) exploit detection and reporting (FIG. 6).


As shown in FIG. 4, operating within the guest kernel 335, the exploit detection logic 345 includes instruction substitution logic 440 that is adapted to communicate with the guest agent 320 in order to receive a syscall listing 410 via an incoming message. Herein, according to one embodiment of the disclosure, the syscall listing 410 includes one or more syscalls that are targeted for monitoring and are independent of application type. Alternatively, according to another embodiment, the syscall listing 410 may comprise (1) identified application types for monitoring and (2) identified syscalls targeted for each identified application type. The targeted syscalls set forth in the syscall listing 410 may be selected based on machine learning gathered from prior analyses by the hypervisor-controlled threat detection system 400, or imported data from other computing devices or a third party source that monitors current trends for exploit or malware attacks in accordance with industry (e.g., financial, high technology, real estate, etc.), geographic region (e.g., country, state, county, city, principality, etc.), or the like.
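
For illustration, the syscall listing 410 exchanged between the guest agent 320 and the exploit detection logic 345 might be represented as a small message structure such as the one sketched below; the field names and sizes are hypothetical.

    #include <stdint.h>

    #define MAX_TARGETED_SYSCALLS 32
    #define MAX_APP_TYPES          8

    /* One monitored application type and the syscalls targeted for it. */
    struct syscall_listing_entry {
        char     app_type[32];                        /* e.g., a browser or reader identifier */
        uint32_t syscall_ids[MAX_TARGETED_SYSCALLS];  /* syscalls to hook for this type */
        uint32_t syscall_count;
    };

    /* The listing as a whole; a populated global list with entry_count of zero
     * could model the application-independent variant described above. */
    struct syscall_listing {
        struct syscall_listing_entry entries[MAX_APP_TYPES];
        uint32_t entry_count;
        uint32_t global_syscall_ids[MAX_TARGETED_SYSCALLS];
        uint32_t global_syscall_count;
    };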


In response to receiving the syscall listing 410, a virtual machine exit (VM Exit) occurs at which a transition is made between the virtual machine 305 that is currently running and the hypervisor 370 (operation A). In response to the VM Exit, certain guest state information is saved, such as processor state (e.g., control register values, etc.) for example. Also, the exploit detection logic 345 forwards the syscall listing 410 (or at least a portion thereof) within a message to the hypervisor 370.


Operating with the hypervisor 370, the threat protection logic 390 receives the syscall listing 410 from the hypervisor 370 (via the guest monitor logic 385) and determines if none, some or all of these syscalls are to be hooked by inserting a special instruction as the first instruction associated with the hooked syscall (operations B & C). This special instruction may be a single byte instruction (e.g., HALT instruction) or a multi-byte instruction (e.g., JUMP instruction), although the multi-byte instruction may cause device instability when the processing of the multi-byte instruction is interrupted. According to one embodiment of the disclosure, the threat protection logic 390 may rely on the identified application types received by the exploit detection logic 345 as part of the syscall listing 410.


Thereafter, the threat protection logic 390 provides a message including information associated with the selected syscall 420 (hereinafter “selected syscall information”) to the hypervisor 370 via the guest monitor logic 385 (operations D & E). Based on the selected syscall information 420, the page control logic 376 of the hypervisor 370 determines an address location 430 for each memory page (or a region of the memory page) corresponding to the start of a selected syscall. In response to a change in operating state from the hypervisor 370 to the virtual machine 305, a VM Entry is triggered so that the guest state information is restored as the virtual machine 305 resumes operation (operation F). The exploit detection logic 345 receives via a message at least a portion of the selected syscall information 420 and the address information 430 from the hypervisor 370.


In response to receiving at least a portion of the selected syscall information 420 and the address information 430, the instruction substitution logic 440 accesses the service dispatch logic 340 to insert the special instruction into certain locations within memory pages associated with the syscalls confirmed by the threat protection logic 390 for hooking (operation G).


Referring now to FIG. 5, an exemplary block diagram of hypervisor-controlled syscall breakpoint handling by the hypervisor-controlled threat detection system 400 is shown. Herein, the guest application 310 is processing an object 315 that initiates a request message 500 for kernel services (e.g., disk access, create new process, etc.) that accesses the service dispatch logic 340 (operation K). When the kernel services request message 500 is associated with a hooked syscall (e.g., syscall_1( ) 510), an embedded breakpoint (e.g., HALT instruction) causes a trap 520 to the hypervisor 370 at a desired address (operation L). The trap 520 may operate as a VM Exit as a transition is being made between the virtual machine 305 and the hypervisor 370. Also, an identifier 530 of the guest application 310 (hereinafter “guest_app_ID 530”) that initiated the request message 500 along with an identifier of the hooked syscall (hereinafter “hooked_syscall_ID 540”) are provided to the hypervisor 370.


Operating with the hypervisor 370, the threat protection logic 390 receives the guest_app_ID 530 along with the hooked_syscall_ID 540 (via a message received from the guest monitor logic 385), optionally together with the guest application state (e.g., execute cycle, read cycle, etc.) if that state is not immediately available to the threat protection logic 390. This information is provided to the threat protection logic 390 for analysis. If the threat protection logic 390 determines that the guest_app_ID 530 matches one of the identified application types for monitoring included as part of the syscall listing 410 of FIG. 4, but the requesting source is operating in a “guest” read cycle, the hypervisor 370 returns the original first instruction of the “hooked” syscall that was overwritten by the breakpoint so that no guest application or guest OS functionality is able to detect the presence of the breakpoint.


However, if the operating state is the execute cycle and the threat protection logic 390 determines that the guest_app_ID 530 matches one of the identified application types for monitoring included as part of the syscall listing 410 of FIG. 4 (operations M & N), the threat protection logic 390 signals the hypervisor 370 via the guest monitor logic 385 to divert the control flow to the exploit detection logic 345 residing within the guest kernel 335, and a VM Entry occurs to restore the guest state information and resume VM operations (operations O, P & Q). The hypervisor 370 may divert the control flow by modifying an address in an instruction pointer of the vCPU 342 of FIG. 3 to point to a memory address associated with the exploit detection logic 345, namely a memory address of context handler logic 550 that is part of the exploit detection logic 345. The guest_app_ID 530 and the hooked_syscall_ID 540 are provided to the context handler 550, which is adapted to retrieve context information 560, such as the memory region from which the API call that invoked the syscall identified by the hooked_syscall_ID 540 originated, or other characteristics that may be useful in determining whether an exploit attack is being conducted.
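
The diversion itself may be as simple as rewriting the vCPU's saved instruction pointer before the VM Entry resumes the guest, as in the hedged sketch below; the VMCS write helpers and register identifiers are hypothetical.

    #include <stdint.h>

    /* Hypothetical accessors for vCPU state saved at the trap (VM exit). */
    extern void vmcs_write_guest_rip(uint64_t rip);
    extern void vmcs_write_guest_reg(int reg, uint64_t value);

    #define GUEST_REG_ARG0 0  /* hypothetical register used to pass guest_app_ID      */
    #define GUEST_REG_ARG1 1  /* hypothetical register used to pass hooked_syscall_ID */

    /* Point the vCPU at the context handler inside the exploit detection logic
     * and hand it the identifiers gathered at the trap, so that the subsequent
     * VM Entry resumes the guest inside the exploit detection checks rather
     * than at the hooked syscall's first instruction. */
    static void divert_to_context_handler(uint64_t context_handler_gva,
                                          uint64_t guest_app_id,
                                          uint64_t hooked_syscall_id)
    {
        vmcs_write_guest_reg(GUEST_REG_ARG0, guest_app_id);
        vmcs_write_guest_reg(GUEST_REG_ARG1, hooked_syscall_id);
        vmcs_write_guest_rip(context_handler_gva);
    }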


Referring to FIG. 6, an exemplary block diagram of exploit detection and reporting by the hypervisor-controlled threat detection system 400 is shown. Herein, the context handler 550 of the exploit detection logic 345 fetches context information associated with the hooked syscall (syscall_1) for the context analyzer logic 600 to determine if the computing device may be subject to a malicious attack. For instance, where the object 315 is an executable, the context analyzer logic 600 may conduct heuristic analysis of certain context information 560, such as the memory region associated with the object 315 that issues the API call. Where the memory region corresponds to a data section of the executable object 315, the context analyzer logic 600 identifies that the object 315 is associated with a malicious attack and information is sent to the guest agent 320 that reports (through a formatted message) that the object 315 includes a potential exploit. In contrast, where the memory region corresponds to a code section associated with the executable object 315, the context analyzer logic 600 identifies that the object 315 is not associated with a malicious attack. As a result, the exploit detection logic 345 may refrain from sending information to the guest agent 320 that identifies a low probability of the object 315 including a potential exploit. During exploit detection checks or subsequent thereto, the exploit detection logic 345 may resume operations 610, which may include returning the original instruction swapped out for the HALT instruction and allowing the guest kernel 335 to acquire the virtual resources requested by the guest application 310 as processing of the object 315 continues.
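
The heuristic described above reduces to an address-range check, sketched below; the section descriptors are hypothetical and stand in for whatever image metadata the context analyzer logic 600 consults.

    #include <stdint.h>

    /* Hypothetical description of one mapped section of the executable object. */
    struct section_range {
        uint64_t start;
        uint64_t end;      /* exclusive */
        int      is_code;  /* nonzero for a code (text) section */
    };

    /* Return nonzero if the address from which the API call was issued falls
     * outside every code section, e.g., within a data section, which the
     * context analyzer treats as indicative of a malicious attack. */
    static int caller_outside_code_section(uint64_t caller_addr,
                                           const struct section_range *sections,
                                           unsigned count)
    {
        for (unsigned i = 0; i < count; i++) {
            if (sections[i].is_code &&
                caller_addr >= sections[i].start && caller_addr < sections[i].end)
                return 0;  /* call originated from code: treated as benign */
        }
        return 1;  /* call originated from data or an unmapped region: suspicious */
    }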


Referring to FIG. 7, a flowchart of exemplary operations of the hypervisor-controlled threat detection system is shown. First, a determination is made as to whether any memory pages associated with syscalls operating within the guest kernel are to be monitored (block 700). If so, such monitoring may be conducted by installing a breakpoint (e.g., a single-byte instruction such as a HALT instruction) as the first instruction at a particular address location where the syscall code resides, namely an address for the portion of the memory page that includes the syscall code, for example (block 710). Herein, the breakpoint is only visible to the processor during the (guest) execute cycle. During the (guest) read cycle, however, the hypervisor is configured to hide the HALT instruction by returning the first instruction, namely the original instruction that was substituted for the HALT instruction. This hypervisor-controlled syscall hooking process is iterative until all of the breakpoints are installed into each of the syscalls (block 720).


Thereafter, the hypervisor-controlled threat detection system conducts a hypervisor-controlled syscall breakpoint handling process that initially determines whether a HALT instruction has been triggered (block 730). If not, the hypervisor-controlled threat detection system continues to monitor for the execution of a HALT instruction. If so, the HALT instruction causes a trap to the hypervisor at a desired address (block 740). Thereafter, in response to receipt of the trap during run-time and a determination that the HALT instruction is directed to one of the applications being monitored, the hypervisor subsequently diverts control flow to the exploit detection logic residing within the guest kernel (blocks 750 and 760). The exploit detection logic obtains and analyzes metadata or other context information associated with the hooked syscall to determine if the object being processed by the guest application is a potential exploit (blocks 770 and 780). One factor in the determination is whether the API call (which resulted in the syscall) was initiated from a code section or a data section of the object (executable). If the latter, the object is a potential exploit.


If the object is determined to be a potential exploit, the exploit detection logic notifies the guest agent by providing characteristics supporting the determination that the object is associated with a potential exploit, and virtual machine operations resume (blocks 790 and 795). Otherwise, no reporting may be conducted; rather, the virtual machine simply resumes (block 795).
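
Purely as an illustration of the kind of characteristics that might accompany the notification at block 790, the structure below uses hypothetical field names that are not drawn from the patent.

```c
/* Hypothetical sketch of a report handed to the guest agent at block 790. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t process_id;        /* guest application processing the object    */
    uint32_t syscall_id;        /* hooked syscall that triggered the analysis */
    uint64_t caller_address;    /* where the API call originated              */
    uint8_t  from_data_section; /* 1 if the caller resided in a data section  */
} exploit_report_t;

int main(void)
{
    exploit_report_t r = { 4242, 0x55, 0x409100ULL, 1 };
    printf("pid=%u syscall=0x%X caller=0x%llX data_section=%u\n",
           r.process_id, r.syscall_id,
           (unsigned long long)r.caller_address, r.from_data_section);
    return 0;
}
```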


In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims.

Claims
  • 1. A computing device comprising: one or more hardware processors; and a memory coupled to the one or more processors, the memory comprises software that, when executed by the one or more hardware processors, operates as (i) a virtual machine including a guest kernel that facilitates communications between a guest application being processed within the virtual machine and one or more resources and (ii) a hypervisor configured to intercept a system call issued from the guest application, wherein the hypervisor is configured to signal logic within the guest kernel to analyze information associated with the intercepted system call to determine whether the intercepted system call is associated with a malicious attack in response to the intercepted system call occurring during a first operating state, wherein the hypervisor is further configured to obfuscate interception of the system call in response to the intercepted system call being issued during a second operating state, wherein the first operating state is a first guest cycle and the second operating state is a second guest cycle.
  • 2. The computing device of claim 1, wherein the first guest cycle is a guest execute cycle.
  • 3. The computing device of claim 2, wherein the second guest cycle is a guest read cycle.
  • 4. The computing device of claim 1, wherein the hypervisor is further configured to obfuscate interception of the system call by re-inserting an original first instruction of the system call, which has been previously overwritten by a particular instruction that diverts control to the hypervisor, so that no guest application or guest operating system functionality is able to detect presence of the particular instruction.
  • 5. The computing device of claim 4, wherein the particular instruction is a HALT instruction.
  • 6. The computing device of claim 1, wherein a portion of the guest kernel includes a service dispatch table.
  • 7. The computing device of claim 6, wherein the hypervisor is configured to intercept the system call by inserting a single-byte instruction as a first instruction in code associated with the system call that is accessed via a pointer in the service dispatch table corresponding to the system call.
  • 8. The computing device of claim 7, wherein the hypervisor is configured to intercept the system call by processing the single-byte instruction which is a HALT instruction that causes a trap to the hypervisor, the trap includes at least an identifier of the guest application that issued the system call and an identifier of the system call.
  • 9. The computing device of claim 8, wherein the memory further comprises threat protection logic that, when executed by the one or more hardware processors, determines whether the guest application is a type of application being monitored and signals the hypervisor to divert operation control to the logic operating within the guest kernel in response to detecting that the system call is initiated during the first operating state.
  • 10. The computing device of claim 9, wherein the logic operating within the guest kernel comprises an exploit detection logic that fetches context information associated with the intercepted system call, when the intercepted system call was issued by an object being processed by the guest application and that conducts a heuristic analysis of the context information to determine whether the object that issued the system call resides in a code section or a data portion.
  • 11. A computerized method comprising: intercepting, using a hypervisor, a system call issued from an object being processed by a guest application operating within a virtual machine, the virtual machine including a guest kernel that facilitates communications between the guest application and one or more resources within the virtual machine; responsive to the intercepted system call occurring during a first operating state, signaling logic within the guest kernel to analyze information associated with the intercepted system call to determine whether the intercepted system call is associated with a malicious attack; and responsive to the intercepted system call occurring during a second operating state different than the first operating state, obfuscating interception of the system call, wherein the first operating state is a first guest cycle and the second operating state is a second guest cycle.
  • 12. The computerized method of claim 11, wherein the first guest cycle is a guest execute cycle and the second guest cycle is a guest read cycle.
  • 13. The computerized method of claim 12, wherein the obfuscating interception of the system call comprises re-inserting an original first instruction associated with the system call, which has been previously overwritten by a particular instruction that diverts control to the hypervisor, so that no guest application or guest operating system functionality is able to detect a presence of the particular instruction.
  • 14. The computerized method of claim 13, wherein the particular instruction is a HALT instruction.
  • 15. The computerized method of claim 11, wherein intercepting the system call comprises inserting a single-byte instruction as a first instruction in stored code associated with the system call that, when accessed, causes a trap to the hypervisor.
  • 16. The computerized method of claim 15, wherein the single-byte instruction includes a HALT instruction and the trap includes at least an identifier of the guest application running the object that issued the system call and an identifier of the system call.
  • 17. The computerized method of claim 16, wherein prior to signaling the logic within the guest kernel to analyze information associated with the intercepted system call to determine whether the intercepted system call is associated with a malicious attack, the computerized method further comprises determining whether the guest application is a particular type of application that is to be monitored and signaling a hypervisor to divert operation control to the logic operating within the guest kernel in response to detecting that the system call is issued during the first operating state.
  • 18. A computing device comprising: a virtual machine including a guest kernel that facilitates communications between a guest application being processed within the virtual machine and one or more resources; and a hypervisor communicatively coupled to the virtual machine, the hypervisor being configured to receive an intercepted system call initiated by an object being processed within the guest application within the virtual machine, the intercepted system call being directed to a memory page in an altered state with a first instruction of the memory page being substituted with a HALT instruction to trap to the hypervisor, wherein the hypervisor (i) signals logic within the guest kernel to analyze information associated with the intercepted system call to determine whether the intercepted system call is associated with a malicious attack in response to the intercepted system call occurring during a first operating state and (ii) obfuscates interception of the system call by emulating a read access to the memory page in an unaltered state in response to the intercepted system call occurring during a second operating state different than the first operating state, wherein the first operating state is a first guest cycle and the second operating state is a second guest cycle.
  • 19. The computing device of claim 18, wherein the first guest cycle is a guest execute cycle.
  • 20. The computing device of claim 19, wherein the second guest cycle is a guest read cycle.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of priority on U.S. Provisional Application No. 62/233,977 filed Sep. 28, 2015, the entire contents of which are incorporated by reference.

Provisional Applications (1)
Number Date Country
62233977 Sep 2015 US