Hypervisor-based redirection of system calls and interrupt-based task offloading

Information

  • Patent Grant
  • Patent Number
    12,248,560
  • Date Filed
    Friday, October 2, 2020
  • Date Issued
    Tuesday, March 11, 2025
Abstract
A security agent configured to initiate a security agent component as a hypervisor for a computing device is described herein. The security agent component may change a value of a processor configuration register, such as a Model Specific Register (MSR), in order to cause system calls to be redirected to the security agent, and may set an intercept for instructions for performing read operations on the processor configuration register so that a process, thread, or component different from the processor of the computing device may receive the original value of the processor configuration register instead of an updated value of the processor configuration register. The security agent component may also be configured to generate interrupts to offload task execution from the hypervisor to a security agent executing as a kernel-level component.
Description
BACKGROUND

With Internet use forming an ever greater part of day-to-day life, security exploits that steal or destroy system resources, data, and private information are an increasing problem. Governments and businesses devote significant resources to preventing intrusions and thefts related to these security exploits. Security exploits come in many forms, such as computer viruses, worms, trojan horses, spyware, keystroke loggers, adware, and rootkits. These exploits are delivered in or through a number of mechanisms, such as spear-phishing emails, clickable links, documents, executables, or archives. Some of the threats posed by security exploits are of such significance that they are described as cyber terrorism or industrial espionage.


While many activities of security exploits can be introspected using hooks or other interception techniques, certain operations cannot be hooked or intercepted in kernel-mode or user-mode. Such operations include memory accesses and individual instruction execution by the processor. Current techniques involve running guest operating systems (OSes) and applications of those guest OSes in virtual machines or running each application in a separate virtual machine. Each of these techniques involves significant overhead, and neither technique is capable of intercepting memory accesses or instructions executing on the host OS itself.


Furthermore, some known rootkits employ an attack strategy that involves hooking the OS kernel itself. In order to counter such attacks, many of today's OSes include logic that causes the system to crash and reboot if it is determined that the OS kernel has been hooked (because, out of an abundance of caution, it is assumed that a rootkit has hooked the OS kernel). This makes it impractical for security software to do the same (i.e., hook the OS kernel) for benevolent purposes, such as for purposes of monitoring system calls to detect malware on the host computing device.





BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is set forth with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items or features.



FIGS. 1a-1f illustrate overviews of a security agent configured to initiate execution of a security agent component as a hypervisor of a computing device, the security agent component setting intercepts on memory locations and processor registers of the computing device, and, in some implementations, generating interrupts in order to offload task execution to the security agent.



FIGS. 2a-2b illustrate overviews of techniques for protecting memory locations through privilege attributes of pages while enabling operations on other memory locations associated with those pages.



FIG. 3 illustrates a component level view of a computing device configured with a security agent and security agent component configured to execute as a hypervisor.



FIG. 4 illustrates an example process for initiating execution of a security agent component as a hypervisor for a computing device, determining memory locations of the computing device to be intercepted, and setting intercepts for the determined memory locations.



FIG. 5 illustrates an example process for protecting memory locations through privilege attributes of pages while enabling operations on other memory locations associated with those pages.



FIG. 6 illustrates an example process for determining memory locations to be intercepted and setting privilege attributes for memory pages including those memory locations, including setting a memory page to non-executable.



FIG. 7 illustrates an example process for determining that a memory location is whitelisted and excluding memory page(s) that include the whitelisted memory location from a set of pages to have their privilege attributes changed.



FIG. 8 illustrates an example process for intercepting accesses of debug registers and, when such accesses are from the operating system, responding with operating-system-permitted values.



FIG. 9 illustrates an example process for intercepting instructions for accessing control registers.



FIG. 10 illustrates an example process for intercepting instructions for reading a value of a processor configuration register, such as a Model Specific Register (MSR), to receive system calls at a security agent.



FIG. 11 illustrates an example process for using a system call received by a security agent as a trigger for initiating a security action on data associated with executing user-mode processes.



FIG. 12 illustrates an example process for improving computational performance of a system by offloading task execution from a security agent component in a hypervisor to a security agent in kernel-mode, the task(s) relating to an operation affecting a page with a protected memory location(s).



FIG. 13 illustrates an example process for switching between hypervisor and kernel-mode components to perform tasks relating to an operation affecting a page with a protected memory location(s).





DETAILED DESCRIPTION

This disclosure describes, in part, a security agent configured to initiate a security agent component as a hypervisor for a computing device. Such initiation may involve, in some implementations, storing processor state information into a data structure and instructing the processor to initiate the security agent component as the hypervisor based on the data structure. The security agent may then determine a subset of memory locations in memory of the computing device to be intercepted or one or more processor registers to be intercepted. Such a determination may be based, for example, on a security agent configuration received from a security service. The security agent component may then set intercepts for the determined memory locations or registers. Setting such intercepts may include setting privilege attributes for pages which include the determined memory locations so as to prevent specific operations in association with those memory locations or setting intercepts for instructions affecting the registers.


In some implementations, after setting privilege attributes for pages, operations affecting memory locations in those pages may be noted. In response to one of the specific operations affecting the determined memory location associated with a page, the security agent component may return a false indication of success or allow the operation to enable monitoring of the actor associated with the operation. When an operation affects another memory location associated with that page, the security agent component may temporarily reset the privilege attribute for that page to allow the operation.


In one example, a memory location may store privileged information, and the specific operation protected against may involve writing to that memory location to modify the privileged information. Such an action is known as privilege escalation. To protect against privilege escalation, the privilege attribute of the page including the memory location storing the privileged information may be set to a read only value.


In another example, a memory location may store user credentials, and the specific operation protected against may involve reading the user credentials from the memory location. To protect against such credential reads, the privilege attribute of the page including the memory location storing the user credentials may be set to an inaccessible value. In some implementations, the physical memory location of the page may be modified by the security agent, causing the credential read to return data located at a different memory location. The returned user credentials would therefore be invalid, purposefully misleading an attacker.


In a further example, a memory location may store executable code, and the specific operation protected against may involve executing the code stored at the memory location. To protect against this, the privilege attribute of the page including the memory location storing the executable code may be set to non-executable. The security agent component may then take a further action, such as returning a false indication of successful execution of the code, or redirecting to other code at another location to mislead the attacker.


In various implementations, after setting privilege attributes for pages, operations affecting pages that include protected memory locations may trigger the execution of tasks relating to the noted operations. Although these tasks may be executed in the hypervisor by the security agent component, some or all of the tasks may also be offloaded from the security agent component to the security agent executing as a kernel-level component. This allows for, among other things, performance gains for the computing device because execution of at least some tasks by a kernel-level component of the computing device does not impact computational performance as much as if a hypervisor component were to exclusively execute the same tasks. Moreover, offloading tasks to the security agent executing as a kernel-level component may improve latency, and/or it may avoid a situation where all OS functions (including hardware operations) are halted, which would be the case if the hypervisor component exclusively executed while the tasks were simultaneously in progress. During exclusive hypervisor task execution, for example, packets may be lost, movement of an input device (e.g., a mouse) may go undetected, and so on. The offloading described herein mitigates these and potentially other issues. The determination of whether or not to offload task execution to the security agent in the kernel mode of the computing device may be based on the computational cost of executing a task(s) by a hypervisor component, such as the security agent component. In an example process, the security agent may be configured to initiate the security agent component as a hypervisor for the computing device, and the security agent component, upon noting an operation affecting a page of memory that includes a protected memory location(s), may generate an interrupt for purposes of offloading task execution to the security agent. 
In response to the interrupt, the security agent may execute at least one task relating to the noted operation in lieu of the security agent component executing the same task(s) in the hypervisor level of the computing device.


In various implementations, the security agent component consults a whitelist or a component, such as a virtual address space manager, to determine whether any memory locations identified by the security agent are whitelisted. The security agent component may then exclude memory pages that include the whitelisted memory locations from a set of memory pages and set intercepts for memory locations included in the remaining memory pages of the set. Whitelisting memory locations may prevent the security agent and security agent component from blocking operations of permitted components known to be associated with those whitelisted memory locations.


In further implementations, the security agent may store memory addresses in debug registers of the processor of the computing device, and the security agent component may set intercepts for the debug registers. Setting the intercepts may include, for example, setting intercepts for instructions seeking to access the debug registers (e.g., reading the debug registers). In some implementations, one of the debug registers may store a memory address not permitted by the operating system of the computing device to be stored in the debug registers. For instance, the operating system may prevent memory addresses associated with kernel-level components from being stored in the debug registers. In order to enable storing such a non-permitted memory address in a debug register, the security agent component may respond to a read operation from the operating system seeking to read that debug register with a false, operating-system-permitted value. In addition to setting intercepts for debug registers storing memory addresses, the security agent component may also set intercepts for the memory addresses themselves, e.g., by setting privilege attributes for memory pages that include the memory addresses.


In some implementations, the security agent component may set intercepts for control registers of the processor of the computing device. Setting the intercepts may include setting intercepts for instructions seeking to access the control registers (e.g., seeking to write to the control registers). In various implementations, one of the control registers may store an on setting for a security feature of the computing device. The security agent component may set intercepts on instructions seeking to write to that register to, for instance, turn off the security feature. In response to intercepting such an instruction, the security agent component may respond with a false indication of success.


In some implementations, the security agent component may be configured to redirect system calls to a security agent executing as a kernel-level component of the computing device. For example, the security agent component may be initiated as a hypervisor of the computing device, and, thereafter, may change a value of a processor configuration register, such as a Model Specific Register (MSR), of a processor of the computing device. The value of the processor configuration register may be changed from an original value to an updated value that points to the security agent. Thereafter, when the processor reads the updated value of the processor configuration register, system calls are redirected to, and received by, the security agent. Receiving system calls allows the security agent to take one or more security actions on the computing device. For example, receipt of a system call may act as a trigger to initiate a security action with respect to data that is associated with user-level processes executing on the computing device. In some embodiments, the security agent component may also set an intercept for instructions for performing read operations on the processor configuration register. In this manner, upon noting a read operation to read the value of the processor configuration register, and upon noting that the read operation is from a process, thread, or component that is different from the processor, the security agent component may return the original value to the requesting process, thread, or component, thereby obfuscating the presence of the security agent component.


Overview



FIG. 1a illustrates an overview of a security agent configured to initiate execution of a security agent component as a hypervisor of a computing device, the security agent component setting intercepts on a subset of memory locations of the computing device. As illustrated, a computing device includes components implemented at the kernel-level 102 and at the user-level 104. Kernel-level 102 components include a host OS kernel 106 and a security agent 108. The security agent 108 further includes or is associated with a security agent component 110 implemented at a hypervisor-level 112 of the computing device. The security agent 108 may further include a configuration 114 and a data structure 116 for storing copies of processor state settings. Further, user-level 104 components may include a process 118. Additionally, the computing device may have a memory 120 having multiple memory locations 122 and a processor 124 having processor state settings 126. FIG. 1a further shows, at 128, the security agent 108 storing processor state settings 126 in the data structure 116 and, at 130, initiating the security agent component 110 as a hypervisor based on the data structure 116. The security agent 108 then, at 132, determines memory locations 122 to be intercepted and the security agent component 110 sets, at 134, intercepts for the determined memory locations 122.


In various embodiments, a computing device may include the host OS kernel 106, security agent 108, security agent component 110, process 118, memory 120, and processor 124. Such a computing device may be a server or server farm, multiple, distributed server farms, a mainframe, a work station, a personal computer (PC), a laptop computer, a tablet computer, a personal digital assistant (PDA), a cellular phone, a media center, an embedded system, or any other sort of device or devices. When implemented on multiple computing devices, the host OS kernel 106, security agent 108, security agent component 110, process 118, memory 120, and processor 124 may be distributed among the multiple computing devices. An example of a computing device including the host OS kernel 106, security agent 108, security agent component 110, process 118, memory 120, and processor 124 is illustrated in FIG. 3 and described below with reference to that figure.


The computing device may implement multiple protection rings or privilege levels which provide different levels of access to system resources. For example, user-level 104 may be at an “outer” ring or level, with the least access (e.g., “ring 3”), kernel-level 102 may be at an “inner” ring or level, with greater access (e.g., “ring 0” or “ring 1”), and hypervisor-level 112 may be an “inner-most” ring or level (e.g., “ring −1” or “ring 0”), with greater access than kernel-level 102. Any component at the hypervisor-level 112 may be a hypervisor which sits “below” (and has greater access than) a host OS kernel 106.


The host OS kernel 106 may be a kernel of any sort of OS, such as a Windows® OS, a Unix OS, or any other sort of OS. Other OSes, referred to as “guest” OSes, may be implemented in virtual machines supported by the host OS. The host OS kernel 106 may provide access to hardware resources of the computing device, such as memory 120 and processor 124 for other processes of the computing device, such as process 118.


The security agent 108 may be a kernel-level security agent, which may monitor and record activity on the computing device, may analyze the activity, and may generate alerts and events and provide those alerts and events to a remote security service. The security agent 108 may be installed by and configurable by the remote security service, receiving, and applying while live, configurations of the security agent 108 and its component(s), such as security agent component 110. The configuration 114 may be an example of such a configuration. An example security agent 108 is described in greater detail in U.S. patent application Ser. No. 13/492,672, entitled “Kernel-Level Security Agent” and filed on Jun. 8, 2012, which issued as U.S. Pat. No. 9,043,903 on May 26, 2015.


The security agent component 110 may be a component of the security agent 108 that is executed at a hypervisor for the computing device at hypervisor-level 112. The security agent component 110 may perform hypervisor functions, such as adjusting privilege attributes (e.g., “read-write,” “read only,” “inaccessible,” etc.) of memory pages and managing system resources, such as memory 120. The security agent component 110 may perform at least some of its functions based on the configuration 114 of the security agent 108, which may include configuration settings for the security agent component 110. The security agent component 110 may also perform hypervisor functions to adjust the physical location of memory pages associated with memory 120.


The configuration 114 may comprise any of settings or system images for the security agent 108 and security agent component 110. As noted above, the configuration 114 may be received from a remote security service and may be applied by the security agent 108 and security agent component 110 without rebooting the computing device.


The data structure 116 may be a structure for storing processor state information. Such a data structure may be, for instance, a virtual machine control structure (VMCS). In some implementations, a subset of the settings in the data structure 116 may be set by the security agent 108 based on the OS. In such implementations, the security agent 108 may have different routines for different OSes, configuring the data structure 116 with different settings based on the OS. Such settings may typically be processor state settings which are invariant for a given OS. Other settings are then obtained from processor state settings 126. In other implementations, the security agent 108 may not have different routines for different OSes and may obtain all settings for the data structure 116 from the processor state settings 126.


In various implementations, the process 118 may be any sort of user-level 104 process of a computing device, such as an application or user-level 104 OS component. The process 118 may perform various operations, including issuing instructions for execution and making read, write, and execute requests of different memory locations. Such read, write, and execute requests may be addressed to virtual addresses, which may be mapped to physical addresses of memory pages by page tables of the OS kernel 106 or to further virtual addresses of extended or nested page tables, which are then mapped to physical addresses. Such processes 118 may include security exploits or be controlled by such exploits through vulnerabilities and may attempt malicious activity, such as privilege escalation or credential theft, through direct accesses of memory locations or indirect accesses utilizing, for example, vulnerabilities of the host OS kernel 106.


Memory 120 may be memory of any sort of memory device. As shown in FIG. 1a, memory 120 may include multiple memory locations 122, the number of memory locations 122 varying based on the size of memory 120. The memory locations 122 may be addressed through addresses of memory pages and offsets, with each memory page including one or more memory locations. Privileges associated with memory locations 122, such as reading, writing, and executing may be set on a per-page granularity, with each memory page having a privilege attribute. Thus, memory locations 122 of a same page may have the same privileges associated with them. Examples of memory 120 are illustrated in FIG. 3 and described below in detail with reference to that figure.


The processor 124 may be any sort of processor, such as a central processing unit (CPU), a graphics processing unit (GPU), or both CPU and GPU, or other processing unit or component known in the art. The processor 124 may be associated with a data structure 116 describing its state, the contents of which are referred to herein as the processor state settings 126. As described above, in some implementations, a subset of the processor state settings 126 may be invariant for a type of OS. Additionally, the processor 124 supports hardware-based virtualization (such as Intel™ VT-x) with second level address translation (SLAT).


In various implementations, the security agent 108 is configured to initiate execution of the security agent component 110 as a hypervisor. Such initiating may be performed without any rebooting of the computing device. As shown in FIG. 1a, this initiating may involve, at 128, storing the processor state settings 126 in the data structure 116. If any of the processor state settings 126 are invariant, they may have already been included in the data structure 116 by the security agent 108 and thus do not need to be stored again. The initiating may then include, at 130, initiating the security agent component 110 based on the data structure 116. This may involve providing a reference to the security agent component 110 and the data structure 116 along with a “run” instruction.


Next, the security agent 108 determines, at 132, any memory locations 122 or instructions to be intercepted. The security agent 108 may utilize the configuration 114 provided by the security service to determine the memory locations 122 and instructions. Such memory locations 122 may include locations storing executable code or privilege information (e.g., indications of admin privileges) for a process or user credentials (e.g., passwords). As mentioned above, updates to the configuration 114 may be received and applied without rebooting. Upon receiving an update to the configuration 114, the security agent may repeat the determining at 132.


To free memory space, computing devices often clear memory mappings for memory pages which have not been recently accessed and write out their contents to disk, referred to as a page-out operation. When memory is accessed again, the contents are brought back from disk, referred to as a page-in operation. To ensure, then, that knowledge of memory locations 122 stays up-to-date, the security agent 108 may request that the OS kernel 106 lock the mappings in page tables to memory pages which include the memory locations 122 that are to be intercepted. Alternatively, the security agent component 110 may intercept page-out requests and prevent paging out of memory pages which include the memory locations 122 that are to be intercepted, or it may intercept page-in requests in order to update its knowledge of memory locations 122 and repeat determining at 132.


In various implementations, the security agent component 110 then, at 134, sets intercepts for the instructions and memory locations 122 determined by the security agent 108. In some implementations, setting intercepts may involve determining the memory pages which include the determined memory locations 122 and setting privilege attributes for those pages. The privilege attribute chosen—e.g., “non-executable” or “read only” or “inaccessible”—may be a function of the memory accesses that the security agent 108 and security agent component 110 are configured to intercept. When a process 118 seeks to perform such a memory access—e.g., to execute code stored at a memory page marked “non-executable”—the security agent component 110 will receive notification.


In other implementations, setting intercepts may involve changing the physical memory location of the determined memory locations 122 to reference misleading, incorrect, or otherwise unusable data or code. When a process 118 seeks to perform such a memory access—e.g., to read a memory page containing data at memory location 122—the data will instead be read from an alternate memory location.


In some implementations, upon termination of a process 118, the security agent component 110 may remove intercepts for memory locations 122 associated with the process 118. This may involve resetting privilege attributes for the memory pages including the memory locations 122 to their previous settings, or it may include resetting the physical memory location for the memory pages.



FIG. 1b illustrates an overview of a security agent and security agent component configured to determine whitelisted memory locations and to exclude memory pages that include those whitelisted memory locations from a set of memory pages including other, intercepted memory locations. As illustrated, the security agent component 110 may utilize whitelist component(s) and data 136, at 138, to determine whitelisted memory locations. The security agent component 110 then excludes memory pages having the whitelisted memory locations and, at 140, sets intercepts for the remaining memory locations that are not associated with the excluded memory pages.


In various implementations, the whitelist component(s) and data 136 may include a data structure, such as a list, received from a remote security service (e.g., as part of the security agent 108 configuration), an executable component, or both. For example, the whitelist component(s) and data 136 may comprise an executable component, such as a virtual address space manager, which categorizes address spaces of memory 120 into different regions, such as page tables, hyperspace, session-space, loader mappings, system-cache, PFN database, non-paged pool, paged pool, system PTEs, HAL heap, etc. These regions may be received in a specification from a remote security service or may be identified by the virtual address space manager based on their size and alignment specifications. Look-up tables, shadow page tables, or both, may be used to lookup the regions and identify which region a memory location 122 corresponds to, if any. Also, one of these regions, or a portion of a region, may be associated with a parameter whitelisting that region or portion of a region. In one implementation, the security agent component 110 may build or extend the whitelist component(s) and data 136 or memory regions based on memory access pattern(s) of known safe component(s).


In some implementations, one or more components of the computing device may be identified as known safe components. Such known safe components may, for instance, behave in a manner similar to a malicious process or thread, making it difficult to distinguish between the known safe component and malicious process strictly by behavior. If the known safe component is identified in advance, e.g., by the configuration 114 of the security agent 108, along with memory locations or memory regions associated with that known safe component, those memory locations or memory regions may be whitelisted, either by a component such as a virtual address space manager or in a list received from a remote security service.


As described above with respect to FIG. 1a, the security agent 108 may initiate the security agent component 110 as a hypervisor and, at 132, identify memory locations 122 to be intercepted. In various implementations, once the memory locations 122 have been identified, the security agent component 110 may determine a set of memory pages that include the identified memory locations 122 and determine, at 138, any whitelisted memory locations. The determining of a set of memory pages is described above. To determine, at 138, the whitelisted memory locations, the security agent component 110 consults the whitelist component(s) and data 136. The security agent component 110 may, in some implementations, determine the set of memory pages for the identified memory locations 122 first and then determine whether any of those memory pages are associated with a whitelisted memory location or whitelisted memory region. Those whitelisted memory pages are then excluded by the security agent component 110 from the set of memory pages. Alternatively, the security agent component 110 may compare the identified memory locations 122 to the whitelisted memory locations or whitelisted memory regions and then determine memory pages for any of the identified memory locations 122 that are not whitelisted. This set of memory pages with whitelisted memory locations not included is referred to herein as the set of remaining memory pages.


At 140, the security agent component 110 sets intercepts for the set of remaining memory pages. As described above, this may include setting privilege attributes for the memory pages included in the set of remaining memory pages.
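The whitelist-aware page computation described above can be sketched as follows. This is a minimal illustration only, assuming a 4 KiB page size; the helper names are hypothetical and not part of the described system.

```python
PAGE_SIZE = 4096  # assumed page granularity for illustration

def page_of(address):
    """Base address of the memory page containing the given address."""
    return address & ~(PAGE_SIZE - 1)

def remaining_pages(identified_locations, whitelisted_locations):
    """Compute the 'set of remaining memory pages': pages covering the
    identified memory locations, excluding any page that also contains
    a whitelisted memory location."""
    candidates = {page_of(a) for a in identified_locations}
    excluded = {page_of(a) for a in whitelisted_locations}
    return candidates - excluded
```

Under these assumptions, an identified location at 0x2004 would be dropped if a whitelisted location at 0x2100 shares its page, leaving only the page at 0x1000 to be intercepted.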



FIG. 1c illustrates an overview of a security agent and security agent component configured to intercept debug registers and provide an operating-system-permitted result responsive to access by the operating system of the debug registers. As illustrated, at 142, the security agent component 110 may set intercepts for debug registers 144 and, at 146, note an access operation, such as a read operation, from the operating system (e.g., from the host OS kernel 106). At 148, the security agent component 110 then responds to the operating system's access operation with an operating-system-permitted value.


Debug register(s) 144 may be registers used by processor 124 for program debugging, among other possible uses. Processor 124 may be an x86 series processor and include, for example, six debug registers, two of which may be used for control and status. Each debug register 144 may store a memory address and may be associated with a condition that triggers a notification—e.g., writing to the memory address, reading from the memory address, executing code stored at the memory address.


In some implementations, the operating system of the computing device may prohibit memory addresses associated with kernel-mode code or data from being stored in a debug register 144. The operating system may query the debug register(s) 144 on some basis (e.g., periodic) to determine whether the debug register(s) 144 store non-permitted memory addresses.


In various implementations, the security agent 108 may store memory addresses in some or all of the available debug register(s) 144. The memory addresses stored in the debug register(s) 144 may be specified by the configuration 114 of the security agent 108.


In further implementations, at 142, the security agent component 110 (initiated by the security agent 108, as described above) then sets intercepts for the debug register(s) 144. Setting intercepts for the debug register(s) 144 may include setting intercepts for instructions for accessing the debug register(s) 144.


At 146, the security agent component 110 then notes an access operation, such as a read operation, from the operating system seeking to determine whether a debug register 144 stores an operating-system-permitted value. At 148, the security agent component 110 then responds to the operating system with an operating-system-permitted value. If the debug register 144 being read is storing a memory address not permitted by the operating system, the security agent component 110 may respond with a false, operating-system-permitted value.
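The behavior at 142-148 can be sketched as follows; the class and the zero "unused" value returned to the operating system are assumptions for illustration only, not the patented implementation.

```python
class DebugRegisterShadow:
    """Sketch: the agent programs real watch addresses into debug
    registers, while intercepted reads from the OS receive a false,
    OS-permitted value instead."""
    OS_PERMITTED = 0x0  # assumed value the OS accepts (register 'unused')

    def __init__(self):
        self.actual = {}  # register index -> address actually stored

    def write(self, index, address):
        self.actual[index] = address

    def intercepted_read(self, index):
        # Respond to the OS's access operation (146) with an
        # operating-system-permitted value (148), hiding any
        # kernel-mode address the agent stored.
        return self.OS_PERMITTED
```

A real system would additionally distinguish readers, so that the security agent itself can still retrieve the true register contents.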


In some implementations, when a process 118 attempts some access with respect to a memory address stored in a debug register 144 (e.g., reading, writing, executing, etc.), the security agent 108 is informed of the access and may respond in a manner specified by its configuration 114 (e.g., take some security action such as monitoring or killing the process 118, respond with a false indication of success, etc.).


In various implementations, the security agent 108 may also identify one or more of the memory addresses stored by the debug register(s) 144 as memory locations to be intercepted. The security agent component 110 then sets intercepts for those memory addresses by, e.g., determining the memory pages that include the memory addresses and setting privilege attributes for those memory pages. In this way, two methods may be used by the security agent 108 and security agent component 110 to detect operations associated with a memory address.


In further implementations, in addition to use of debug registers and page privilege attributes for certain memory addresses, the security agent 108 may identify other memory locations, and the security agent component 110 may set intercepts (e.g., page privilege attributes) for those other memory locations, as described above with respect to FIG. 1a.



FIG. 1d illustrates an overview of a security agent and security agent component configured to intercept control register(s) and respond to a write operation directed at one of the control registers with a false indication of success. As illustrated, at 150, the security agent component 110 sets intercepts on instructions for accessing control register(s) 152. The security agent component 110 then notes an access operation, at 154, and responds, at 156, with a false indication of success and/or with a security action.


Control registers 152 may be registers of the processor 124 which change or control the behavior of the processor 124 or another device. x86 series processors may include control registers such as CR0, CR1, CR2, CR3, and CR4, and x86-64 series processors may also include EFER and CR8 control registers. Such control registers may each have a specific or general purpose, and control registers 152 may include any, all, or none of these x86 series processor control registers.


In some implementations, at least one of the control registers 152 may include an on setting for a security feature of the computing device. For example, control registers 152 may include a CR4 register that stores an on setting for a security feature called Supervisor Mode Execution Prevention (SMEP). SMEP may prevent the processor 124 from executing code in a user mode memory range while the privilege of the processor 124 is still in kernel mode, essentially preventing kernel mode privilege escalation. If process 118 is a malicious process or thread operating in kernel mode, however, it is able to turn off SMEP, as any kernel mode process or thread can set the on setting in CR4 to an off value.


In further implementations, the security agent 108 may determine which of the control register(s) 152 to protect. For example, the configuration 114 of the security agent 108 may specify that the CR4 register of control register(s) 152 should be protected. At 150, then, the security agent component 110 (initiated by the security agent 108, as described above) may set intercepts for the control register(s) 152 that are to be protected. Setting intercepts for those control register(s) 152 may include, at 150, setting intercepts for instructions seeking to access those control register(s) 152.


In some implementations, at 154, the security agent component 110 then notes an instruction seeking to access one of the control register(s) 152, such as a write operation seeking to write an off value to a CR4 register to turn off a security feature. The security agent component 110 then responds, at 156, with a false indication of success or by initiating a security action (e.g., killing the process 118 that requested the write operation or monitoring that process 118).
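A minimal sketch of this write intercept, using the SMEP bit of CR4 (bit 20 on x86 processors) as the protected setting; the function name and return convention are illustrative assumptions.

```python
SMEP_BIT = 1 << 20  # CR4.SMEP enable bit on x86 processors

def handle_cr4_write(current_cr4, requested_cr4):
    """If a write would clear the SMEP bit, keep the real CR4 value
    unchanged but report success to the writer (a false indication of
    success, as at 156). Otherwise apply the write normally."""
    if (current_cr4 & SMEP_BIT) and not (requested_cr4 & SMEP_BIT):
        # Likely an attempt to disable a security feature; a security
        # action (e.g., killing or monitoring the requesting process)
        # could also be initiated here.
        return current_cr4, "success"
    return requested_cr4, "success"
```

The false "success" keeps the malicious writer from learning that its attempt to disable the feature was blocked.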



FIG. 1e illustrates an overview of a security agent component initiated as a hypervisor of a computing device, the security agent component being configured to redirect system calls to the security agent, intercept read operations from a process, thread, or component different from a processor that are directed to a processor configuration register, such as a Model Specific Register (MSR) of the processor, and respond to the read operations with a permitted result. As illustrated, at 158, the security agent component 110 (executing as a hypervisor of a computing device) may change a value of a MSR 160 of a processor 124 of the computing device from an original MSR value to an updated MSR value. This may be done when the host computing device first loads (e.g., boots up). At 162, the security agent component 110 may set an intercept for instructions for performing read operations on the MSR 160. Setting an intercept at 162 may include setting an intercept for non-processor entities (e.g., processes, threads, or components different from the processor 124) who are attempting to read the value of the MSR 160. In other words, after setting the intercept at 162, instructions for performing MSR 160 read operations from entities different from the processor 124 are intercepted, while instructions for performing MSR 160 read operations from the processor 124 are not intercepted. Thus, the processor 124 will read the updated MSR value at 164, causing system calls to be redirected to the security agent 108 at 166, thereby allowing the security agent 108 to initiate a security action at 168 in response to receiving a system call.


A Model Specific Register (MSR) 160 may be a type of control register (or processor configuration register) used by processor 124 for directing system calls to a function that is designated to handle the system call. For example, processor 124 may be an x86 series processor and include, for example, a MSR 160 that stores executable code signed by an OS vendor. This signed code may include, among other things, an original MSR value that corresponds to a memory address of a function of the host OS kernel 106 that is to handle a system call. The original MSR value is a “permitted” value in the sense that certain system components periodically check the value of the MSR 160 to verify that it is set to the permitted, original MSR value. Such system components may include a Kernel Patch Protection (KPP) component of the host OS kernel 106, such as the PatchGuard™ introduced by Microsoft Corporation® of Redmond, Washington for Windows-based OSes. PatchGuard is a technology designed to detect rootkits that have changed the value of the MSR 160 in order to attack a host computing device with malware. PatchGuard is configured to detect the changed MSR value and effectively cause the system to crash and reboot as a mechanism to prevent an attack. Thus, the host OS kernel 106 may not permit the value of the MSR 160 to be changed, else the system will crash and reboot.


Thus, the operating system (or host OS kernel 106) may query the MSR 160 on some basis (e.g., periodic) to determine whether the value of the MSR 160 has changed from its permitted, original MSR value to something else. At 170, the security agent component 110 may note an access operation, such as a read operation, from the operating system (e.g., from the host OS kernel 106), which is a component that is different from the processor 124. It is to be appreciated that the security agent component 110 may note access operations (e.g., read operations) from any process, thread, or component that is different from the processor 124. For example, an antivirus driver (a component that is different from the processor 124) may attempt to read the value of the MSR 160, and the security agent component 110 may similarly note the access operation at 170 from the antivirus driver.


At 172, the security agent component 110 may then respond to the noted access operation with the original MSR value, even though the actual value of the MSR 160 has been changed to the updated MSR value. Because the original MSR value returned at 172 is a permitted value (e.g., an operating-system-permitted value), the requesting process, thread, or component (e.g., the host OS kernel 106) suspects nothing is amiss, and the host computing device does not crash or reboot, which allows the security agent 108 to continue to receive redirected system calls at 166 without the system crashing. In this manner, the security agent 108 is configured to hook functions that handle system calls without the host OS kernel 106 knowing about the security agent's hooking of system call functions because the security agent component 110 can intercept read operations directed to the MSR 160 and obfuscate this value change.
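The MSR shadowing at 158-172 can be sketched as follows; the class and the flag passed to read() are illustrative assumptions, standing in for the hypervisor's ability to tell the processor's own reads apart from intercepted reads by other entities.

```python
class MsrShadow:
    """Sketch: the hypervisor changes the system-call MSR to point at
    the security agent (158), but replays the original, permitted
    value to any reader other than the processor itself (172)."""

    def __init__(self, original_value, agent_entry_point):
        self.original = original_value   # permitted, OS-sanctioned target
        self.actual = agent_entry_point  # updated value after 158

    def read(self, reader_is_processor):
        if reader_is_processor:
            # The processor reads the updated value (164), so system
            # calls are redirected to the security agent (166).
            return self.actual
        # KPP/PatchGuard-style checks see the original value, so the
        # system does not crash and reboot.
        return self.original
```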


A “system call” is a mechanism that signals to the host OS kernel 106 when an application desires access to a resource, such as the file system, registry, display, network, and/or similar resources. For instance, a process 118 may be invoked by an application that is executing on the host computing device, the process 118 being responsible for requesting to open a file on behalf of the application. This may cause the generation of a system call for opening the file, and, at 164, may cause the processor 124 to execute an instruction for performing a read operation on the MSR 160. Because the value of the MSR 160 was changed at 158, the processor 124 reads the updated MSR value at 164, which redirects the system call to the security agent 108. After redirecting the system call to the security agent 108, the system call can be routed to the appropriate function (e.g., an open file function to access the file system of the host computing device).


In some implementations, after the intercept is set at 162 in order to hook a system call function, and after the security agent 108 receives a redirected system call, the security agent 108 is thereby informed of the system call and may respond in a manner specified by its configuration 114 (e.g., take some security action at 168). The security action taken by the security agent 108 at 168 can vary, depending on the configuration 114 of the security agent 108. For example, the configuration 114 may instruct the security agent 108 to identify a process 118, thread, or component associated with the system call, and monitor the process 118, thread, or component, such as by monitoring events on the host computing device that result from execution of the process, thread, or component.


In another example, the receipt of a system call by the security agent 108 may trigger another type of security action at 168. For instance, the security agent 108, prior to receipt of the system call, may be configured to monitor the creation (and destruction) of processes 118(1)-(N) (collectively 118) on the host computing device by observing process creation events associated with user-level processes 118. In a similar manner, the creation (and destruction) of threads can be monitored on the host computing device by observing thread creation events associated with threads. Accordingly, FIG. 1e shows that processes 118(1)-(N) or threads may be associated with corresponding tokens 174(1)-(N) (collectively 174) in the kernel mode 102. The individual tokens 174 may include privilege information that indicates a privilege with which a corresponding process 118 or a corresponding thread is allowed to execute. For example, a first token 174(1) associated with a corresponding first user-level process 118(1) or a thread may indicate that the first user-level process 118(1) or the thread is allowed to execute with an Administrator (“Admin”) privilege, which may be a greater privilege than say a “Guest” privilege. Thus, a second token 174(2) associated with a corresponding second user-level process 118(2) or a thread may indicate that the second user-level process 118(2) or the thread is allowed to execute with a comparatively lower “Guest” privilege, but not an Admin privilege. Thus, the privilege information (e.g., indications of Admin, Guest, and other types of privileges) in the tokens 174 may indicate what level of access a corresponding user-level process 118 or thread has to resources and components on the system. 
In the running example, the first user-level process 118(1) or thread executes with greater privilege than the second user-level process 118(2) or thread, and therefore, the first user-level process 118(1) or thread has access to resources and/or components on the system that the second user-level process 118(2) or thread may not be allowed to access.


In some cases, malware may leverage an exploit in the host OS kernel 106 that allows the malware to change an original token value of a token 174 to a different, updated token value. This may be done, for example to change the privilege level of a given process 118 or a given thread to a higher/greater privilege level (e.g., changing a token 174 for a “Guest” process 118 or thread to an “Admin” level token 174 with greater privilege). This can effectively transform a non-Admin process 118 or thread into an Admin process 118 or thread so that malware can gain greater access to system resources and/or components for implementing an attack on the host computing device.


Even though the security agent 108 may not detect the initial change of a token's 174 value from an original token value into something else, the security agent 108 does not have to detect the token value change to detect malware that changed the token value. This is because the security agent 108 can operate on the assumption that any malware that changed a token value is eventually going to make a system call to do something useful with the newfound privilege afforded to the given process 118 or the given thread. Thus, by receiving redirected system calls at 166, the security agent 108 may initiate a security action at 168 by using the received system call as a trigger to implement the security action on data associated with user-level processes 118 or threads executing in the user mode 104 of the computing device. For instance, the security agent 108—having observed process creation events associated with the processes 118—can identify data associated with those processes, such as data in the form of the kernel-level tokens 174, which are associated with their corresponding user-level processes 118. A similar approach may be used to identify data associated with threads. The security action initiated at 168 can therefore include a determination as to whether values of any of the kernel-level tokens 174 have changed from an original token value to an updated token value (e.g., from a “Guest” privilege value to an “Admin” privilege value). If the security agent 108 determines that a token value has changed, the security agent 108 may restore the updated token value of the changed kernel-level token 174 to its original token value, terminate/block/kill the user-level process 118 or thread corresponding to the changed token 174, suspend the user-level process 118 or thread corresponding to the changed token 174, and/or monitor events on the computing device that result from the corresponding user-level process 118 or thread executing on the computing device. 
In some embodiments, such as in response to determining that it is not possible to restore the updated token value to its original token value, the security agent 108 may restore the updated token value of the changed kernel-level token 174 to a “safe” or “inert” value, or some value, other than the original value, that reflects the original state. In some implementations, the security agent 108 may determine whether it is “safe” to restore the token value back to its original value (or another value that is safe, inert, and/or reflects the original state), and if not, the security agent 108 may take some other type of remedial action, like blocking, suspending, or terminating the corresponding process 118 or thread, or allowing the process 118 or thread to execute as normal, but monitoring the process 118 or thread as it executes, and similar remedial actions. Thus, the system calls received by the security agent 108 can be used as a trigger event to have the security agent 108 determine whether something abnormal, suspicious, or the like, has happened or not, as a means of detecting malware on the host computing device.
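The token-checking security action at 168 can be sketched as follows, assuming the agent recorded each token's original value when it observed the corresponding process creation event; the dictionaries and privilege strings are illustrative only.

```python
def check_and_restore_tokens(tokens, original_values):
    """On receipt of a redirected system call, compare each kernel-level
    token against its recorded original value; restore any that changed
    (e.g., a 'Guest' token escalated to 'Admin') and report them."""
    changed = []
    for process_id, value in tokens.items():
        original = original_values[process_id]
        if value != original:
            changed.append(process_id)
            # Restore the original value; alternatively the process
            # could be terminated, suspended, or monitored instead.
            tokens[process_id] = original
    return changed
```

A real agent would also apply the "safe to restore" check described above before writing the original value back.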


Other events may trigger the security agent 108 to initiate a security action at 168 that determines whether values of any of the kernel-level tokens 174 have changed from an original token value to an updated token value. For instance, a context switch between one thread and another thread may be detected through changes in another processor MSR 160 (or through changes in another register, such as a GS register), and this context switch may act as a trigger event to have the security agent 108 check for a change of a token 174 value.


It is also to be appreciated that a MSR 160 is merely one kind of processor configuration register whose value can be changed at 158 of FIG. 1e. Thus, a similar technique to that described with reference to FIG. 1e can be utilized with any type of processor configuration register, such as a different type of processor configuration register in place of the MSR 160.



FIG. 1f illustrates an overview of a security agent configured to initiate execution of a security agent component as a hypervisor of a computing device, the security agent component setting intercepts on a subset of memory locations of the computing device and generating interrupts in order to offload task execution to the security agent. As illustrated in FIG. 1f, the security agent 108 may include, among other sub-components described herein, an interrupt handler 176 for handling interrupts generated by the security agent component 110 when the security agent component 110 acts as a hardware device of the system. For example, the security agent 108, executing as a kernel-level component, may be configured to register with the operating system (e.g., the host OS kernel 106) as a hardware device driver that includes the interrupt handler 176 for handling interrupts generated by the security agent component 110. Interrupts may be generated by the security agent component 110 in order to offload execution of at least some tasks to the security agent 108, which executes as a kernel-level component. This may provide a performance gain for the computing device as compared to executing tasks exclusively by the security agent component 110 as a hypervisor. Because an interrupt blocks processing on the computing device, the offloading of tasks from the hypervisor to kernel mode can be done without creating a window of opportunity for a threat, such as malware, to attack while the security agent component 110 offloads tasks to the security agent 108. Thus, malware's window of opportunity to attack the computing device is minimized or eliminated, while the performance of the system is improved by virtue of offloading task execution to the kernel level instead of executing tasks exclusively in the hypervisor.



FIG. 1f shows, at 130, initiating the security agent component 110 as a hypervisor, and, at 134, the security agent component 110 setting intercepts for instructions and/or the memory locations 122 in the memory 120. As described herein, these memory locations 122 may be determined by the configuration 114 of the security agent 108. Setting the intercepts at 134 may include the security agent component 110 determining the memory pages which include the memory locations 122 that are to be protected, and adjusting or setting privilege attributes (e.g., “read-write,” “read only,” “inaccessible,” etc.) of the determined memory pages.


When a process 118, at 178, seeks to perform a memory access—e.g., to execute code stored at a memory page marked “non-executable”—the security agent component 110 may receive notification of the access operation, and, at 180, the security agent component 110 may determine whether to offload, to the kernel-level security agent 108, one or more tasks relating to the noted access operation for execution in the kernel mode 102 as opposed to executing the tasks in the hypervisor level 112. This may be due to the performance impact, from a computational cost standpoint, of executing relatively “expensive” tasks exclusively in the hypervisor level 112. For instance, tasks relating to the noted access operation that are to be executed by the security agent 108, or a component (e.g., 110) thereof, may include, without limitation, determining whether the page affected by the access operation at 178 corresponds to any of the pages that had their privilege attributes adjusted or set by the security agent component 110 at 134, determining current addresses and/or offsets of the pages that had their privilege attributes adjusted/set, and/or determining whether to temporarily reset the privilege attribute of the affected page or not, among other possible tasks described herein and known to a person having ordinary skill in the art.


Depending on the particular scenario (e.g., the type of data stored in the memory locations 122, whether pages have been paged in or paged out, had their addresses changed, and/or had their offsets changed, etc.), execution of these tasks may come with a computational cost that is quite significant. Thus, in at least one example, the determining at 180 may include determining a computational cost of executing one or more of the tasks relating to the access operation at 178, and determining whether the computational cost meets or exceeds a threshold computational cost. If the computational cost meets or exceeds the threshold computational cost, the security agent component 110 may determine to offload task execution of one or more tasks to the security agent 108. Computational cost may be measured in various ways, such as expected processor cycles, amount of data (e.g., number of bytes of data to be processed, etc.), expected amount of time to execute the task(s), and the like. Moreover, offloading tasks to the security agent 108 executing as a kernel-level component may improve latency, and/or it may avoid a situation where all OS functions (including hardware operations) are halted, which would be the case if the hypervisor component executed exclusively while the tasks were in progress. In fact, during hypervisor task execution, packets may be lost, movement of an input device (e.g., a mouse) may go undetected, and so on. Thus, computational cost can account for these types of impacts as well, and/or the determination at 180 can be based on the notion that OS functions and the like will not be halted and/or data will not be lost as a result of the offloading. The determination at 180 may also take into account a current workload of the host computing device (e.g., offload during busy/high workload times). Other techniques may be utilized to determine whether to offload tasks at 180. 
For example, the configuration 114 of the security agent 108 may include information regarding predesignated tasks to offload and/or task criteria that are to be met in order to offload task execution, which may be based on the type of access operation at 178, the type of data stored in the affected memory location(s) 122, and the like.
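The determination at 180 can be sketched as a simple threshold test; the threshold value and the cycle-based cost metric are assumptions for illustration, since the description leaves the exact metric open.

```python
COST_THRESHOLD = 10_000  # assumed threshold, in expected processor cycles

def should_offload(estimated_cost, predesignated=False):
    """Offload task execution to the kernel-level security agent when
    the task is predesignated for offload by the configuration, or when
    its estimated computational cost meets or exceeds the threshold."""
    return predesignated or estimated_cost >= COST_THRESHOLD
```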


At 182, in response to noting the access operation at 178, and in response to determining to offload one or more tasks to the security agent 108 at 180, the security agent component 110 may generate an interrupt that is received by the host OS kernel 106. The interrupt generated by the security agent component 110 may be similar to the interrupts generated on the host computing device by actual hardware devices, such as the keyboard, the mouse, and the like. However, because the security agent component 110 is not an actual hardware device, but acts as a hardware device (e.g., a virtual hardware device), the security agent component 110 may be configured to execute an instruction that, upon execution, places the interrupt in a queue as a pending interrupt. This instruction may be in the form of an “interrupt_exiting” flag instruction, or a similar instruction, which allows a hypervisor component to generate an interrupt and have it kept in a pending state until the host OS kernel 106 is able to receive interrupts. Without such an instruction, the interrupt generated at 182 may get lost (i.e., not received by the host OS kernel 106 or the interrupt handler 176 of the security agent 108) if and when the host OS kernel 106 clears or disables interrupts, such as by executing a “clear interrupt” (CLI) flag instruction, for example. That is, when the host OS kernel 106 does not want to have its processing blocked, the host OS kernel 106 may clear or disable interrupts (e.g., via a CLI instruction) to make sure no interrupts come in, and, at a subsequent time, the host OS kernel 106 may execute a “set interrupt” (STI) flag instruction to receive all interrupts that are pending. This does not present an issue with interrupts generated by actual hardware devices because those interrupts stay pending after they are generated, ensuring that the host OS kernel 106 will receive them when interrupts are allowed. 
The security agent component 110 may mimic this behavior by executing a particular instruction to have its own interrupts treated as pending interrupts until the host OS kernel 106 executes an STI instruction, for example, allowing receipt of any pending interrupts thereafter.
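The pending-interrupt behavior the security agent component mimics can be sketched as follows; the class models CLI/STI gating in software and is illustrative only.

```python
class InterruptGate:
    """Sketch: interrupts raised while the kernel has cleared interrupts
    (CLI) are kept pending rather than lost, and are delivered once the
    kernel sets interrupts again (STI)."""

    def __init__(self):
        self.enabled = True
        self.pending = []
        self.delivered = []

    def cli(self):
        self.enabled = False  # kernel blocks interrupts

    def sti(self):
        self.enabled = True   # kernel accepts all pending interrupts
        self.delivered.extend(self.pending)
        self.pending.clear()

    def raise_interrupt(self, vector):
        if self.enabled:
            self.delivered.append(vector)
        else:
            self.pending.append(vector)  # kept pending, not lost
```

Without this pending behavior, an interrupt raised between a CLI and the following STI would simply vanish, which is the failure mode the text describes.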


Upon receipt of the interrupt, the host OS kernel 106 may cause execution of the interrupt handler 176 at 184, or may otherwise provide the security agent 108 (acting as a hardware device driver) with notification of the interrupt at 184. At 186, the security agent 108 may execute the one or more offloaded tasks relating to the noted access operation in response to the interrupt. Again, the task(s) executed at 186 may vary, and may include, without limitation, determining whether the page affected by the access operation at 178 corresponds to any of the pages that had their privilege attributes adjusted/set by the security agent component 110 at 134, determining current addresses and/or offsets of the pages that had their privilege attributes adjusted/set, and/or determining whether to temporarily reset the privilege attribute of the affected page or not, among other possible tasks. In some implementations, the offloaded task(s) may be communicated to the security agent 108 by the security agent component 110 after generating the interrupt at 182. If the computational cost associated with executing these tasks is significant, the performance gains of offloading task execution can also be significant, thereby improving the operation of the computing device while monitoring access operations to protected memory locations 122. For instance, a protected memory location 122 of a page affected by the noted access operation may store user credentials, and the access operation at 178 may include a read operation attempting to read the user credentials at the protected memory location 122, whereby the privilege attribute of the page was set to an inaccessible value. 
In this case, tasks relating to determining whether the affected page corresponds to the page that includes the memory location 122 with the user credentials, determining whether to reset the privilege attribute of that page, and/or determining a current address and/or offset of the page may be associated with a significant computational cost when executed in the hypervisor level 112 as compared to a lower computational cost of offloading this task execution to the kernel mode 102.


FIG. 1f also shows that the security agent 108 may be configured to return from the interrupt at 188 so that task execution can return to the security agent component 110. This may be triggered by the security agent 108 finishing the execution of the task(s) at 186. In some implementations, upon returning from the interrupt at 188, the security agent component 110 may execute one or more remaining tasks relating to the noted access operation at 178, such as temporarily resetting a privilege attribute of the affected page that includes protected memory location(s) 122. The security agent component 110 may also continue noting other operations affecting pages of memory 120.



FIGS. 2a-2b illustrate overviews of techniques for protecting memory locations through privilege attributes of pages while enabling operations on other memory locations associated with those pages. FIG. 2a includes a memory page 202 having at least memory location 204(a), memory location 204(b), memory location 204(c), and memory location 204(d), as well as privilege attribute 206. Further, a process 208, as shown, may make requests associated with the memory locations 204. Also, as shown in FIG. 2a, privilege attribute 206 may be temporarily reset to privilege attribute 210. A process 208 may, at 212, request an operation not permitted by privilege attribute 206. Because the operation may be directed to memory location 204(a), which is not one of the memory locations determined by the security agent 108, the security agent component 110 may, at 214, temporarily reset the privilege attribute 206 to privilege attribute 210 to allow the operation to proceed.


In various implementations, memory page 202 may be an example of the memory pages discussed above with reference to memory 120, memory locations 204 may be examples of memory locations 122, and privilege attributes 206 and 210 may be examples of the privilege attributes discussed above with reference to memory 120. Further, process 208 may be an example of process 118.


Process 208 may request, at 212, an operation such as a read from or write to a memory location 204, or an execute operation to execute code stored at memory location 204. Upon noting the request, various tasks relating to the noted operation may be performed. As noted above, the security agent component 110 may perform one or more of these tasks, such as when a computational cost associated with the task(s) is less than a threshold computational cost, or when the task(s) is otherwise designated (e.g., by the configuration 114) as one to be executed by the security agent component 110. However, in some cases, the security agent component 110 may generate an interrupt in order to offload one or more of the tasks relating to the noted operation to the security agent 108 executing as a kernel-level component. Thus, the security agent component 110 and/or the security agent 108 may determine the memory page 202 associated with the request as well as the specific memory location 204 on that memory page 202. The security agent component 110 and/or the security agent 108 may then determine whether the memory location is one of the memory locations identified by the security agent 108. In FIG. 2a, the memory location 204 identified by the security agent 108 is memory location 204(b), and the operation is a request associated with memory location 204(a). In such an example, if the operation does not conflict with the privilege attribute 206, the operation is allowed to proceed. If, on the other hand, the operation is not permitted by the privilege attribute 206, then the security agent component 110 may, at 214, temporarily reset the privilege attribute 206 to privilege attribute 210 to allow the operation to proceed.
For example, if privilege attribute 206 is “inaccessible” (e.g., to prevent reads of user credentials stored at memory location 204(b)), the security agent component 110 may temporarily reset the privilege attribute 206 to be privilege attribute 210, which may be “read only.” Or in another example, if the privilege attribute 206 is “non-executable” (e.g., to prevent execution of code stored at memory location 204(b)), the security agent component 110 may temporarily reset the privilege attribute 206 to be privilege attribute 210, which may be “executable.” After the operation has been processed, the security agent component 110 may return the privilege attribute 210 to be privilege attribute 206.
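The page-granularity decision described above can be illustrated with a minimal C sketch. The 4 KiB page size, the function name `should_temporarily_reset`, and the flat address types are illustrative assumptions, not details from this disclosure:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define PAGE_BASE(addr) ((addr) & ~(uintptr_t)(PAGE_SIZE - 1))

/*
 * Decide how to handle a faulting access: if the faulting address shares a
 * page with a protected location but is NOT itself one of the protected
 * locations, the page's privilege attribute may be temporarily reset so the
 * unrelated operation can proceed.
 */
bool should_temporarily_reset(uintptr_t fault_addr,
                              const uintptr_t *protected_addrs, size_t n)
{
    bool same_page = false;
    for (size_t i = 0; i < n; i++) {
        if (fault_addr == protected_addrs[i])
            return false;               /* access targets a protected location */
        if (PAGE_BASE(fault_addr) == PAGE_BASE(protected_addrs[i]))
            same_page = true;           /* shares a page with a protected one */
    }
    return same_page;                   /* reset only for co-resident accesses */
}
```

A handler following this pattern would reset the privilege attribute only when the function returns true, restoring the original attribute after the single operation completes.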



FIG. 2b includes a memory page 202 having at least memory location 204(a), memory location 204(b), memory location 204(c), and memory location 204(d), as well as privilege attribute 206. A process 208, as shown, may make requests associated with the memory locations 204, and privilege attribute 206 may be temporarily reset to privilege attribute 210. As is further illustrated, copies of information stored in memory page 202 may be stored in a copy memory page 216. The copy memory page 216 may include copy memory location 218(a), which includes a copy of the information stored at memory location 204(a); copy memory location 218(c), which includes a copy of the information stored at memory location 204(c); and copy memory location 218(d), which includes a copy of the information stored at memory location 204(d). Rather than storing a copy of the information from memory location 204(b), the copy memory page 216 may include phony/false or deceptive data or code 220. The copy memory page 216 may also include a privilege attribute 222, which may represent elevated privileges when compared to privilege attribute 206.
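Constructing a copy page with phony data at the protected offset, as described above, can be sketched as follows. The helper name `make_decoy_page` and the simplified buffer types are assumptions for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Build a copy of a memory page, substituting decoy bytes at the offset of
 * the protected location; everything else is copied verbatim so the page
 * still looks authentic to the requesting process.
 */
void make_decoy_page(unsigned char *copy, const unsigned char *page,
                     size_t page_size, size_t protected_off,
                     const unsigned char *decoy, size_t decoy_len)
{
    memcpy(copy, page, page_size);                  /* faithful copy          */
    memcpy(copy + protected_off, decoy, decoy_len); /* phony data over secret */
}
```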


As illustrated, the process 208 may, at 224, request an operation affecting memory location 204(b). Because 204(b) is one of the memory locations identified by the security agent 108, the security agent component 110 may respond in one of a number of ways. At 226, the security agent component 110 may temporarily reset the privilege attribute 206 to be privilege attribute 210 in order to allow the operation to proceed. The security agent component 110 and/or the security agent 108 may then also identify the process, thread, or component that made the request for the operation at 224 and may monitor further activity of that process, thread, or component or terminate that process, thread, or component. Alternatively, the security agent component 110 may, at 228, generate copy memory page 216, including the phony/false or deceptive data or code 220, and may, at 230, allow the process 208 to access the phony/false or deceptive data or code 220.


Process 208 may request, at 224, an operation such as a read from or write to a memory location 204 or an execute operation to execute code stored at a memory location 204. Upon noting the request, the security agent component 110 and/or the security agent 108 may determine the memory page 202 associated with the request as well as the specific memory location 204 on that memory page 202. The security agent component 110 and/or the security agent 108 may then determine whether the memory location is one of the memory locations identified by the security agent 108. In FIG. 2b, the memory location 204 identified by the security agent 108 is memory location 204(b), and the operation is a request associated with memory location 204(b). Accordingly, the security agent component 110 and/or the security agent 108 determines that the memory location 204(b) is one of the memory locations identified by the security agent 108. In response, the security agent component 110 may take no action, which may result in the computing device crashing and rebooting. Alternatively, the security agent component 110 may take action to allow the operation and monitor further operations, to allow the operation to occur on phony/false or deceptive data or code 220, or to provide a false indication of success to the process 208.


In a first example, the operation request at 224 may be a write operation to modify privilege information stored at memory location 204(b). In response to the request for the write operation, the security agent component 110 may allow the operation to proceed by temporarily resetting, at 226, the privilege attribute 206 to be privilege attribute 210. The security agent component 110 and/or the security agent 108 may also identify the process, thread, or component that made the request for the write operation (i.e., process 208) and may monitor further activity of that process, thread, or component. Alternatively, the security agent component 110 may copy, at 228, the contents of memory page 202 to copy memory page 216, set the privilege attribute 222 to read-write, and temporarily redirect from memory page 202 to copy memory page 216. The security agent component 110 may then allow the write operation to proceed, and the process 208 may modify the copy memory page 216 and receive an indication of success. The security agent component 110 may then return the mapping to point to memory page 202. Thus, the memory location 204(b) is protected, the process 208 is tricked into thinking it succeeded, and both objectives are achieved without the computing device crashing.


In a second example, the operation request at 224 may be a read operation to obtain user credentials stored at memory location 204(b). In response to the request for the read operation, the security agent component 110 may allow the operation to proceed by temporarily resetting, at 226, the privilege attribute 206 to be privilege attribute 210. The security agent component 110 and/or the security agent 108 may also identify the process, thread, or component that made the request for the read operation (i.e., process 208) and may monitor further activity of that process, thread, or component. Alternatively, the security agent component 110 may copy, at 228, the contents of memory page 202 to copy memory page 216, set the privilege attribute 222 to read only, and temporarily redirect from memory page 202 to copy memory page 216. In addition to copying the contents of memory page 202, the security agent component 110 may store phony/false or deceptive data 220 at the same offset in copy memory page 216 as the memory location 204(b) is in memory page 202. The security agent component 110 then allows the read operation to proceed, and the process 208 reads the phony/false or deceptive data 220. After the read operation, the security agent component 110 may then return the mapping to point to memory page 202. If the process 208 obtained deceptive data 220, such as a username and password for a monitored account, then future use of that username and password may trigger monitoring by the security agent 108 and/or the security agent component 110.


In a third example, the operation request at 224 may be an execute operation to execute code stored at memory location 204(b). In response to the request for the execute operation, the security agent component 110 may allow the operation to proceed by temporarily resetting, at 226, the privilege attribute 206 to be privilege attribute 210. The security agent component 110 and/or the security agent 108 may also identify the process, thread, or component that made the request for the execute operation (i.e., process 208) and may monitor further activity of that process, thread, or component. Alternatively, the security agent component 110 may copy, at 228, the contents of memory page 202 to copy memory page 216, set the privilege attribute 222 to execute, and temporarily redirect from memory page 202 to copy memory page 216. The security agent component 110 may then allow the execute operation to proceed, and the process 208 may execute false code 220 stored at copy memory page 216 and receive an indication of success. The security agent component 110 may then return the mapping to point to memory page 202. Thus, the memory location 204(b) is protected, the process 208 is tricked into thinking it succeeded, and both objectives are achieved without the computing device crashing.


Example System


FIG. 3 illustrates a component level view of a computing device configured with a security agent and security agent component configured to execute as a hypervisor. As illustrated, computing device 300 comprises a memory 302 storing a security agent 304, a security agent component 306, page tables 308, user credentials 310, privilege information 312, an OS 314, processes and data 316, and whitelist component(s) and data 318. Also, computing device 300 includes processor(s) 320 with register(s) 322, a removable storage 324 and non-removable storage 326, input device(s) 328, output device(s) 330 and communication connections 332 for communicating with other computing devices 334.


In various embodiments, memory 302 is volatile (such as RAM), nonvolatile (such as ROM, flash memory, etc.) or some combination of the two. Memory 302 may be an example of memory 120, which is described above in detail with respect to FIG. 1. The security agent 304 may be an example of security agent 108, which is described above in detail with respect to FIG. 1. As described herein, the security agent 304 may register with the operating system 314 as a hardware device driver that includes an interrupt handler 305 for handling interrupts generated by the security agent component 306. The interrupt handler 305 may be an example of the interrupt handler 176, which is described above in detail with respect to FIG. 1. The security agent component 306 may be an example of security agent component 110, which is described above in detail with respect to FIG. 1. Page tables 308 may be any sort of page tables, such as page tables mapping virtual addresses to physical addresses of memory pages. Uses of such page tables 308 are described above in detail with respect to FIG. 1 and FIGS. 2a-2b. User credentials 310 may be any sort of user credentials, such as user names and passwords for one or more processes or components. Privilege information 312 may be indications of privileges, such as admin privileges for processes, threads, user accounts, etc. The OS 314 may be any sort of OS, such as the host OS kernel 106 described above in detail with respect to FIG. 1. The processes and data 316 may be any sort of processes and data, such as process 118, which is described above in detail with respect to FIG. 1, or process 208, which is described above in detail with respect to FIGS. 2a-2b. The whitelist component(s) and data 318 may be any sort of components and data, such as whitelist component(s) and data 136, which is described above in detail with respect to FIG. 1b.


In some embodiments, the processor(s) 320 is a central processing unit (CPU), a graphics processing unit (GPU), or both CPU and GPU, or other processing unit or component known in the art. Processor 320 supports hardware-based virtualization (such as Intel™ VT-x) with second level address translation (SLAT). Processor(s) 320 may be an example of processor 124, which is described above in detail with respect to FIGS. 1a-1d. Also, processor 320 may include one or more processor register(s) 322. Processor register(s) 322 may be example(s) of either or both of the debug registers 144, which are described above in detail with respect to FIG. 1c, or the control registers 152, which are described above in detail with respect to FIG. 1d.


Computing device 300 also includes additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 3 by removable storage 324 and non-removable storage 326. Non-transitory computer-readable media may include volatile and nonvolatile, removable and non-removable tangible, physical media implemented in technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. System memory 302, removable storage 324 and non-removable storage 326 are all examples of non-transitory computer-readable media. Non-transitory computer-readable media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible, physical medium which can be used to store the desired information and which can be accessed by the computing device 300. Any such non-transitory computer-readable media may be part of the computing device 300.


Computing device 300 also has input device(s) 328, such as a keyboard, a mouse, a touch-sensitive display, voice input device, etc., and output device(s) 330 such as a display, speakers, a printer, etc. These devices are well known in the art and need not be discussed at length here.


Computing device 300 also contains communication connections 332 that allow the computing device 300 to communicate with other computing devices 334, such as device(s) of a remote security service.


Example Processes

The processes described herein are illustrated as logical flow graphs, each operation of which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.



FIG. 4 illustrates an example process for initiating execution of a security agent component as a hypervisor for a computing device, determining memory locations of the computing device to be intercepted, and setting intercepts for the determined memory locations. The process 400 includes, at 402, a security agent on a computing device initiating a security agent component as a hypervisor for the computing device. The initiating may include, at 404, storing processor state settings in a data structure and, at 406, instructing a processor of the computing device to initiate the security agent component as the hypervisor based on the data structure. In some implementations, the security agent may include different routines for different operating systems, each of the different routines fixing as invariant a part of the data structure associated with the respective different operating system.
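The data structure of processor state settings at 404-406 might resemble the following C sketch. The field set and the helper name `fix_invariants` are illustrative assumptions; CR4.VMXE is the architectural VT-x enable bit, but the full guest-state layout is processor-specific and far larger than shown:

```c
#include <assert.h>
#include <stdint.h>

#define CR4_VMXE (1ull << 13)   /* CR4.VMXE: VMX-enable bit */

/*
 * Hypothetical snapshot of processor state captured before launching the
 * security agent component as a hypervisor. Illustrative only; not the
 * full VMCS guest-state area.
 */
typedef struct {
    uint64_t cr0, cr3, cr4;     /* control registers                    */
    uint64_t rip, rsp, rflags;  /* resume point for the host OS         */
} processor_state_t;

/*
 * Fix OS-specific invariants in the saved state, as described for the
 * per-OS routines; here, ensure VMX is enabled in CR4 before the
 * processor is instructed to enter VMX operation.
 */
void fix_invariants(processor_state_t *s)
{
    s->cr4 |= CR4_VMXE;
}
```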


At 408, the security agent may then determine a subset of memory locations in the memory to be intercepted. At 410, the security agent may determine the subset based on a security agent configuration received from a security service.


At 412, the security agent may request that an operating system kernel of the computing device lock page table mappings of the memory locations of the subset of memory locations.


At 414, the security agent may determine instructions to be intercepted and, at 416, the security agent component may set intercepts for the determined instructions. The operations at 414 and 416 may also be performed before the operations shown at 408-412 or concurrently with those operations.


At 418, the security agent component may set intercepts for memory locations of the determined subset of memory locations. At 420, setting the intercepts may include setting privilege attributes for pages which include the memory locations of the determined subset of memory locations, or it may include changing the physical memory location of such pages.


At 422, the security agent may receive an updated security agent configuration and, without rebooting, repeat the determining of the subset of memory locations at 408 and cause the security agent component to repeat the setting of the intercepts at 418.


At 424, the security agent component may remove intercepts corresponding to a process upon termination of the process.



FIG. 5 illustrates an example process for protecting memory locations through privilege attributes of pages while enabling operations on other memory locations associated with those pages. The process 500 includes, at 502, identifying memory locations of a subset of memory locations in memory of the computing device to be intercepted. In some implementations, the identified memory locations include a memory location associated with privileges for a process. In further implementations, the identified memory locations include a memory location associated with user credentials.


At 504, pages of the memory which include the identified memory locations may then be determined.


At 506, privilege attributes of the pages may then be set to prevent specific types of operations from affecting the memory locations. When the identified memory locations include a memory location associated with privileges for a process, the specific types of operations may include write operations and the setting includes setting the privilege attribute for the page including the memory location to a read only value to prevent writes to the memory location. When the identified memory locations include a memory location associated with user credentials, the specific types of operations may include read operations and the setting includes setting the privilege attribute for the page including the memory location to an inaccessible value to prevent reads of the memory location.
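The mapping from the kind of protected data to the page attribute set at 506 can be sketched as follows. The enum and function names are hypothetical:

```c
#include <assert.h>

/* Kind of protected data stored at an identified memory location. */
typedef enum {
    PROT_PROCESS_PRIVILEGES,   /* privilege information for a process */
    PROT_USER_CREDENTIALS      /* e.g., usernames and passwords       */
} protect_kind_t;

/* Simplified page privilege attributes. */
typedef enum {
    ATTR_READ_WRITE,
    ATTR_READ_ONLY,
    ATTR_INACCESSIBLE
} page_attr_t;

/* Choose the page attribute that blocks the operation type of concern. */
page_attr_t attr_for(protect_kind_t kind)
{
    switch (kind) {
    case PROT_PROCESS_PRIVILEGES:
        return ATTR_READ_ONLY;      /* block writes to privilege data */
    case PROT_USER_CREDENTIALS:
        return ATTR_INACCESSIBLE;   /* block reads of credentials     */
    }
    return ATTR_READ_WRITE;         /* default: no restriction        */
}
```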


At 508, an operation affecting another memory location associated with one of the pages which differs from the identified memory location associated with that page may be noted.


At 510, the privilege attribute of the one of the pages may then be temporarily reset to allow the operation.


Before, during, or after the operations shown at 508-510, an operation affecting the identified memory location may, at 512, be noted.


At 514, a process, thread, or module that requested the operation may then be identified.


At 516, responsive to noting the operation at 512, the privilege attribute of the page including the one of the identified memory locations may be temporarily reset to allow the operation. At 518, after temporarily resetting the privilege attribute, activities of the process, thread, or module may be monitored.


At 520, responsive to noting the operation at 512, a false indication of success for the operation may be returned. At 522, returning the false indication of success includes allowing the write operation to an alternate memory location and returning an indication that the write operation was successful. At 524, the read operation may be redirected to be performed on an alternate memory location storing false or deceptive user credentials. At 526, use of the deceptive credentials may then be monitored. In some implementations, redirecting to an alternate memory location may involve copying contents of the page including the identified memory location to a page which includes the alternate memory location storing the false or deceptive user credentials.



FIG. 6 illustrates an example process for determining memory locations to be intercepted and setting privilege attributes for memory pages including those memory locations, including setting a memory page to non-executable. The process 600 includes, at 602, a security agent on a computing device initiating a security agent component as a hypervisor for the computing device. The initiating may include, at 604, storing processor state settings in a data structure and, at 606, instructing a processor of the computing device to initiate the security agent component as the hypervisor based on the data structure.


At 608, the security agent may then identify memory locations in the memory to be intercepted. At 610, the security agent may identify those memory locations based on a security agent configuration received from a security service.


At 612, the security agent component then determines pages of memory that include the identified memory locations and sets privilege attributes for those pages to prevent specific types of access to the memory locations. For example, at 614, the security agent component may set at least one of the pages to non-executable to prevent execution of code stored at an identified memory location included in that page.


At 616, the security agent component may note an operation affecting another memory location associated with one of the pages which differs from an identified memory location associated with that page. For instance, the operation may involve execution of code stored at the other memory location. At 618, the security agent component may then temporarily reset the privilege attribute of the page to allow the execution of the code stored at the other memory location.


Alternatively, or additionally, at 620, the security agent component may return a false indication of success to a process seeking to execute the code stored at an identified memory location.


Alternatively, or additionally, at 622, the security agent component may cause an execute operation intended for the code stored at an identified memory location to be performed on other code stored at an alternate memory location.



FIG. 7 illustrates an example process for determining that a memory location is whitelisted and excluding memory page(s) that include the whitelisted memory location from a set of pages to be intercepted. The process 700 includes, at 702, a security agent on a computing device initiating a security agent component as a hypervisor for the computing device and identifying memory locations in memory of the computing device to be intercepted.


At 704, the security agent component determines a set of pages of the memory, each of the pages including at least one of the identified memory locations.


At 706, the security agent component then determines that one of the memory locations is a whitelisted memory location. At 708, the determining may include consulting a component or list identifying different memory regions of the memory and indicating which memory regions are whitelisted. At 710, the determining may additionally or alternatively include receiving the list of different memory regions that are whitelisted from a security service. At 712, the determining may additionally or alternatively include building a list of whitelisted memory locations based on memory accesses of a known safe component.
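The whitelist consultation at 706-712 might look like the following sketch. The `region_t` layout and the half-open interval convention are assumptions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* A whitelisted memory region, half-open: [start, end). */
typedef struct {
    uintptr_t start;
    uintptr_t end;
} region_t;

/*
 * Check an identified memory location against the list of whitelisted
 * regions; pages containing whitelisted locations are excluded from the
 * set of pages to be intercepted.
 */
bool is_whitelisted(uintptr_t addr, const region_t *regions, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (addr >= regions[i].start && addr < regions[i].end)
            return true;
    return false;
}
```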


At 714, the security agent component then excludes one or more pages that include a whitelisted memory location from the set of pages.


At 716, the security agent component then sets privilege attributes for remaining pages of the set of pages to prevent specific types of operations from affecting memory locations included in those pages.



FIG. 8 illustrates an example process for intercepting accesses of debug registers and, when such accesses are from the operating system, responding with operating-system-permitted values. The process 800 includes, at 802, a security agent on a computing device storing memory addresses in debug registers. At 804, such memory addresses may include memory addresses not permitted in a debug register by the operating system of the computing device. Such a memory address not permitted by the operating system to be stored in the debug registers may, for example, be associated with a kernel-mode component. Further, the security agent may initiate a security agent component as a hypervisor for the computing device.


At 806, the security agent component sets intercepts for the debug registers. At 808, setting the intercepts may include setting intercepts for instructions seeking to read the debug registers.


At 810, the security agent component notes a read operation from the operating system attempting to read one of the debug registers. At 812, in response to noting the read operation, the security agent component returns an operating-system-permitted value for the one of the debug registers to the operating system.
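The shadowing behavior at 810-812 can be modeled with a minimal sketch: the hypervisor keeps both the real (agent-set) debug register value and an OS-permitted value, and reads from the operating system see only the latter. Field and function names are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Shadowed debug register: 'real' holds the agent-set address (which may be
 * one the OS would not permit, e.g. a kernel-mode address); 'os_permitted'
 * holds the value the operating system expects to observe.
 */
typedef struct {
    uint64_t real;
    uint64_t os_permitted;
} shadow_dr_t;

/*
 * Intercept handler for a read of the debug register: report the
 * OS-permitted value so the agent's watchpoint stays hidden.
 */
uint64_t dr_read_intercept(const shadow_dr_t *dr)
{
    return dr->os_permitted;
}
```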


At 814, the security agent component may note a read operation from malware attempting to read one of the memory addresses stored in a corresponding one of the debug registers and, at 816, in response, return a false value to the malware or perform a security action.


At 818, the security agent component may further set intercepts for the memory addresses stored in the debug registers. At 820, the setting may include setting privilege attributes for memory pages which include the memory addresses.


At 822, the security agent may further identify other memory locations to be intercepted, and the security agent component may set privilege attributes for memory pages that include the other memory locations.



FIG. 9 illustrates an example process for intercepting instructions for accessing control registers. The process 900 includes, at 902, a security agent on a computing device initiating a security agent component as a hypervisor for the computing device.


At 904, the security agent then determines control registers of the computing device to be protected. Such control registers may include, at 906, a control register storing an on setting for a security feature of the computing device.


At 908, the security agent component then sets intercepts for instructions for performing write operations on the control registers.


At 910, the security agent component further notes a write operation seeking to write an off value to the control register storing the on setting for the security feature. At 912, the security agent component responds to the write operation with a false indication that the security feature has been set to the off value.
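The intercept behavior at 910-912 can be sketched as follows. CR4.SMEP is used here as an example "on" setting for a security feature; the handler shape and names are assumptions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define CR4_SMEP (1ull << 20)   /* example security-feature bit in CR4 */

/* Minimal model of intercepted guest register state. */
typedef struct {
    uint64_t cr4;
} guest_regs_t;

/*
 * Intercept a guest write to CR4: silently preserve the security feature's
 * 'on' bit regardless of the requested value, and report success so the
 * writer believes the feature was turned off.
 */
bool cr4_write_intercept(guest_regs_t *g, uint64_t requested)
{
    g->cr4 = requested | CR4_SMEP;   /* keep the feature enabled        */
    return true;                     /* false indication: write "worked" */
}
```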



FIG. 10 illustrates an example process for intercepting instructions for reading a value of a processor configuration register, such as a Model Specific Register (MSR), to receive system calls at a security agent. The process 1000 includes, at 1002, a security agent on a computing device initiating a security agent component as a hypervisor for the computing device.


At 1004, the security agent component 110 further changes a value of a MSR 160 of a processor 124 of the computing device from an original MSR value to an updated MSR value. The updated MSR value, when read by the processor 124, causes system calls to be received by the security agent 108.


At 1006, the security agent component 110 further sets an intercept for instructions for performing read operations on the MSR 160. This may include setting an intercept for instructions for performing MSR read operations from processes, threads, or components that are different from the processor 124.


At 1008, after setting the intercept at 1006, the security agent component 110 may note a read operation from a process, a thread, or a component that is different from the processor 124 and is attempting to read the value of the MSR 160. The component attempting to read the value of the MSR 160 may be an operating system component (e.g., the host OS kernel 106), or a different component (e.g., an antivirus driver).


At 1010, in response to noting the read operation at 1008, the security agent component 110 further returns the original MSR value to the process, thread, or component attempting to read the value of the MSR 160. This original MSR value is returned at 1010 notwithstanding that the actual value of the MSR 160 has been changed to the updated MSR value at 1004.
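The split between the value the processor dispatches through and the value software reads back (1004-1010) can be modeled as a shadowed MSR; field, enum, and function names are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Shadowed processor configuration register: 'updated' points to the
 * security agent's system-call handler and is what the processor uses
 * during system-call dispatch; 'original' is what non-processor readers
 * (OS components, other drivers) are shown.
 */
typedef struct {
    uint64_t original;
    uint64_t updated;
} shadow_msr_t;

typedef enum {
    READER_CPU_SYSCALL,   /* the processor, as part of a system call */
    READER_SOFTWARE       /* any process, thread, or component       */
} reader_t;

uint64_t msr_read(const shadow_msr_t *m, reader_t who)
{
    return (who == READER_CPU_SYSCALL) ? m->updated : m->original;
}
```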


At 1012, the processor 124 may read the updated MSR value as part of a system call procedure. This may occur in response to an application executing in user mode of the computing device invoking a process 118 to request access to a system resource, such as to open a file, allocate memory, or the like.


At 1014, because the updated MSR value points to the security agent 108, the security agent 108 receives a system call.


At 1016, the security agent 108, in response to receiving the system call at 1014, further identifies a process, thread, or component associated with the system call. For example, a process 118 that was invoked by a user-level application may be identified at 1016.


At 1018, the security agent 108 further monitors events on the computing device that result from execution of the identified process, thread, or component associated with the system call. This may allow the security agent 108 (perhaps in conjunction with a remote security service) to detect malware and respond with a remedial action to counter the detected malware.


It is to be appreciated that, although FIG. 10 is described with reference to a MSR, the process 1000 can be implemented with any type of processor configuration register, as mentioned elsewhere in this disclosure.



FIG. 11 illustrates an example process for using a system call received by a security agent as a trigger for initiating a security action on data associated with executing user-mode processes. The process 1100 includes, at 1102, the security agent 108 observing process creation events associated with the user-level processes, such as the processes 118 shown in FIG. 1. In this manner, the security agent 108 may keep track of processes 118 executing in the user mode 104 of the computing device by detecting when they are created (and when they are destroyed). Thus, in some implementations, the security agent 108 may also observe process termination events.


At 1104, the security agent component 110 may change a value of the MSR 160 of the processor 124 from an original MSR value to an updated MSR value that points to the security agent 108, as described herein.


At 1106, after changing the value of the MSR 160 at 1104, the security agent 108 monitors for receipt of a system call. If no system call is received, the process 1100 iterates at 1106 until a system call is received. When a system call is received, the process 1100 follows the “yes” route from block 1106 to 1108 where the security agent 108 initiates a security action on data that is identified as being associated with one or more user-level processes 118 executing in the user mode 104 of the computing device. This may include, at 1108, the security agent 108 identifying such data on which the security action is to be initiated. The identification of the data at 1108 may be based on the process creation events observed at 1102. For example, the security agent 108 can look up the currently executing processes 118 and identify data associated therewith. As shown at 1110, the identified data may include kernel-level tokens 174 associated with the currently-executing user-level processes 118 or threads. At 1112, initiating the security action on the data may include the security agent 108 determining whether values of any of the kernel-level tokens 174 have changed from an original token value to an updated token value, such as an updated token value that allows a corresponding user-level process 118 or thread to execute with greater privilege (e.g., an Admin privilege) than a privilege allowed by the original token value.


At 1114, the security agent 108 may take a remedial action in response to determining that the original token value of a token 174 has changed to an updated token value. The remedial action taken at 1114 may be to terminate or suspend the corresponding user-level process 118 or thread, or to monitor events on the computing device that result from the corresponding user-level process 118 or thread executing in the user mode of the computing device, as shown at 1116. Alternatively, the remedial action taken at 1114 may be to restore the updated token value of the changed kernel-level token 174 to its original token value, as shown at 1118. In some embodiments, such as in response to determining that it is not possible to restore the updated token value to its original token value, the remedial action taken at 1114 may include restoring the updated token value of the changed kernel-level token 174 to a "safe" or "inert" value, or to some value, other than the original value, that reflects the original state.


It is to be appreciated that, although FIG. 11 is described with reference to an MSR, the process 1100 can be implemented with any type of processor configuration register, as mentioned elsewhere in this disclosure.
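The token audit at blocks 1108–1114 can be sketched, for illustration only, as a comparison of each tracked process's kernel-level token against the value recorded when the process was created. The names here (TokenRecord, audit_tokens) are hypothetical.

```python
# Sketch of the security action of process 1100: on an intercepted system
# call, compare kernel-level token values against the originals captured
# via the process creation events observed at 1102, and flag escalations.

from dataclasses import dataclass

@dataclass
class TokenRecord:
    pid: int
    original_token: int   # captured at process creation (1102)
    current_token: int    # read from kernel memory at audit time (1112)

def audit_tokens(records):
    """Return pids whose token changed, i.e. candidates for the remedial
    action taken at 1114 (terminate, suspend, monitor, or restore)."""
    return [r.pid for r in records if r.current_token != r.original_token]

records = [
    TokenRecord(pid=100, original_token=0x1000, current_token=0x1000),
    TokenRecord(pid=200, original_token=0x1000, current_token=0x0FFF),
]
assert audit_tokens(records) == [200]  # pid 200's token was changed
```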



FIG. 12 illustrates an example process for improving computational performance of a system by offloading task execution from a security agent component in a hypervisor to a security agent in kernel-mode, the task(s) relating to an operation affecting a page with a protected memory location(s). The process 1200 includes, at 1202, a security agent 108 of a computing device registering with an operating system of the computing device as a hardware device driver that includes an interrupt handler 176 for handling interrupts generated by the security agent component 110. The security agent 108 executes in the kernel mode 102 of the computing device.


At 1204, the security agent 108 further initiates a security agent component 110 as a hypervisor that is configured to act as a hardware device of the computing device by generating interrupts.


At 1206, memory locations in memory of the computing device that are to be intercepted are identified. In some implementations, the identified memory locations include a memory location associated with privileges for a process. In further implementations, the identified memory locations include a memory location associated with user credentials.


At 1208, pages of the memory which include the identified memory locations may then be determined.


At 1210, privilege attributes of the pages may then be set by the security agent component 110 to prevent specific types of operations from affecting the memory locations. When the identified memory locations include a memory location associated with privileges for a process, the specific types of operations may include write operations and the setting includes setting the privilege attribute for the page including the memory location to a read only value to prevent writes to the memory location. When the identified memory locations include a memory location associated with user credentials, the specific types of operations may include read operations and the setting includes setting the privilege attribute for the page including the memory location to an inaccessible value to prevent reads of the memory location.
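Blocks 1206–1210 can be sketched as follows, under the assumption of a simple flat page model; the page size, kind labels, and attribute names are illustrative, not taken from the disclosure.

```python
# Sketch of blocks 1206-1210: map each protected memory location to its
# containing page (1208) and derive a per-page privilege attribute from
# the kind of data being protected (1210).

PAGE_SIZE = 4096  # assumed page size

def page_of(address):
    # Block 1208: determine the page which includes the memory location.
    return address // PAGE_SIZE

def set_privilege_attributes(protected_locations):
    """Block 1210: privilege-related data gets read-only pages (block
    writes); credential data gets inaccessible pages (block reads too)."""
    attrs = {}
    for address, kind in protected_locations:
        if kind == "process_privileges":
            attrs[page_of(address)] = "read_only"
        elif kind == "user_credentials":
            attrs[page_of(address)] = "inaccessible"
    return attrs

attrs = set_privilege_attributes([
    (0x70001008, "process_privileges"),
    (0x70003010, "user_credentials"),
])
assert attrs[page_of(0x70001008)] == "read_only"
assert attrs[page_of(0x70003010)] == "inaccessible"
```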


At 1212, an operation affecting a page of memory may be noted. The affected page may include a protected memory location(s) identified at 1206, and the operation may affect that memory location(s). The operation can include any of the example types of operations described herein, such as a read operation to read user credentials stored at the protected memory location.


At 1214, a determination may be made as to whether execution of one or more tasks relating to the operation affecting the page of memory is to be offloaded to the security agent 108 executing as a kernel-level component. The tasks relating to the operation affecting the page of memory can vary, as described herein, such as a task of determining whether the page corresponds to any of the pages that had the privilege attributes adjusted/set by the security agent component 110, determining a current address and/or offsets of the pages, and/or determining whether to temporarily reset the privilege attribute of the affected page. In some implementations, the determination at 1214 may include determining a computational cost of executing the task(s), and determining whether the computational cost of executing the task(s) meets or exceeds a threshold computational cost. If the security agent component 110 determines, at 1214, not to offload the task(s) to the security agent 108 (e.g., because the computational cost of executing the task(s) is less than the threshold computational cost), the process 1200 may follow the "no" route from block 1214 to block 1216 where the security agent component 110 refrains from offloading the task(s), and executes the task(s) relating to the noted operation as a hypervisor.


If, at 1214, the security agent component 110 determines to offload the task(s) to the security agent 108 (e.g., because the computational cost of executing the task(s) meets or exceeds the threshold computational cost), the process 1200 may follow the "yes" route from block 1214 to block 1218 where the security agent component 110 generates an interrupt.
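The cost-based branch at block 1214 can be sketched as a simple threshold test. The per-task cost values and the threshold below are illustrative assumptions, not values from the disclosure.

```python
# Sketch of the offload decision at 1214: sum an estimated computational
# cost for the pending task(s) and take the "yes" route (offload, 1218)
# only when that cost meets or exceeds a threshold.

COST_THRESHOLD = 100  # assumed units of computational cost

TASK_COSTS = {
    "check_page_protected": 10,
    "resolve_page_addresses": 120,  # e.g. resolving paged-out mappings
    "decide_attribute_reset": 30,
}

def should_offload(tasks, threshold=COST_THRESHOLD):
    """True => offload to the kernel-level security agent (block 1218);
    False => the hypervisor executes the task(s) itself (block 1216)."""
    return sum(TASK_COSTS[t] for t in tasks) >= threshold

assert should_offload(["resolve_page_addresses"])    # 120 >= 100
assert not should_offload(["check_page_protected"])  # 10 < 100
```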


At 1220, the security agent component 110 may further execute an instruction (e.g., an “interrupt_exiting” flag instruction) to place the interrupt in a queue as a pending interrupt, whereby the operating system may receive the pending interrupt upon a component of the operating system executing a different instruction to receive pending interrupts.


At 1222, the operating system may execute an instruction to receive pending interrupts. For example, the operating system may receive the pending interrupt after executing an STI instruction at 1222.


At 1224, the security agent 108 may execute one or more of the tasks relating to the noted operation that are designated as tasks to offload to the security agent 108. This may include, at 1226, the security agent component 110 communicating the task(s) to the security agent 108 after generating the interrupt at 1218.


At 1228, the security agent 108 may return from the interrupt to allow the security agent component 110 to proceed, such as by the security agent component 110 executing remaining tasks at 1216. This can include the security agent component 110 temporarily resetting the privilege attribute of the affected page in the instance where the security agent 108 determines that the privilege attribute of the affected page is to be temporarily reset. As shown in FIG. 12, the security agent component 110 may further continue noting operations affecting individual pages of memory at block 1212, which may cause the process 1200 to iterate over the remaining blocks after 1212 when a new operation is noted.
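The interrupt-based hand-off of blocks 1218–1228 can be simulated, for illustration, with a pending-interrupt queue: the hypervisor component queues an interrupt carrying the offloaded task(s), the operating system drains the queue (analogous to executing STI at 1222), and the kernel-mode agent's handler runs the tasks and returns. All names here are hypothetical.

```python
# Simulated control flow for blocks 1218-1228 of process 1200.

from collections import deque

pending_interrupts = deque()

def hypervisor_offload(tasks):
    # Blocks 1218-1220: generate an interrupt and place it in a queue as
    # a pending interrupt, together with the task(s) communicated at 1226.
    pending_interrupts.append(tasks)

def os_receive_pending():
    # Block 1222: the operating system receives pending interrupts.
    results = []
    while pending_interrupts:
        tasks = pending_interrupts.popleft()
        # Block 1224: the kernel-mode security agent executes the task(s).
        results.extend(task() for task in tasks)
    # Block 1228: returning resumes the hypervisor-level component.
    return results

hypervisor_offload([lambda: "page_is_protected", lambda: "reset_attribute"])
assert os_receive_pending() == ["page_is_protected", "reset_attribute"]
assert not pending_interrupts  # queue drained; hypervisor proceeds
```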



FIG. 13 illustrates an example process for switching between hypervisor and kernel-mode components to perform tasks relating to an operation affecting a page with a protected memory location(s). The process 1300 shows blocks 1300A (1300(A)(1) and 1300(A)(2)) that include tasks that may be performed by the security agent 108 when the security agent component 110 generates an interrupt to offload those tasks to the security agent. The process 1300 also shows block 1300B that includes a task that may be performed by the security agent component 110, such as when the security agent 108 returns from the interrupt.


At 1302, the security agent 108, in response to an interrupt generated by the security agent component 110, may determine that a page of memory affected by a noted operation corresponds to a page that had its privilege attribute set by the security agent component 110. This may involve, at 1304, the security agent 108 determining current addresses and/or offsets of pages of memory. For instance, pages may be paged in or out, and offsets may change over time after the privilege attribute was initially adjusted/set by the security agent component 110.


At 1306, the security agent 108 may further determine whether to temporarily reset the privilege attribute of the affected page or not. This may involve, at 1308, determining that the noted operation affects a memory location that is different from a protected memory location(s) on the affected page, or, at 1310, determining that the noted operation affects a protected memory location on the page, and identifying the operation requestor at 1312.


At 1314, after returning from an interrupt, the security agent component 110 may temporarily reset a privilege attribute of the affected page. As shown, this may occur in response to the security agent 108 determining that the noted operation affects a non-protected memory location, and the operation is to be allowed to proceed since it does not affect a protected memory location. Alternatively, the temporary resetting of the privilege attribute of the page at 1314 may occur in response to the security agent 108 identifying an operation requestor at 1312 when the noted operation affects a protected memory location. Following such a determination, the privilege attribute of the page may be reset at 1314, followed by another interrupt to offload execution of a task of monitoring activities of the operation requestor at 1316 by the security agent 108.


Alternatively, at 1318, following identification of an operation requestor when the noted operation affects a protected memory location, the security agent 108 may not return from the interrupt and may instead proceed to perform tasks such as returning a false indication of success to the operation requestor at 1318. This may involve allowing a write operation to an alternate location and indicating a success at 1320.


At 1322, the security agent 108 may redirect the noted operation to an alternative location storing false or deceptive credentials, and, at 1324, the security agent 108 may monitor use of the false or deceptive credentials.


The process 1300 merely illustrates example tasks that can be performed by the security agent 108 and the security agent component 110 in the kernel mode 102 and at the hypervisor level 112, respectively, and it is to be appreciated that other tasks may be performed by each component, and/or the tasks shown in FIG. 13 may be performed by the other entity, in some instances. This may depend on the computational cost of executing the task(s) by the security agent component 110 as a hypervisor, and the computational cost that may be saved by offloading the task(s) to the security agent 108 executing as a kernel-level component. The transition from an operation in 1300(A)(1) to an operation in 1300(B) may be enabled by the security agent component 110 generating an interrupt, as described, and the transition from an operation in 1300(B) to an operation in 1300(A)(2) may be enabled by returning from the interrupt, as described.
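The branching of process 1300 can be condensed, for illustration, into a single decision function: allow the operation when it misses the protected location(s), reset the attribute and monitor the requestor when it hits one, or redirect it to deceptive credentials. The function name and outcome labels below are illustrative shorthand, not terms from the disclosure.

```python
# Sketch of the FIG. 13 decision among blocks 1308, 1310-1316, and
# 1318-1324 for an operation noted against a protected page.

def handle_page_operation(op_address, protected_locations, deceive=False):
    if op_address not in protected_locations:
        # Block 1308: the operation affects a different, non-protected
        # location on the page; block 1314: reset the attribute so the
        # operation can proceed.
        return "reset_and_allow"
    # Blocks 1310-1312: the operation affects a protected location and
    # the operation requestor is identified.
    if deceive:
        # Blocks 1318-1324: return a false indication of success and/or
        # redirect to false or deceptive credentials, then monitor use.
        return "redirect_to_deceptive_credentials"
    # Blocks 1314-1316: reset the attribute, then monitor the requestor.
    return "reset_and_monitor_requestor"

protected = {0x70003010}
assert handle_page_operation(0x70003000, protected) == "reset_and_allow"
assert handle_page_operation(0x70003010, protected) == \
    "reset_and_monitor_requestor"
assert handle_page_operation(0x70003010, protected, deceive=True) == \
    "redirect_to_deceptive_credentials"
```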


CONCLUSION

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.

Claims
  • 1. A computer-implemented method comprising: initiating, by a security agent, a security agent component in a hypervisor of a computing device;changing, by the security agent component executing in the hypervisor of the computing device, a value of a processor configuration register of a processor of the computing device from an original value to an updated value, the original value permitting system calls by a process, a thread, or a component different from the processor to perform an access operation of the processor configuration register and the updated value causing the system calls associated with performing the access operation of the processor configuration register to be intercepted and received by the security agent executing in kernel mode of the computing device;receiving a system call of the system calls associated with performing the access operation of the processor configuration register from the process, the thread, or the component different from the processor by the security agent after the changing of the value of the processor configuration register;in response to receiving the system call by the security agent: initiating, by the security agent, a security action on the process, the thread, or the component, andinitiating, by the security agent, a security action on data that is associated with user-level processes or threads executing in a user mode of the computing device, the data including kernel-level tokens associated with the user-level processes or the threads,
  • 2. The computer-implemented method of claim 1, further comprising: setting, by the security agent component, an intercept for instructions for performing read operations on the processor configuration register;noting, by the security agent component and after the setting of the intercept, a read operation from a process, a thread, or a component that is different from the processor and is attempting to read the value of the processor configuration register; andin response to the noting of the read operation, returning, by the security agent component, the original value to the process, the thread, or the component that is different from the processor.
  • 3. The computer-implemented method of claim 1, further comprising: observing, by the security agent, process creation events associated with the user-level processes or thread creation events associated with the threads; andidentifying the data on which the security action is to be initiated based at least in part on the observing of the process creation events or the thread creation events.
  • 4. The computer-implemented method of claim 1, wherein the updated token value of a changed kernel-level token allows a corresponding user-level process or a corresponding thread to execute with greater privilege than a privilege allowed by the original token value.
  • 5. The computer-implemented method of claim 4, further comprising at least one of: restoring the updated token value of the changed kernel-level token to the original token value or to another token value that is safe, inert, and/or reflects an original state of the changed kernel-level token;terminating the corresponding user-level process or the corresponding thread; suspending the corresponding user-level process or the corresponding thread; ormonitoring events on the computing device that result from the corresponding user-level process or the corresponding thread executing in the user mode of the computing device.
  • 6. The computer-implemented method of claim 1, further comprising: receiving, after the changing of the value of the processor configuration register and based on the security agent component determining that an entity performing the access operation is an operating system of the computing device, the original value of the processor configuration register by the operating system of the computing device from the security agent component.
  • 7. A system comprising: a processor;a security agent configured to be operated by the processor to execute in kernel mode of the system;a security agent component; anda non-transitory computer-readable medium storing computer-readable instructions that, when executed, cause the processor to perform operations comprising: initiating, by the security agent, the security agent component in a hypervisor of the system;changing, by the security agent component, a value of a processor configuration register of the processor from an original value to an updated value, the original value permitting system calls by a process, a thread, or a component different from the processor to perform an access operation of the processor configuration register and the updated value causing the system calls associated with performing the access operation to be intercepted and received by the security agent executing in a kernel mode of the system;receiving a system call associated with performing the access operation of the processor configuration register from the process, the thread, or the component different from the processor by the security agent after the changing of the value of the processor configuration register;in response to receiving the system call by the security agent: initiating, by the security agent, a security action on the process, the thread, or the component, andinitiating, by the security agent, a security action on data that is associated with user-level processes or threads executing in a user mode of the system, the data including kernel-level tokens associated with the user-level processes or the threads and the security action determines whether values of any of the kernel-level tokens have changed from an original token value to an updated token value,wherein the updated token value allows a corresponding user-level process or a corresponding thread to execute with greater privilege than a privilege allowed by the original token value; 
andexecuting, based on determining that a kernel-level token of the kernel-level tokens has changed to the updated token value, at least one of:restoring the updated token value of the changed kernel-level token to the original token value or to another token value that is safe, inert, and/or reflects an original state of the changed kernel-level token;terminating the corresponding user-level process or the corresponding thread;suspending the corresponding user-level process or the corresponding thread; ormonitoring events on the system that result from the corresponding user-level process or the corresponding thread executing in the user mode of the system.
  • 8. The system of claim 7, wherein the security agent component is further configured to: set an intercept for instructions for performing read operations on the processor configuration register;after the setting of the intercept, note read operations from an operating system component of an operating system attempting to read the value of the processor configuration register; andin response to noting the read operations, return the original value to the operating system component.
  • 9. The system of claim 7, wherein the security agent is further configured to: in response to receiving a system call of the system calls, identify a process, a thread, or a component associated with the system call; andmonitor events on the system that result from execution of the process, the thread, or the component associated with the system call.
  • 10. The system of claim 7, wherein the security agent is further configured to: observe process creation events associated with the user-level processes or thread creation events associated with the threads; andidentify the data on which the security action is to be initiated based at least in part on the observing of the process creation events or the thread creation events.
  • 11. A non-transitory computer-readable medium storing computer-readable instructions that, when executed, cause one or more processors to perform operations comprising: initiating, by a security agent, a security agent component in a hypervisor of a computing device;changing, by the security agent component executing in the hypervisor of the computing device, a value of a processor configuration register of a processor of the computing device from an original value to an updated value, the original value permitting system calls by a process, a thread, or a component different from the processor to perform an access operation of the processor configuration register and the updated value causing the system calls associated with performing the access operation of the processor configuration register to be intercepted and received by the security agent executing in kernel mode of the computing device;receiving a system call of the system calls associated with performing the access operation of the processor configuration register from the process, the thread, or the component different from the processor by the security agent after the changing of the value of the processor configuration register; andin response to receiving the system call by the security agent: initiating, by the security agent, a security action on the process, the thread, or the component, andinitiating, by the security agent, a security action on data that is associated with user-level processes or threads executing in a user mode of the computing device, the data including kernel-level tokens associated with the user-level processes or the threads,
  • 12. The non-transitory computer-readable medium of claim 11, the operations further comprising: setting, by the security agent component, an intercept for instructions for performing read operations on the processor configuration register;noting, by the security agent component and after the setting of the intercept, a read operation from the process, the thread, or the component that is different from the processor and is attempting to read the value of the processor configuration register; andin response to the noting of the read operation, returning, by the security agent component, the original value to the process, the thread, or the component that is different from the processor.
  • 13. The non-transitory computer-readable medium of claim 11, the operations further comprising: observing, by the security agent, process creation events associated with the user-level processes or thread creation events associated with the threads; andidentifying the data on which the security action is to be initiated based at least in part on the observing of the process creation events or the thread creation events.
  • 14. The non-transitory computer-readable medium of claim 11, wherein the updated token value of a changed kernel-level token allows a corresponding user-level process or a corresponding thread to execute with greater privilege than a privilege allowed by the original token value.
  • 15. The non-transitory computer-readable medium of claim 14, the operations further comprising at least one of: restoring the updated token value of the changed kernel-level token to the original token value or to another token value that is safe, inert, and/or reflects an original state of the changed kernel-level token;terminating the corresponding user-level process or the corresponding thread; suspending the corresponding user-level process or the corresponding thread; ormonitoring events on the computing device that result from the corresponding user-level process or the corresponding thread executing in the user mode of the computing device.
  • 16. The non-transitory computer-readable medium of claim 11, wherein the processor configuration register is a Model Specific Register (MSR) of the processor.
RELATED APPLICATIONS

This patent application is a continuation-in-part of U.S. Non-provisional patent application Ser. No. 17/060,355, filed on Oct. 1, 2020, which is a continuation-in-part of U.S. Non-provisional patent application Ser. No. 15/063,086, filed on Mar. 7, 2016, the entireties of which are hereby incorporated by reference.

20080244155 Lee Oct 2008 A1
20080244206 Heo Oct 2008 A1
20080250222 Gokhale Oct 2008 A1
20080256336 Henry Oct 2008 A1
20080271014 Serebrin Oct 2008 A1
20080313647 Klein Dec 2008 A1
20080313656 Klein Dec 2008 A1
20090007100 Field Jan 2009 A1
20090013406 Cabuk Jan 2009 A1
20090037682 Armstrong Feb 2009 A1
20090037908 Armstrong Feb 2009 A1
20090037936 Serebrin Feb 2009 A1
20090037941 Armstrong Feb 2009 A1
20090044274 Budko Feb 2009 A1
20090063835 Yao Mar 2009 A1
20090070760 Khatri Mar 2009 A1
20090083522 Boggs Mar 2009 A1
20090113425 Ports Apr 2009 A1
20090119087 Ang May 2009 A1
20090119748 Yao May 2009 A1
20090133016 Brown May 2009 A1
20090138625 Lee May 2009 A1
20090138729 Hashimoto May 2009 A1
20090164709 Lee Jun 2009 A1
20090165132 Jain Jun 2009 A1
20090172328 Sahita Jul 2009 A1
20090198862 Okitsu Aug 2009 A1
20090204965 Tanaka Aug 2009 A1
20090204978 Lee Aug 2009 A1
20090210888 Lee Aug 2009 A1
20090216984 Gainey, Jr. Aug 2009 A1
20090217098 Farrell Aug 2009 A1
20090217264 Heller Aug 2009 A1
20090228673 Waters Sep 2009 A1
20090249053 Zimmer et al. Oct 2009 A1
20090254990 McGee Oct 2009 A1
20090259875 Check et al. Oct 2009 A1
20090288167 Freericks Nov 2009 A1
20090293057 Larkin Nov 2009 A1
20090313445 Pandey Dec 2009 A1
20090320042 Thelen Dec 2009 A1
20090320129 Pan Dec 2009 A1
20090327576 Oshins Dec 2009 A1
20090328035 Ganguly Dec 2009 A1
20090328058 Papaefstathiou Dec 2009 A1
20090328074 Oshins Dec 2009 A1
20100023666 Mansell Jan 2010 A1
20100031325 Maigne et al. Feb 2010 A1
20100031360 Seshadri Feb 2010 A1
20100064117 Henry Mar 2010 A1
20100083261 Jayamohan Apr 2010 A1
20100083275 Jayamohan Apr 2010 A1
20100088771 Heller et al. Apr 2010 A1
20100094613 Biles et al. Apr 2010 A1
20100122250 Challener May 2010 A1
20100131729 Fulcheri May 2010 A1
20100131966 Coleman May 2010 A1
20100138843 Freericks Jun 2010 A1
20100161875 Chang et al. Jun 2010 A1
20100161976 Bacher Jun 2010 A1
20100161978 Bacher Jun 2010 A1
20100169948 Budko Jul 2010 A1
20100169968 Shanbhogue Jul 2010 A1
20100191889 Serebrin Jul 2010 A1
20100223447 Serebrin Sep 2010 A1
20100223612 Osisek et al. Sep 2010 A1
20100235645 Henry Sep 2010 A1
20100241734 Miyajima Sep 2010 A1
20100242039 Noguchi Sep 2010 A1
20100250230 Ganguly Sep 2010 A1
20100250824 Belay Sep 2010 A1
20100250869 Adams Sep 2010 A1
20100250895 Adams et al. Sep 2010 A1
20100251235 Ganguly Sep 2010 A1
20100262722 Vauthier et al. Oct 2010 A1
20100262743 Zimmer Oct 2010 A1
20100281273 Lee et al. Nov 2010 A1
20100299665 Adams Nov 2010 A1
20100313201 Warton et al. Dec 2010 A1
20100318997 Li Dec 2010 A1
20110010533 Buford Jan 2011 A1
20110016508 Wallace Jan 2011 A1
20110055469 Natu Mar 2011 A1
20110082962 Horovitz Apr 2011 A1
20110099627 Proudler Apr 2011 A1
20110102443 Dror May 2011 A1
20110107007 van Riel May 2011 A1
20110107331 Evans May 2011 A1
20110107339 Cabrera May 2011 A1
20110122884 Tsirkin May 2011 A1
20110126205 Gaist May 2011 A1
20110126217 Gaist May 2011 A1
20110141124 Halls Jun 2011 A1
20110145552 Yamada Jun 2011 A1
20110153909 Dong Jun 2011 A1
20110153926 Fang Jun 2011 A1
20110154010 Springfield Jun 2011 A1
20110154133 Ganti Jun 2011 A1
20110167195 Scales Jul 2011 A1
20110167422 Eom Jul 2011 A1
20110167434 Gaist Jul 2011 A1
20110173643 Nicolson Jul 2011 A1
20110179417 Inakoshi Jul 2011 A1
20110197004 Serebrin Aug 2011 A1
20110197190 Hattori Aug 2011 A1
20110202917 Laor Aug 2011 A1
20110202919 Hayakawa Aug 2011 A1
20110219208 Asaad Sep 2011 A1
20110219447 Horovitz Sep 2011 A1
20110225458 Zuo Sep 2011 A1
20110225624 Sawhney Sep 2011 A1
20110231839 Bennett Sep 2011 A1
20110239213 Aswani Sep 2011 A1
20110239306 Avni Sep 2011 A1
20110246171 Cleeton Oct 2011 A1
20110246986 Nicholas Oct 2011 A1
20110270944 Keilhau Nov 2011 A1
20110295984 Kunze Dec 2011 A1
20110302577 Reuther Dec 2011 A1
20110307681 Piry Dec 2011 A1
20110307888 Raj Dec 2011 A1
20110320652 Craddock Dec 2011 A1
20110320682 Mcdougall Dec 2011 A1
20110320772 Craddock Dec 2011 A1
20110320823 Saroiu Dec 2011 A1
20110321158 Craddock Dec 2011 A1
20120030518 Rajwar Feb 2012 A1
20120033673 Goel Feb 2012 A1
20120042034 Goggin Feb 2012 A1
20120042145 Sehr Feb 2012 A1
20120047313 Sinha et al. Feb 2012 A1
20120047369 Henry Feb 2012 A1
20120054740 Chakraborty Mar 2012 A1
20120054877 Rosu Mar 2012 A1
20120066681 Levy Mar 2012 A1
20120072638 Grubb Mar 2012 A1
20120075314 Malakapalli Mar 2012 A1
20120079164 Hakewill Mar 2012 A1
20120079479 Hakewill Mar 2012 A1
20120084487 Barde Apr 2012 A1
20120084777 Jayamohan Apr 2012 A1
20120102334 O'Loughlin Apr 2012 A1
20120110237 Li May 2012 A1
20120131309 Johnson et al. May 2012 A1
20120131575 Yehuda May 2012 A1
20120144489 Jarrett Jun 2012 A1
20120147021 Cheng Jun 2012 A1
20120151117 Tuch Jun 2012 A1
20120151206 Paris Jun 2012 A1
20120166767 Patel Jun 2012 A1
20120173842 Frey Jul 2012 A1
20120179855 Tsirkin Jul 2012 A1
20120185688 Thornton et al. Jul 2012 A1
20120188258 Mccrary Jul 2012 A1
20120194524 Hartog Aug 2012 A1
20120198278 Williams Aug 2012 A1
20120200579 Hartog Aug 2012 A1
20120203890 Reynolds Aug 2012 A1
20120204193 Nethercutt Aug 2012 A1
20120216281 Uner Aug 2012 A1
20120227045 Knauth Sep 2012 A1
20120233434 Starks Sep 2012 A1
20120233608 Toeroe Sep 2012 A1
20120240112 Nishiguchi Sep 2012 A1
20120240181 McCorkendale Sep 2012 A1
20120246641 Gehrmann Sep 2012 A1
20120254982 Sallam Oct 2012 A1
20120254993 Sallam Oct 2012 A1
20120254994 Sallam Oct 2012 A1
20120254995 Sallam Oct 2012 A1
20120254999 Sallam Oct 2012 A1
20120255000 Sallam Oct 2012 A1
20120255001 Sallam Oct 2012 A1
20120255002 Sallam Oct 2012 A1
20120255003 Sallam Oct 2012 A1
20120255004 Sallam Oct 2012 A1
20120255010 Sallam Oct 2012 A1
20120255011 Sallam Oct 2012 A1
20120255012 Sallam Oct 2012 A1
20120255013 Sallam Oct 2012 A1
20120255014 Sallam Oct 2012 A1
20120255016 Sallam Oct 2012 A1
20120255017 Sallam Oct 2012 A1
20120255018 Sallam Oct 2012 A1
20120255021 Sallam Oct 2012 A1
20120255031 Sallam Oct 2012 A1
20120260065 Henry Oct 2012 A1
20120260123 Madampath Oct 2012 A1
20120266252 Spiers Oct 2012 A1
20120278525 Serebrin Nov 2012 A1
20120278800 Nicholas Nov 2012 A1
20120297057 Ghosh Nov 2012 A1
20120311307 Chynoweth Dec 2012 A1
20120311315 Ekberg Dec 2012 A1
20120311580 Emelianov Dec 2012 A1
20120317570 Dalcher Dec 2012 A1
20120317585 Elko Dec 2012 A1
20120324442 Barde Dec 2012 A1
20120331464 Saito Dec 2012 A1
20120331480 Ertugay Dec 2012 A1
20130007469 Aratsu Jan 2013 A1
20130007470 Violleau Jan 2013 A1
20130031291 Edwards Jan 2013 A1
20130031292 Van Riel Jan 2013 A1
20130031293 Van Riel Jan 2013 A1
20130042115 Sweet Feb 2013 A1
20130055340 Kanai Feb 2013 A1
20130061056 Proudler Mar 2013 A1
20130061096 McCoy Mar 2013 A1
20130061250 Kothandapani Mar 2013 A1
20130067199 Henry Mar 2013 A1
20130067245 Horovitz Mar 2013 A1
20130086299 Epstein Apr 2013 A1
20130086550 Epstein Apr 2013 A1
20130086581 Frazier Apr 2013 A1
20130086696 Austin Apr 2013 A1
20130091318 Bhattacharjee Apr 2013 A1
20130091500 Earl Apr 2013 A1
20130091543 Wade Apr 2013 A1
20130091568 Sharif Apr 2013 A1
20130103380 Brandstatter Apr 2013 A1
20130107872 Lovett May 2013 A1
20130111593 Shankar May 2013 A1
20130117766 Bax May 2013 A1
20130132702 Patel May 2013 A1
20130138863 Tsirkin May 2013 A1
20130139216 Austin May 2013 A1
20130139264 Brinkley May 2013 A1
20130148669 Noguchi Jun 2013 A1
20130152207 Cui et al. Jun 2013 A1
20130152209 Baumann Jun 2013 A1
20130155079 Hartog Jun 2013 A1
20130160114 Greenwood Jun 2013 A1
20130167149 Mitsugi Jun 2013 A1
20130174144 Cheng Jul 2013 A1
20130174148 Amit Jul 2013 A1
20130179892 Frazier Jul 2013 A1
20130185720 Tuch Jul 2013 A1
20130185721 Ikegami Jul 2013 A1
20130185737 Farrell Jul 2013 A1
20130185739 Farrell Jul 2013 A1
20130191824 Muff Jul 2013 A1
20130205295 Ebcioglu Aug 2013 A1
20130219389 Serebrin Aug 2013 A1
20130227559 Tsirkin Aug 2013 A1
20130227568 Anderson Aug 2013 A1
20130229421 Cheng Sep 2013 A1
20130232238 Cohn Sep 2013 A1
20130237204 Buck Sep 2013 A1
20130246995 Ferrao Sep 2013 A1
20130257594 Collins Oct 2013 A1
20130275997 Soares Oct 2013 A1
20130276056 Epstein Oct 2013 A1
20130276057 Smith Oct 2013 A1
20130283295 Glover Oct 2013 A1
20130283370 Vipat et al. Oct 2013 A1
20130312098 Kapoor Nov 2013 A1
20130312099 Edwards Nov 2013 A1
20130316754 Skog Nov 2013 A1
20130326504 Tsirkin Dec 2013 A1
20130333040 Diehl et al. Dec 2013 A1
20130339960 Greiner et al. Dec 2013 A1
20140006661 Chappell Jan 2014 A1
20140006734 Li et al. Jan 2014 A1
20140006804 Tkacik Jan 2014 A1
20140007091 Arges Jan 2014 A1
20140025939 Smith Jan 2014 A1
20140033266 Kim Jan 2014 A1
20140053272 Lukacs Feb 2014 A1
20140059302 Hayakawa Feb 2014 A1
20140059642 Deasy Feb 2014 A1
20140068612 Torrey Mar 2014 A1
20140068636 Dupont Mar 2014 A1
20140101311 Smeets Apr 2014 A1
20140101402 Gschwind Apr 2014 A1
20140101406 Gschwind Apr 2014 A1
20140101657 Bacher Apr 2014 A1
20140104287 Nalluri Apr 2014 A1
20140115652 Kapoor Apr 2014 A1
20140115701 Moshchuk Apr 2014 A1
20140137114 Bolte May 2014 A1
20140149634 Tosatti May 2014 A1
20140149779 Allen-Ware May 2014 A1
20140149795 Musha May 2014 A1
20140156872 Buer Jun 2014 A1
20140157258 Dow Jun 2014 A1
20140165056 Ghai Jun 2014 A1
20140173169 Liu Jun 2014 A1
20140173293 Kaplan Jun 2014 A1
20140181533 Boivie Jun 2014 A1
20140189194 Sahita Jul 2014 A1
20140189687 Jung Jul 2014 A1
20140196059 Weinsberg Jul 2014 A1
20140215467 Niesser Jul 2014 A1
20140223556 Bignon Aug 2014 A1
20140229938 Tsirkin Aug 2014 A1
20140229943 Tian Aug 2014 A1
20140230077 Muff Aug 2014 A1
20140237586 Itani Aug 2014 A1
20140244949 Abali Aug 2014 A1
20140245444 Lutas Aug 2014 A1
20140245446 Shanmugavelayutham Aug 2014 A1
20140250286 Kondo Sep 2014 A1
20140258663 Zeng Sep 2014 A1
20140258716 Macmillan Sep 2014 A1
20140281694 Gotsubo Sep 2014 A1
20140282507 Plondke Sep 2014 A1
20140282539 Sonnek Sep 2014 A1
20140282542 Smith Sep 2014 A1
20140283036 Salamat Sep 2014 A1
20140283040 Wilkerson Sep 2014 A1
20140283056 Bachwani Sep 2014 A1
20140304814 Ott Oct 2014 A1
20140317731 Ionescu Oct 2014 A1
20140351930 Sun Nov 2014 A1
20140358972 Guarrieri Dec 2014 A1
20140359229 Cota-Robles Dec 2014 A1
20140372719 Lange Dec 2014 A1
20140372751 Silverstone Dec 2014 A1
20140373005 Agrawal Dec 2014 A1
20140379955 Dong Dec 2014 A1
20140379956 Chang Dec 2014 A1
20150013008 Lukacs Jan 2015 A1
20150026807 Lutas Jan 2015 A1
20150032946 Goss Jan 2015 A1
20150033227 Lin et al. Jan 2015 A1
20150033305 Shear Jan 2015 A1
20150039891 Ignatchenko Feb 2015 A1
20150052607 Al Hamami Feb 2015 A1
20150058619 Sweet Feb 2015 A1
20150067672 Mitra Mar 2015 A1
20150074665 Kamino Mar 2015 A1
20150082305 Hepkin et al. Mar 2015 A1
20150089502 Horovitz Mar 2015 A1
20150089645 Vandergeest Mar 2015 A1
20150095548 Tsirkin Apr 2015 A1
20150100791 Chen Apr 2015 A1
20150101049 Lukacs Apr 2015 A1
20150106572 Stone Apr 2015 A1
20150121135 Pape Apr 2015 A1
20150143055 Guthrie May 2015 A1
20150143362 Lukacs May 2015 A1
20150146715 Olivier May 2015 A1
20150146716 Olivier May 2015 A1
20150149997 Tsirkin May 2015 A1
20150160998 Anvin Jun 2015 A1
20150161384 Gu Jun 2015 A1
20150163109 Ionescu Jun 2015 A1
20150178071 Pavlik Jun 2015 A1
20150178078 Anvin Jun 2015 A1
20150199198 van de Ven et al. Jul 2015 A1
20150199514 Tosa Jul 2015 A1
20150199516 Dalcher Jul 2015 A1
20150212867 Klee Jul 2015 A1
20150220354 Nair Aug 2015 A1
20150220455 Chen Aug 2015 A1
20150242233 Brewerton Aug 2015 A1
20150254017 Soja Sep 2015 A1
20150261559 Sliwa Sep 2015 A1
20150261560 Sliwa Sep 2015 A1
20150261690 Epstein Sep 2015 A1
20150261713 Kuch Sep 2015 A1
20150261952 Sliwa Sep 2015 A1
20150263993 Kuch Sep 2015 A1
20150268979 Komarov Sep 2015 A1
20150269085 Gainey, Jr. Sep 2015 A1
20150277872 Gschwind Oct 2015 A1
20150277946 Busaba Oct 2015 A1
20150277947 Busaba Oct 2015 A1
20150277948 Bradbury Oct 2015 A1
20150278085 Bybell Oct 2015 A1
20150278106 Gschwind Oct 2015 A1
20150278126 Maniatis Oct 2015 A1
20150286583 Kanai Oct 2015 A1
20150293774 Persson Oct 2015 A1
20150294117 Cucinotta et al. Oct 2015 A1
20150301761 Sijstermans Oct 2015 A1
20150310578 You Oct 2015 A1
20150319160 Ferguson Nov 2015 A1
20150326531 Cui Nov 2015 A1
20150331812 Horman Nov 2015 A1
20150334126 Mooring Nov 2015 A1
20150339480 Lutas Nov 2015 A1
20150347052 Grisenthwaite Dec 2015 A1
20150347166 Noel Dec 2015 A1
20150356023 Peter Dec 2015 A1
20150356297 Guri Dec 2015 A1
20150358309 Edwards, Jr. Dec 2015 A1
20150363763 Chang Dec 2015 A1
20150370590 Tuch Dec 2015 A1
20150370591 Tuch Dec 2015 A1
20150370592 Tuch Dec 2015 A1
20150370724 Lutas Dec 2015 A1
20150373023 Walker Dec 2015 A1
20150379265 Lutas Dec 2015 A1
20150381442 Delgado Dec 2015 A1
20160004863 Lazri Jan 2016 A1
20160011893 Strong Jan 2016 A1
20160011895 Tsirkin Jan 2016 A1
20160034295 Cochran Feb 2016 A1
20160041881 Simoncelli Feb 2016 A1
20160048458 Lutas Feb 2016 A1
20160048460 Kadi Feb 2016 A1
20160048464 Nakajima Feb 2016 A1
20160048680 Lutas Feb 2016 A1
20160050071 Collart Feb 2016 A1
20160055108 Williamson et al. Feb 2016 A1
20160062784 Chai Mar 2016 A1
20160062940 Cota-Robles Mar 2016 A1
20160063660 Spector Mar 2016 A1
20160077847 Hunter Mar 2016 A1
20160077858 Hunter Mar 2016 A1
20160077884 Hunter Mar 2016 A1
20160077981 Kegel Mar 2016 A1
20160078342 Tang Mar 2016 A1
20160085568 Dupre et al. Mar 2016 A1
20160092382 Anvin Mar 2016 A1
20160092678 Probert Mar 2016 A1
20160098273 Bartik Apr 2016 A1
20160098367 Etsion Apr 2016 A1
20160099811 Hawblitzel Apr 2016 A1
20160110291 Gordon Apr 2016 A1
20160117183 Baumeister Apr 2016 A1
20160124751 Li May 2016 A1
20160132349 Bacher May 2016 A1
20160135048 Huxham May 2016 A1
20160139962 Tsirkin May 2016 A1
20160147551 Tsirkin May 2016 A1
20160148001 Bacher et al. May 2016 A1
20160154663 Guthrie Jun 2016 A1
20160156665 Mooring Jun 2016 A1
20160164880 Colesa et al. Jun 2016 A1
20160170769 LeMay Jun 2016 A1
20160170816 Warkentin Jun 2016 A1
20160170881 Guthrie Jun 2016 A1
20160170912 Warkentin Jun 2016 A1
20160179558 Busaba Jun 2016 A1
20160179564 Chen Jun 2016 A1
20160179696 Zmudzinski Jun 2016 A1
20160180079 Sahita Jun 2016 A1
20160180115 Yamada Jun 2016 A1
20160188353 Shu et al. Jun 2016 A1
20160188354 Goldsmith et al. Jun 2016 A1
20160202980 Henry Jul 2016 A1
20160210069 Lutas Jul 2016 A1
20160210179 Hans Jul 2016 A1
20160210465 Craske Jul 2016 A1
20160212620 Paczkowski Jul 2016 A1
20160216982 Variath Jul 2016 A1
20160224399 Zheng Aug 2016 A1
20160224474 Harriman Aug 2016 A1
20160224794 Roberts Aug 2016 A1
20160232347 Badishi Aug 2016 A1
20160232872 Yoo Aug 2016 A1
20160239323 Tsirkin Aug 2016 A1
20160239328 Kaplan Aug 2016 A1
20160239333 Cowperthwaite Aug 2016 A1
20160246630 Tsirkin Aug 2016 A1
20160246636 Tsirkin Aug 2016 A1
20160246644 Canton Aug 2016 A1
20160253110 Tsirkin Sep 2016 A1
20160253196 Van Riel Sep 2016 A1
20160259750 Keidar Sep 2016 A1
20160259939 Bobritsky Sep 2016 A1
20160283246 Fleming Sep 2016 A1
20160283258 Bacher Sep 2016 A1
20160283260 Bacher Sep 2016 A1
20160283404 Xing Sep 2016 A1
20160283736 Allen Sep 2016 A1
20160285638 Pearson Sep 2016 A1
20160285913 Itskin Sep 2016 A1
20160285970 Cai Sep 2016 A1
20160292816 Dong Oct 2016 A1
20160299712 Kishan et al. Oct 2016 A1
20160306749 Tsirkin et al. Oct 2016 A1
20160314009 Tsirkin Oct 2016 A1
20160314309 Rozak-Draicchio Oct 2016 A1
20160328254 Ahmed Nov 2016 A1
20160328348 Iba Nov 2016 A1
20160335436 Harshawardhan Nov 2016 A1
20160337329 Sood Nov 2016 A1
20160342789 Perez Nov 2016 A1
20160352518 Ford Dec 2016 A1
20160357647 Shirai Dec 2016 A1
20160359896 Hay Dec 2016 A1
20160364304 Hanumantharaya Dec 2016 A1
20160364338 Zmudzinski Dec 2016 A1
20160364341 Banginwar Dec 2016 A1
20160364349 Okada Dec 2016 A1
20160371105 Sieffert Dec 2016 A1
20160378498 Caprioli Dec 2016 A1
20160378522 Kaplan Dec 2016 A1
20160378684 Zmudzinski Dec 2016 A1
20170006057 Perez Lafuente Jan 2017 A1
20170010981 Cambou Jan 2017 A1
20170024560 Linde Jan 2017 A1
20170026181 Chhabra Jan 2017 A1
20170031699 Banerjee Feb 2017 A1
20170032119 Dore Feb 2017 A1
20170039080 Chadha Feb 2017 A1
20170039366 Ionescu Feb 2017 A1
20170046187 Tsirkin Feb 2017 A1
20170046212 Fernandez Feb 2017 A1
20170060781 Soja Mar 2017 A1
20170068575 Hardage, Jr. Mar 2017 A1
20170091487 Lemay Mar 2017 A1
20170093578 Zimmer Mar 2017 A1
20170097851 Chen Apr 2017 A1
20170102957 Marquardt Apr 2017 A1
20170109251 Das Apr 2017 A1
20170109525 Sistany Apr 2017 A1
20170109530 Diehl et al. Apr 2017 A1
20170123992 Bradbury et al. May 2017 A1
20170124326 Wailly May 2017 A1
20170126677 Kumar May 2017 A1
20170126706 Minea May 2017 A1
20170126726 Han May 2017 A1
20170132156 Axnix et al. May 2017 A1
20170134176 Kim May 2017 A1
20170139777 Gehrmann May 2017 A1
20170139840 Klein May 2017 A1
20170147370 Williamson May 2017 A1
20170153987 Gaonkar Jun 2017 A1
20170161089 Frazier Jun 2017 A1
20170168737 Kumar Jun 2017 A1
20170171159 Kumar Jun 2017 A1
20170177365 Doshi Jun 2017 A1
20170177377 Thiyagarajah Jun 2017 A1
20170177392 Bacher et al. Jun 2017 A1
20170177398 Bacher et al. Jun 2017 A1
20170177415 Dhanraj Jun 2017 A1
20170177429 Stark Jun 2017 A1
20170177441 Chow Jun 2017 A1
20170177854 Gligor Jun 2017 A1
20170177860 Suarez Jun 2017 A1
20170177877 Suarez Jun 2017 A1
20170177909 Sarangdhar Jun 2017 A1
20170185436 Deng et al. Jun 2017 A1
20170185536 Li Jun 2017 A1
20170185784 Madou Jun 2017 A1
20170192801 Barlev Jul 2017 A1
20170192810 Lukacs Jul 2017 A1
20170199768 Arroyo Jul 2017 A1
20170206104 Sliwa Jul 2017 A1
20170206175 Sliwa Jul 2017 A1
20170206177 Tsai et al. Jul 2017 A1
20170213028 Chen Jul 2017 A1
20170213031 Diehl et al. Jul 2017 A1
20170220369 Kaplan Aug 2017 A1
20170220447 Brandt Aug 2017 A1
20170220795 Suginaka Aug 2017 A1
20170222815 Meriac Aug 2017 A1
20170228271 Tsirkin Aug 2017 A1
20170228535 Shanbhogue Aug 2017 A1
20170244729 Fahrny Aug 2017 A1
20170249173 Tsirkin Aug 2017 A1
20170249176 Elias Aug 2017 A1
20170255778 Ionescu Sep 2017 A1
20170257399 Mooring Sep 2017 A1
20170262306 Wang Sep 2017 A1
20170293581 Maugan Oct 2017 A1
20170295195 Wettstein Oct 2017 A1
20170323113 El-Moussa Nov 2017 A1
20170329623 Dong Nov 2017 A1
20170331884 Colle Nov 2017 A1
20170353485 Brown Dec 2017 A1
20170364685 Shah Dec 2017 A1
20180004539 Liguori Jan 2018 A1
20180004561 Liguori Jan 2018 A1
20180004868 Adam Jan 2018 A1
20180004954 Liguori Jan 2018 A1
20180011729 Yu Jan 2018 A1
20180033116 Tian Feb 2018 A1
20180034781 Jaeger Feb 2018 A1
20180048660 Paithane Feb 2018 A1
20180089436 Smith Mar 2018 A1
20180165791 Dong Jun 2018 A1
20180203805 Hatta Jul 2018 A1
20180285143 Bacher et al. Oct 2018 A1
20180336348 Ng Nov 2018 A1
20180349162 Tian Dec 2018 A1
20180373556 Tian Dec 2018 A1
20180373570 Xu Dec 2018 A1
20190005224 Oliver Jan 2019 A1
20190005267 Soman Jan 2019 A1
20190034633 Seetharamaiah Jan 2019 A1
20190146668 Gschwind May 2019 A1
20190146697 Gschwind May 2019 A1
20190146700 Gschwind May 2019 A1
20190146710 Gschwind May 2019 A1
20190146789 Gschwind May 2019 A1
20190146795 Gschwind May 2019 A1
20190146820 Gschwind May 2019 A1
20190146832 Gschwind May 2019 A1
20190146874 Gschwind May 2019 A1
20190146918 Gschwind May 2019 A1
20190146929 Gschwind May 2019 A1
20190163513 Noorshams May 2019 A1
20190227810 Jacquin Jul 2019 A1
20190325133 Goodridge Oct 2019 A1
20200036602 Leibovici Jan 2020 A1
20200092103 Zavertnik Mar 2020 A1
20200097661 Block Mar 2020 A1
20200099536 Block Mar 2020 A1
20200159558 Bak et al. May 2020 A1
20200241902 Freche Jul 2020 A1
20200394065 Bak et al. Dec 2020 A1
20210021418 Makhalov Jan 2021 A1
20210026950 Ionescu Jan 2021 A1
20210049028 Price Feb 2021 A1
20210049292 Ionescu Feb 2021 A1
20210073003 Jacquin Mar 2021 A1
20210334377 Drori Oct 2021 A1
20220012042 Doshi Jan 2022 A1
20220070225 Drori Mar 2022 A1
20220129591 K Apr 2022 A1
20220198021 Subramanian Jun 2022 A1
20220358049 Tsirkin Nov 2022 A1
20220358220 Smith Nov 2022 A1
20230129610 Jacquin Apr 2023 A1
20230229758 Terpstra Jul 2023 A1
20230229779 Terpstra Jul 2023 A1
20230261867 Makhalov Aug 2023 A1
20230267214 Zhang Aug 2023 A1
20230342446 Reddy Oct 2023 A1
20240004681 Graf Jan 2024 A1
Foreign Referenced Citations (3)
Number Date Country
3017392 May 2016 EP
WO2012135192 Oct 2012 WO
WO2013055499 Apr 2013 WO
Non-Patent Literature Citations (30)
Entry
Pham et al., "Reliability and Security Monitoring of Virtual Machines Using Hardware Architectural Invariants," 2014 44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, IEEE Computer Society, pp. 13-24 (Year: 2014).
Binun et al., "Self-Stabilizing Virtual Machine Hypervisor Architecture for Resilient Cloud," 2014 IEEE 10th World Congress on Services, IEEE Computer Society, pp. 200-207 (Year: 2014).
Huin et al., "An Agent-Based Architecture to Add Security in a Cooperative Information System," Third International IEEE Conference on Signal-Image Technologies and Internet-Based System, pp. 262-271 (Year: 2008).
Zhang et al., "Formal Verification of Interrupt Injection in a Hypervisor," 2014 Theoretical Aspects of Software Engineering Conference, IEEE Computer Society, pp. 74-81 (Year: 2014).
Moratelli et al., "Hardware-Assisted Interrupt Delivery Optimization for Virtualized Embedded Platforms," IEEE, pp. 304-307 (Year: 2015).
Terrell et al., "Setting up of a Cloud Cyber Infrastructure using Xen Hypervisor," 2013 10th International Conference on Information Technology: New Generations, pp. 648-652 (Year: 2013).
Ding et al., "Return-Oriented Programming Attack on the Xen Hypervisor," 2012 Seventh International Conference on Availability, Reliability and Security, IEEE, pp. 479-484 (Year: 2012).
Nwebonyi et al., "BYOD Network: Enhancing Security Through Trust-Aided Access Control Mechanisms," International Journal of Cyber-Security and Digital Forensics (IJCSDF), The Society of Digital Information and Wireless Communication, pp. 272-289 (Year: 2014).
Grimm et al., "Separating Access Control Policy, Enforcement, and Functionality in Extensible Systems," ACM Transactions on Computer Systems, vol. 19, No. 1, Feb. 2001, pp. 36-70.
Datta et al., "A Logic of Secure Systems and its Application to Trusted Computing," IEEE Computer Society, pp. 221-236 (Year: 2009).
Kornaros et al., "Hardware Support for Cost-Effective System-Level Protection in Multi-Core SoCs," IEEE, pp. 41-48 (Year: 2015).
Azab et al., "HIMA: A Hypervisor-Based Integrity Measurement Agent," IEEE Computer Society, pp. 461- (Year: 2009).
Chiueh et al., "Surreptitious Deployment and Execution of Kernel Agents in Windows Guests," IEEE Computer Society, pp. 507-514 (Year: 2012).
The Extended European Search Report mailed Apr. 16, 2021 for European Patent Application No. 20206917.5, 10 pages.
Wang et al., "Countering Kernel Rootkits with Lightweight Hook Protection," Computer and Communications Security, Nov. 9, 2009, pp. 545-554.
The European Office Action mailed on Jul. 25, 2019 for European Patent Application No. 17156043.6, a counterpart of U.S. Appl. No. 15/063,086, 8 pages.
The Extended European Search Report mailed Jul. 13, 2017 for European patent application No. 17156043.6, 10 pages.
Office Action for U.S. Appl. No. 15/063,086, mailed on Oct. 6, 2017, Ionescu, "Hypervisor-Based Interception of Memory Accesses," 36 pages.
Office Action for U.S. Appl. No. 15/063,086, mailed on May 9, 2018, Ionescu, "Hypervisor-Based Interception of Memory Accesses," 27 pages.
Wen et al., "FVisor: Towards Thwarting Unauthorized File Accesses with a Light-weight Hypervisor," 2014 IEEE 17th International Conference on Computational Science and Engineering, 2014, pp. 620-626.
Nguyen et al., "MAVMM: Lightweight and Purpose Built VMM for Malware Analysis," 2009 Annual Computer Security Applications Conference, IEEE Computer Society, 2009, pp. 441-450.
Office Action for U.S. Appl. No. 17/060,355, mailed on Nov. 24, 2023, Ion-Alexandru Ionescu, “Hypervisor-Based Interception of Memory and Register Accesses”, 17 pages.
Qingbo et al., "System Monitoring and Controlling Mechanism based on Hypervisor," 2009 IEEE International Symposium on Parallel and Distributed Processing with Applications, IEEE Computer Society, 2009, pp. 549-554.
Branco et al., "Architecture for Automation of Malware Analysis," IEEE, 2010, pp. 106-112.
Jin et al., "H-SVM: Hardware-Assisted Secure Virtual Machines under a Vulnerable Hypervisor," IEEE Transactions on Computers, vol. 64, No. 10, Oct. 2015, pp. 2833-2846 (Year: 2015).
Kienzle et al., "Endpoint Configuration Compliance Monitoring via Virtual Machine Introspection," Proceedings of the 43rd Hawaii International Conference on System Sciences, IEEE Computer Society, pp. 1-10 (Year: 2010).
Office Action for U.S. Appl. No. 17/060,355, mailed on May 11, 2023, Ion-Alexandru Ionescu, "Hypervisor-Based Interception of Memory and Register Accesses," 12 pages.
Office Action for U.S. Appl. No. 17/060,355, dated Oct. 1, 2024, 25 pages.
Wang et al., "A Resource Allocation Model for Hybrid Storage Systems," 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, IEEE Computer Society, pp. 91-100.
Wong et al., "Zygaria: Storage Performance as a Managed Resource," IEEE Computer Society, pp. 1-10.
Related Publications (1)
Number Date Country
20210026950 A1 Jan 2021 US
Continuation in Parts (2)
Number Date Country
Parent 17060355 Oct 2020 US
Child 17062237 US
Parent 15063086 Mar 2016 US
Child 17060355 US