Embodiments described herein generally relate to techniques for detecting jump oriented programming exploits.
Return and jump oriented programming (ROP/JOP) exploits are a growing threat for software applications. This technique allows an attacker to execute code even if security measures such as non-executable memory and code signing are used. In ROP, an attacker gains control of the call stack and then executes carefully chosen machine instruction sequences, called “gadgets.” Each gadget typically ends in a return instruction and is code within an existing program (or library). Chained together via a sequence of carefully crafted return addresses, these gadgets allow an attacker to perform arbitrary operations. JOP attacks do not depend upon the stack for control flow, but use a dispatcher gadget to take the role of executing functional gadgets that perform primitive operations.
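The gadget-chaining mechanism described above can be illustrated conceptually. In the sketch below, every address, gadget, and operation is invented for illustration; the point is only that an attacker-controlled stack of return addresses determines which existing code fragments run and in what order:

```python
# Conceptual simulation of ROP gadget chaining: each "gadget" is a short
# operation ending in a return, and the attacker-crafted stack of return
# addresses selects which gadgets run and in what order.  Addresses and
# operations are hypothetical.
gadgets = {
    0x1000: lambda state: state + ["load"],
    0x2000: lambda state: state + ["add"],
    0x3000: lambda state: state + ["store"],
}

def run_chain(stack):
    """Pop crafted return addresses, executing one gadget per address."""
    state = []
    while stack:
        state = gadgets[stack.pop()](state)
    return state

# Stack crafted by the attacker (top of stack is popped first):
print(run_chain([0x3000, 0x2000, 0x1000]))  # ['load', 'add', 'store']
```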
Detection of ROP exploits is complicated due to the nature of the attack. A number of techniques have been proposed to subvert attacks based on return-oriented programming.
The first approach is randomizing the location of program and library code so that an attacker cannot accurately predict the location of usable gadgets. Address space layout randomization (ASLR) is an example of this approach. Unfortunately, ASLR is vulnerable to information leakage attacks, and once the code location is inferred, a return-oriented programming attack can still be constructed. The randomization approach can be taken further by employing relocation at runtime, which further complicates the process of finding gadgets but incurs significant overhead.
A second approach, taken by kBouncer, modifies the operating system to verify that return instructions actually divert control flow back to a location immediately following a call instruction. This prevents gadget chaining, but carries a heavy performance penalty. In addition, it is possible to mount JOP attacks without using return instructions at all, by using JMP instructions; kBouncer is not effective against such JOP attacks.
Third, some Intrusion Protection Systems (IPSs) invalidate all memory pages of a process except the currently executing page. Most regular jumps land within the same page; passing control flow to a different page causes an exception, which allows the IPS to check the control flow. This technique may also introduce noticeable overhead.
Finally, there is work in progress targeting hardware-assisted ROP detection based on a series of sequentially mispredicted RET instructions. While providing a high detection rate, the technique is not currently available and will only be available in future processors.
Better approaches to both ROP and JOP attacks that do not incur large performance penalties would be desirable.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the invention. References to numbers without subscripts or suffixes are understood to reference all instances of subscripts and suffixes corresponding to the referenced number. Moreover, the language used in this disclosure has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter. Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of the invention, and multiple references to “one embodiment” or “an embodiment” should not be understood as necessarily all referring to the same embodiment.
As used herein, the term “a computer system” can refer to a single computer or a plurality of computers working together to perform the function described as being performed on or by a computer system.
Modern computer processors have a Performance Monitoring Unit (PMU) for monitoring selected events. The diagram in
Modern processor architectures also provide a branch recording mechanism. Typically, the last branch recording mechanism tracks not only branch instructions (like JMP, Jcc, LOOP, and CALL instructions), but also other operations that cause a change in the instruction pointer, like external interrupts, traps, and faults. The branch recording mechanisms generally employ a set of processor model specific registers, referred to as a last branch record (LBR) stack, each entry of which stores a source address and a destination address of the last branch, thus the LBR stack provides a record of recent branches. Some embodiments of an LBR stack may also record an indication of whether the branch was mispredicted, i.e., one or more of the target of the branch and the direction (taken, not taken) was mispredicted. In addition, control registers may allow the processor to filter which kinds of branches are to be captured in the LBR stack.
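The LBR stack described above can be modeled as a small record of branch source/destination pairs plus a misprediction flag. The following sketch models only the data shape the detection logic consumes; the field names are illustrative assumptions, not the actual model-specific register layout:

```python
from collections import namedtuple

# Illustrative model of one LBR stack entry.  Real entries live in
# model-specific registers; field names here are assumptions.
LBREntry = namedtuple("LBREntry", ["from_addr", "to_addr", "mispredicted"])

def recent_branches(lbr_stack):
    """Return entries youngest-first; a real LBR stack is a small ring
    buffer whose top-of-stack pointer marks the most recent branch."""
    return list(reversed(lbr_stack))

# Example: two recorded branches, the most recent one mispredicted.
stack = [
    LBREntry(0x401000, 0x402000, False),
    LBREntry(0x402010, 0x7fff0000, True),
]
print(recent_branches(stack)[0].mispredicted)  # True: youngest entry first
```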
One of the ways the Event Select registers 150 may be configured is to cause the PMU 110 to count branch mispredict events. These events may be caused by ROP and JOP exploits, as well as for other reasons. Where branch capture filtering is available, the filter may be employed to limit the captured branches to those of interest in ROP or JOP exploits. For JOP exploits, the branches of interest are typically near indirect jumps. For ROP exploits, the branches of interest are typically CALLs or RETs. However, embodiments may filter other types of branches or do no branch capture filtering, if desired. For example, another type of exploit, known as call oriented programming (COP), uses gadgets that end with indirect CALL instructions. In COP exploits, gadgets are chained together by pointing the memory-indirect locations to the next gadget in sequence. COP exploits may be detected using a similar approach to that used for detecting ROP and JOP exploits, with the branches of interest being CALLs.
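The branch capture filtering just described can be sketched as a mapping from the exploit class being hunted to the branch types worth recording. The flag names and bit positions below are invented for the sketch; real hardware exposes filtering through a model-specific control register whose layout varies by processor:

```python
# Hypothetical branch-capture filter flags (bit positions are invented;
# the real control register layout is model specific).
FILTER_NEAR_RET      = 1 << 0  # near RET branches (of interest for ROP)
FILTER_NEAR_IND_JMP  = 1 << 1  # near indirect JMPs (of interest for JOP)
FILTER_NEAR_IND_CALL = 1 << 2  # indirect CALLs (of interest for COP)

def filter_for(exploit_kind):
    """Return the capture filter for the class of exploit being hunted."""
    return {
        "rop": FILTER_NEAR_RET | FILTER_NEAR_IND_CALL,
        "jop": FILTER_NEAR_IND_JMP,
        "cop": FILTER_NEAR_IND_CALL,
    }[exploit_kind]

print(hex(filter_for("jop")))  # 0x2: only indirect jumps are captured
```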
By using these facilities, embodiments disclosed herein can detect ROP and JOP exploits without significant processor overhead.
The PMU 110 is configured to count branch mispredict events caused by an ROP or JOP exploit. The LBR registers are configured to store the relevant branch records.
When a mispredict event occurs (or, preferably, when a mispredict count exceeds a predetermined threshold), the reason for the misprediction may be analyzed by matching the expected program code flow with the actual flow extracted from the LBR stack 200. The analysis is fairly simple because the from and to addresses 220, 240 are readily available from the LBR stack 200 and point directly to the code in question, allowing valid causes (say, an indirect CALL or deep recursion) to be separated from exploit behavior (by employing, for example, static code flow analysis of the program).
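A central check in this analysis is whether a RET's to address immediately follows a CALL instruction. As a simplified sketch (a production analyzer would disassemble properly rather than pattern-match bytes), one can scan the bytes preceding the to address for a direct near CALL (opcode E8, 5 bytes) or the common indirect CALL forms (opcode FF with ModRM reg field 2, 2-7 bytes):

```python
# Simplified check of whether the instruction at to_offset is preceded
# by an x86 CALL.  This byte-pattern scan is an approximation; real
# analysis would disassemble the code.
def follows_call(code, to_offset):
    for length in range(2, 8):            # candidate CALL lengths
        start = to_offset - length
        if start < 0:
            continue
        op = code[start]
        if (length == 5 and op == 0xE8) or \
           (op == 0xFF and (code[start + 1] >> 3) & 0x7 == 2):
            return True
    return False

# A direct CALL (E8 xx xx xx xx) followed by the return site:
code = bytes([0xE8, 0x10, 0x00, 0x00, 0x00, 0x90])
print(follows_call(code, 5))   # True: offset 5 follows a 5-byte CALL
print(follows_call(code, 1))   # False: no CALL precedes offset 1
```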
Using the hardware PMU and related registers 100 and the LBR stack 200 to collect mispredicted branch data for analysis introduces the following advantages:
1. Low overhead compared to existing methods (all events are gathered by the CPU via the PMU 110 and LBR stack 200).
2. Ease of analysis: LBR event data points exactly to the suspected code.
3. High ROP/JOP detection rate with an ability to fine-tune the sensitivity and minimize the false positive rate.
4. Generic to the majority of processor platforms: most recent processor platforms already have all the hardware needed to implement this invention.
5. Operating system (OS) agnostic: event collection is fully hardware-based, with no OS interaction or enablement needed.
6. Resilience to OS, Hypervisor, Basic Input/Output System (BIOS), and Unified Extensible Firmware Interface (UEFI) malware: even in the presence of an OS or firmware-based malware, events will be reliably collected and securely delivered to the monitoring agent.
7. PMU logic allows counting mispredicted RET instructions and enabling a PMU interrupt (PMI) once the counter reaches a predetermined threshold. This provides additional hardware-supported sensitivity control to maximize the true positive rate (fine-tuning will allow catching the smallest observed ROP/JOP shellcode sequences). Not every mispredicted branch indicates an exploit. In one embodiment, the threshold value may be empirically determined, based on analysis of detected ROP and JOP exploits. In some embodiments, the threshold value may be configured based on a policy.
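The threshold mechanism in point 7 is commonly realized by preloading the counter: PMU counters typically raise the PMI on overflow, so a threshold of N events is armed by writing the two's-complement value -N into the counter. A sketch of the arithmetic, assuming an illustrative 48-bit counter width (the actual width is model specific):

```python
COUNTER_WIDTH = 48  # illustrative; actual PMU counter width is model specific

def preload_for_threshold(n, width=COUNTER_WIDTH):
    """Value to write into the counter so it overflows (raising a PMI)
    after exactly n more counted events."""
    return (1 << width) - n

def overflows(preload, events, width=COUNTER_WIDTH):
    """Does counting `events` increments from `preload` overflow?"""
    return preload + events >= (1 << width)

p = preload_for_threshold(16)
print(overflows(p, 15))  # False: one event short of the threshold
print(overflows(p, 16))  # True: the PMI fires on the 16th event
```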
An implementation according to one embodiment comprises the following components:
1. PMU 110 event counters, reporting the address of the triggering instruction, that can indicate various conditions: (a) mispredicted branches for JMP and RET instructions; and, optionally to assist code analysis, (b) memory, I/O, and cache usage, debug instructions, and self-modifying code; (c) crypto opcode statistics; and (d) typical patterns of exploitation (changes of the stack pointer).
2. An LBR stack 200 configured to store addresses of transitions caused by JMPs/CALLs/RETs.
3. A PMI handler that collects the counter data and LBR data.
4. A software handler for processing the PMU counters and LBR data and providing a verdict on whether the actual code flow matches the expected one. This analysis may employ either static or dynamic code flow analysis, for example, code decompilation or partial code emulation to obtain the expected code flow. A heuristic and/or analytics approach may also be taken to reach the verdict. Any form of code analysis may be used as desired. One heuristic approach is described below.
5. An interface to security software or reporting tools to implement actions/policies in case of detection.
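The division of labor among components 3-5 above can be sketched as a small pipeline. All names and the flat expected-flow representation below are illustrative assumptions; in a real implementation the snapshot is taken inside the interrupt handler and the analysis uses code-flow analysis rather than a simple set lookup:

```python
# Minimal sketch of the component pipeline.  Names and data shapes are
# illustrative; real collection happens inside a PMI handler.
def pmi_handler(pmu_counters, lbr_stack):
    """Component 3: snapshot counter and LBR data at interrupt time."""
    return {"counters": dict(pmu_counters), "lbr": list(lbr_stack)}

def analyze(snapshot, expected_flow):
    """Component 4: verdict on whether observed flow matches expectation."""
    observed = [(e["from"], e["to"]) for e in snapshot["lbr"]]
    return all(pair in expected_flow for pair in observed)

def report(flow_ok, notify):
    """Component 5: hand an adverse verdict to security software."""
    if not flow_ok:
        notify("possible ROP/JOP exploit detected")

snapshot = pmi_handler({"mispredicted_ret": 17},
                       [{"from": 0x1000, "to": 0x2000}])
print(analyze(snapshot, expected_flow={(0x1000, 0x2000)}))  # True
```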
A memory 305 coupled to the processor 310 may be used for storage of information related to the detection and analysis techniques described herein. The memory may be connected to the processor in any desired way, including busses, point-to-point interconnects, etc. The memory may also be used for storing instructions that when executed cause the computer 300 to execute the collection driver 325, the analytical client 330, and the anti-malware software 340.
One skilled in the art will recognize that other conventional elements of a computer system or other programmable device may be included in the system 300, such as a keyboard, pointing device, displays, etc.
Processor 310 may comprise, for example, a microprocessor, microcontroller, digital signal processor (DSP), application specific integrated circuit (ASIC), or any other digital or analog circuitry configured to interpret and/or execute program instructions and/or process data. In some embodiments, processor 310 may interpret and/or execute program instructions and/or process data stored in memory 305. Memory 305 may be configured in part or whole as application memory, system memory, or both. Memory 305 may include any system, device, or apparatus configured to hold and/or house one or more memory modules. Each memory module may include any system, device or apparatus configured to retain program instructions and/or data for a period of time (e.g., computer-readable storage media). Instructions, logic, or data for configuring the operation of system 300, such as configurations of components such as the performance monitoring hardware 315, the collection driver 325, the analytical client 330, or anti-malware software 340 may reside in memory 305 for execution by processor 310.
While a single processor 310 is illustrated in
Memory 305 may include one or more memory modules and comprise random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), programmable read-write memory, and solid-state memory. Memory 305 may also include a storage device providing any form of non-volatile storage including, but not limited to, all forms of optical and magnetic, including solid-state storage elements, including removable media. The storage device may be a program storage device used for storage of software to control computer 300, data for use by the computer 300 (including performance monitoring configuration data), or both. The instructions for configuring the performance monitoring hardware as well as for processing PMIs and analyzing the collected data may be provided on one or more machine readable media, used either as part of the memory 305 or for loading the instructions from the media into the memory 305. Although only a single memory 305 is illustrated in
The computer system 300 may be any type of computing device, such as, for example, a smart phone, smart tablet, personal digital assistant (PDA), mobile Internet device, convertible tablet, notebook computer, desktop computer, server, or smart television.
In block 420, a PMU event is detected by the collection driver 325 upon generation of a PMI. The registers of the PMU and control registers 100 are interrogated to determine which PMU event caused the PMI. The collection driver 325 may also read a block of memory at the address of the interrupt (obtained from the stack), read the content of the LBR stack 200, and read the content of memory pointed to by the LBR entries (from and to addresses). The collection driver 325 may then forward the collected information to the analytical client for analysis.
Blocks 430-470 implement a simple heuristic analysis approach that may be used to determine whether an ROP or JOP event has occurred according to one embodiment. This heuristic is illustrative and by way of example only. Other heuristics may be used instead of or in addition to the illustrated heuristic. Alternately, the analytical client may perform code analysis (static, dynamic, or both, as desired). This analysis may be performed locally by security software, or the expected fingerprint may be created externally (e.g., by the compiler and/or linker, or by recording typical execution patterns in a controlled environment), delivered along with the software or dynamically queried through the network, and compared to the observed to/from addresses when an ROP event occurs (block 430). Techniques such as code decompilation or partial code emulation may be used to obtain the expected code flow and compare it with the actual code flow. In one embodiment, whitelists may be used to list from/to address pairs that are known to be good; alternately, a blacklist of known bad from/to address pairs may be used. A combination of a whitelist and a blacklist may also be used.
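The combined whitelist/blacklist idea can be sketched as a three-way classification: known-bad pairs are flagged immediately, known-good pairs are passed, and everything else falls through to the heuristic or code-flow analysis. The addresses below are invented for the sketch:

```python
# Sketch of combined whitelist/blacklist screening of from/to pairs.
def classify_pair(pair, whitelist, blacklist):
    if pair in blacklist:
        return "bad"        # known-bad pair: flag immediately
    if pair in whitelist:
        return "good"       # known-good pair: pass
    return "unknown"        # fall through to heuristic analysis

wl = {(0x401000, 0x401020)}
bl = {(0x401000, 0x7fff1234)}
print(classify_pair((0x401000, 0x7fff1234), wl, bl))  # 'bad'
print(classify_pair((0x401000, 0x401020), wl, bl))    # 'good'
print(classify_pair((0x500000, 0x600000), wl, bl))    # 'unknown'
```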
As illustrated in
After all LBR entries 210 are considered, if the ROPEVENT counter exceeds a predetermined threshold value in block 460, an ROP event is signaled or indicated in block 470. In alternate embodiments, the ROP event is signaled or indicated if the ROPEVENT counter meets or exceeds the threshold value.
In other embodiments, instead of initializing the ROPEVENT counter to zero and incrementing it each time a RET points to an address not following a CALL, the ROPEVENT counter may be set to a predetermined threshold value and repeatedly decremented. In such an embodiment, an ROP event may be indicated if the ROPEVENT counter reaches 0 or any other predetermined low threshold value.
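The counting heuristic of blocks 430-470 can be sketched as follows. The predicates for "is a RET" and "follows a CALL" are stand-ins (here simple set lookups) for actual inspection of the code at the from and to addresses; all addresses are invented:

```python
# Sketch of the ROPEVENT counting heuristic: count LBR entries whose RET
# lands on an address not immediately following a CALL, and signal when
# the count exceeds the threshold.
def rop_event(entries, threshold, is_ret, follows_call):
    ropevent = 0
    for frm, to in entries:
        if is_ret(frm) and not follows_call(to):
            ropevent += 1
    return ropevent > threshold

# Toy predicates standing in for real code inspection at the addresses:
rets = {0x1000, 0x1008}
call_followers = {0x2000}
entries = [(0x1000, 0x2000),   # legitimate: target follows a CALL
           (0x1008, 0x3000),   # suspicious
           (0x1008, 0x3010)]   # suspicious
print(rop_event(entries, 1, rets.__contains__, call_followers.__contains__))
# True: two suspicious entries exceed a threshold of 1
```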
Finally, in block 480, security software 340 (anti-malware or host intrusion protection system software) may take an action responsive to the determination that an ROP event has occurred.
In addition, advanced analytics may take into account additional contextual data and implement extra checks based on other factors, such as:
1. Distribution of from/to addresses.
2. Uniqueness of from, to, and from/to addresses.
3. Matching of from/to addresses and other PMU counters to a distribution that characterizes the specific process (software fingerprinting).
By taking into account the address of the instruction causing the PMI raised when the counter threshold is reached (an address stored on the stack), the analytical client 330 may determine which process was responsible for the PMI, and may limit the analysis to specific monitored processes. For example, the analytical client 330 may filter only addresses belonging to the address space of the monitored process. In some embodiments, the data about process location in memory is available from the OS through process walking or enumerating processes. Embodiments may exclude certain processes to suppress incorrect detections or to improve system performance. The analytical client may analyze the time sequence of specific counters for a selected process as well as the distribution of the addresses of instructions causing those events. In addition, the distribution of branch misprediction instructions may be used to form a software fingerprint.
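The per-process filtering just described can be sketched as keeping only LBR pairs whose addresses fall within the monitored process's mapped ranges (ranges obtainable from the OS by enumerating processes). The ranges and addresses below are invented for the sketch:

```python
# Sketch of limiting analysis to a monitored process: keep only LBR
# from/to pairs whose addresses fall inside that process's address space.
def in_process(addr, ranges):
    return any(lo <= addr < hi for lo, hi in ranges)

def filter_to_process(entries, ranges):
    return [(f, t) for f, t in entries
            if in_process(f, ranges) and in_process(t, ranges)]

ranges = [(0x400000, 0x500000)]          # hypothetical mapped range
entries = [(0x401000, 0x402000),         # inside the monitored process
           (0x401000, 0x7fff0000)]       # target outside: dropped
print(filter_to_process(entries, ranges))  # [(0x401000, 0x402000)]
```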
The simple heuristic illustrated in
Referring now to
Programmable device 500 is illustrated as a point-to-point interconnect system, in which the first processing element 570 and second processing element 580 are coupled via a point-to-point interconnect 550. Any or all of the interconnects illustrated in
As illustrated in
Each processing element 570, 580 may include at least one shared cache 546. The shared cache 546a, 546b may store data (e.g., instructions) that are utilized by one or more components of the processing element, such as the cores 574a, 574b and 584a, 584b, respectively. For example, the shared cache may locally cache data stored in a memory 532, 534 for faster access by components of the processing elements 570, 580. In one or more embodiments, the shared cache 546a, 546b may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), or combinations thereof.
While
First processing element 570 may further include memory controller logic (MC) 572 and point-to-point (P-P) interconnects 576 and 578. Similarly, second processing element 580 may include a MC 582 and P-P interconnects 586 and 588. As illustrated in
Processing element 570 and processing element 580 may be coupled to an I/O subsystem 590 via respective P-P interconnects 576 and 586 through links 552 and 554. As illustrated in
In turn, I/O subsystem 590 may be coupled to a first link 516 via an interface 596. In one embodiment, first link 516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another I/O interconnect bus, although the scope of the present invention is not so limited.
As illustrated in
Note that other embodiments are contemplated. For example, instead of the point-to-point architecture of
Referring now to
The programmable devices depicted in
Although embodiments are described above that are directed at either return-oriented or jump-oriented programming exploits, in some embodiments both types of exploits may be detected by combining the techniques described above.
The techniques described above may be implemented as part of any desired type of anti-malware system, such as an intrusion protection system. By using hardware performance monitoring capability and last branch recording, the techniques may be used to detect relatively difficult-to-detect ROP and JOP exploits without the need for a specific signature of the exploit, and with less performance impact than the purely software-based techniques previously discussed in the literature. Furthermore, proper design of the analytical engine may avoid the negative impact of false positives in the analysis.
The following examples pertain to further embodiments.
Example 1 is a machine readable medium, on which are stored instructions, comprising instructions that when executed cause a programmable device to: configure hardware performance monitoring counters to count mispredicted branches; configure a hardware last branch mechanism to capture a predetermined category of branches; collect performance monitoring counter data and last branch data responsive to an interrupt generated upon a predetermined condition of the hardware performance monitoring counters; and analyze the performance monitoring counter data and the last branch data to determine whether a malware exploit has occurred.
In Example 2 the subject matter of Example 1 optionally includes wherein the malware exploit is a return-oriented programming exploit.
In Example 3 the subject matter of Example 2 optionally includes wherein the instructions that when executed cause the programmable device to analyze the performance monitoring counter data and the last branch data to determine whether a malware exploit has occurred comprise instructions that when executed cause the programmable device to: count last branch instances having a from address pointing to a return instruction and a to address pointing to an instruction not following a call instruction; modify a return-oriented programming event counter; and indicate a return-oriented programming event responsive to the return-oriented programming event counter having a predetermined relation to a predetermined threshold value.
In Example 4 the subject matter of Example 1 optionally includes wherein the malware exploit is a jump-oriented programming exploit.
In Example 5 the subject matter of Example 4 optionally includes wherein the instructions that when executed cause the programmable device to analyze the performance monitoring counter data and the last branch data to determine whether a malware exploit has occurred comprise instructions that when executed cause the programmable device to: look for a sequence of last branch instances having from addresses pointing to an indirect jump instruction with an alternating constant address of a dispatcher's entry point and leave point.
In Example 6 the subject matter of Example 1 optionally includes wherein the predetermined category of branches comprises return instructions.
In Example 7 the subject matter of Example 1 optionally includes wherein the predetermined category of branches comprises near indirect jump instructions.
In Example 8 the subject matter of Examples 1-7 optionally includes wherein the instructions further comprise instructions that when executed cause the programmable device to: take an anti-malware action responsive to a determination that a malware exploit has occurred, wherein the anti-malware action comprises one or more of termination or changing a sensitivity of a monitoring behavior of a program that triggered the malware exploit.
Example 9 is a programmable device programmed to detect malware exploits, comprising: a processor, comprising: a performance monitoring unit; and a last branch record stack; and a memory, coupled to the processor, on which are stored instructions, comprising instructions that when executed cause the processor to: configure the performance monitoring unit to count mispredicted branches; configure the last branch record stack to capture a predetermined category of branches; collect mispredicted branch counts and last branch data from the performance monitoring unit and last branch record stack, responsive to an interrupt generated upon a predetermined condition of the performance monitoring unit; and analyze the mispredicted branch counts and the last branch data to determine whether a malware exploit has occurred.
In Example 10 the subject matter of Example 9 optionally includes wherein the malware exploit is a return-oriented programming exploit.
In Example 11 the subject matter of Example 10 optionally includes wherein the instructions that when executed cause the processor to analyze the mispredicted branch counts and the last branch data comprise instructions that when executed cause the processor to: increment a return-oriented programming event counter responsive to a last branch instance having a from address pointing to a return instruction and a to address pointing to an instruction not following a call instruction; and indicate a return-oriented programming exploit has occurred responsive to the return-oriented programming event counter meeting or exceeding a predetermined threshold value.
In Example 12 the subject matter of Example 10 optionally includes wherein the predetermined category of branches comprises return instructions.
In Example 13 the subject matter of Example 9 optionally includes wherein the malware exploit is a jump-oriented programming exploit.
In Example 14 the subject matter of Example 13 optionally includes wherein the instructions that when executed cause the processor to analyze the mispredicted branch counts and the last branch data comprise instructions that when executed cause the processor to: look for a sequence of last branch instances having from addresses pointing to an indirect jump instruction with an alternating constant address of a dispatcher's entry point and leave point.
In Example 15 the subject matter of Example 13 optionally includes wherein the predetermined category of branches comprises near indirect jump instructions.
In Example 16 the subject matter of Examples 9-15 optionally includes wherein the instructions further comprise instructions that when executed cause the processor to: take an anti-malware action responsive to a determination that a malware exploit has occurred.
Example 17 is a method of detecting malware exploits, comprising: counting mispredicted branches in a performance monitoring unit of a processor; capturing last branch information by the processor; collecting a mispredicted branch count and the last branch information responsive to a performance monitoring interrupt; and determining whether a malware exploit has occurred based on the mispredicted branch count and last branch information.
In Example 18 the subject matter of Example 17 optionally includes wherein counting mispredicted branches comprises configuring a control register of the performance monitoring unit to cause the performance monitoring unit to count mispredicted branches.
In Example 19 the subject matter of Example 17 optionally includes further comprising: configuring the performance monitoring unit to generate the performance monitoring interrupt responsive to counting a threshold number of mispredicted branches.
In Example 20 the subject matter of Examples 17-19 optionally includes wherein capturing last branch information comprises: configuring a last branch record unit to capture return instruction branches.
In Example 21 the subject matter of Examples 17-19 optionally includes wherein capturing last branch information comprises: configuring a last branch record unit to capture near indirect jump branches.
In Example 22 the subject matter of Examples 17-19 optionally includes wherein the malware exploit is a return-oriented programming exploit, and wherein determining whether a malware exploit has occurred comprises: counting occurrences of a last branch instance having a from address pointing to a return instruction and a to address pointing to an instruction not following a call instruction; and indicating the malware exploit has occurred responsive to a threshold number of occurrences.
In Example 23 the subject matter of Examples 17-19 optionally includes wherein the malware exploit is a jump-oriented programming exploit, and wherein determining whether a malware exploit has occurred comprises: finding a sequence of last branch instances having from addresses pointing to indirect jump instructions alternating with a constant address of a dispatcher entry point or leave point.
In Example 24 the subject matter of Examples 17-19 optionally includes further comprising: taking an anti-malware action responsive to the determination that an exploit has occurred.
In Example 25 the subject matter of Examples 17-19 optionally includes wherein determining whether a malware exploit has occurred comprises detecting whether either of a return-oriented programming exploit or a jump-oriented programming exploit has occurred.
Example 26 is a programmable device, comprising: means for configuring hardware performance monitoring counters to count mispredicted branches; means for configuring a hardware last branch mechanism to capture a predetermined category of branches; means for collecting performance monitoring counter data and last branch data responsive to an interrupt generated upon a predetermined condition of the hardware performance monitoring counters; and means for analyzing the performance monitoring counter data and the last branch data to determine whether a malware exploit has occurred.
In Example 27 the subject matter of Example 26 optionally includes wherein the malware exploit is a return-oriented programming exploit.
In Example 28 the subject matter of Example 27 optionally includes wherein means for analyzing the performance monitoring counter data and the last branch data to determine whether a malware exploit has occurred comprises: means for counting last branch instances having a from address pointing to a return instruction and a to address pointing to an instruction not following a call instruction; means for modifying a return-oriented programming event counter; and means for indicating a return-oriented programming event responsive to the return-oriented programming event counter having a predetermined relation to a predetermined threshold value.
In Example 29 the subject matter of Example 26 optionally includes wherein the malware exploit is a jump-oriented programming exploit.
In Example 30 the subject matter of Example 29 optionally includes wherein the means for analyzing the performance monitoring counter data and the last branch data to determine whether a malware exploit has occurred comprises: means for looking for a sequence of last branch instances having from addresses pointing to an indirect jump instruction with an alternating constant address of a dispatcher's entry point and leave point.
In Example 31 the subject matter of Example 26 optionally includes wherein the predetermined category of branches comprises return instructions.
In Example 32 the subject matter of Example 26 optionally includes wherein the predetermined category of branches comprises near indirect jump instructions.
In Example 33 the subject matter of Examples 26-32 optionally includes further comprising: means for taking an anti-malware action responsive to a determination that a malware exploit has occurred, wherein the anti-malware action comprises one or more of termination or changing a sensitivity of a monitoring behavior of a program that triggered the malware exploit.
Example 34 is a machine readable medium, on which are stored instructions, comprising instructions that when executed cause a programmable device to: configure hardware performance monitoring counters to count mispredicted branches; configure a hardware last branch mechanism to capture a predetermined category of branches; collect performance monitoring counter data and last branch data responsive to an interrupt generated upon a predetermined condition of the hardware performance monitoring counters; and analyze the performance monitoring counter data and the last branch data to determine whether a malware exploit has occurred.
In Example 35 the subject matter of Example 34 optionally includes wherein the instructions that when executed cause the programmable device to analyze the performance monitoring counter data and the last branch data to determine whether a malware exploit has occurred comprise instructions that when executed cause the programmable device to: count last branch instances having a from address pointing to a return instruction and a to address pointing to an instruction not following a call instruction; modify a return-oriented programming event counter; and indicate a return-oriented programming event responsive to the return-oriented programming event counter having a predetermined relation to a predetermined threshold value.
In Example 36 the subject matter of Example 34 optionally includes wherein the instructions that when executed cause the programmable device to analyze the performance monitoring counter data and the last branch data to determine whether a malware exploit has occurred comprise instructions that when executed cause the programmable device to: look for a sequence of last branch instances having from addresses pointing to an indirect jump instruction with an alternating constant address of a dispatcher's entry point and leave point.
In Example 37 the subject matter of Example 34 optionally includes wherein the predetermined category of branches comprises return instructions or near indirect jump instructions.
In Example 38 the subject matter of Examples 34-37 optionally includes wherein the instructions further comprise instructions that when executed cause the programmable device to: take an anti-malware action responsive to a determination that a malware exploit has occurred, wherein the anti-malware action comprises one or more of terminating a program that triggered the malware exploit or changing a sensitivity of a monitoring behavior of the program.
Example 39 is a programmable device programmed to detect malware exploits, comprising: a processor, comprising: a performance monitoring unit; and a last branch record stack; and a memory, coupled to the processor, on which are stored instructions, comprising instructions that when executed cause the processor to: configure the performance monitoring unit to count mispredicted branches; configure the last branch record stack to capture a predetermined category of branches; collect mispredicted branch counts and last branch data from the performance monitoring unit and last branch record stack, responsive to an interrupt generated upon a predetermined condition of the performance monitoring unit; and analyze the mispredicted branch counts and the last branch data to determine whether a malware exploit has occurred.
In Example 40 the subject matter of Example 39 optionally includes wherein the instructions that when executed cause the processor to analyze the mispredicted branch counts and the last branch data comprise instructions that when executed cause the processor to: increment a return-oriented programming event counter responsive to a last branch instance having a from address pointing to a return instruction and a to address pointing to an instruction not following a call instruction; and indicate a return-oriented programming exploit has occurred responsive to the return-oriented programming event counter meeting or exceeding a predetermined threshold value.
In Example 41 the subject matter of Example 39 optionally includes wherein the predetermined category of branches comprises return instructions or near indirect jump instructions.
In Example 42 the subject matter of Example 39 optionally includes wherein the instructions that when executed cause the processor to analyze the mispredicted branch counts and the last branch data comprise instructions that when executed cause the processor to: look for a sequence of last branch instances having from addresses pointing to an indirect jump instruction with an alternating constant address of a dispatcher's entry point and leave point.
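The jump-oriented programming heuristic of Examples 36 and 42 can be sketched as follows. In a JOP chain, the dispatcher gadget's indirect jump appears at a constant address in every other last branch record (dispatcher to gadget, gadget back to dispatcher), while the interleaved gadget-end jump addresses vary. This Python sketch is a simplified model under that assumption; the function name and parameters are illustrative, not from the specification.

```python
def looks_like_jop(from_addrs: list, min_repeats: int = 4) -> bool:
    """Flag a possible JOP dispatcher loop: a constant 'from' address
    recurring at every other last-branch record, interleaved with
    varying gadget addresses."""
    if len(from_addrs) < 2 * min_repeats:
        return False
    # The constant dispatcher address may occupy even or odd positions
    # in the record stream, so check both phases.
    for phase in (0, 1):
        slot = from_addrs[phase::2]
        other = from_addrs[1 - phase::2]
        if (len(slot) >= min_repeats
                and len(set(slot[:min_repeats])) == 1
                and len(set(other[:min_repeats])) > 1):
            return True
    return False
```

An ordinary program rarely produces a long run of indirect jumps in which one source address strictly alternates with varying ones, so this pattern is a strong JOP signal.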
In Example 43 the subject matter of Examples 39-42 optionally includes wherein the instructions further comprise instructions that when executed cause the processor to: take an anti-malware action responsive to a determination that a malware exploit has occurred.
Example 44 is a method of detecting malware exploits, comprising: counting mispredicted branches in a performance monitoring unit of a processor; capturing last branch information by the processor; collecting a mispredicted branch count and the last branch information responsive to a performance monitoring interrupt; determining whether a malware exploit has occurred based on the mispredicted branch count and last branch information; and taking an anti-malware action responsive to the determination that an exploit has occurred.
In Example 45 the subject matter of Example 44 optionally includes wherein counting mispredicted branches comprises configuring a control register of the performance monitoring unit to cause the performance monitoring unit to count mispredicted branches, further comprising configuring the performance monitoring unit to generate the performance monitoring interrupt responsive to counting a threshold number of mispredicted branches.
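The overflow behavior described in Example 45 can be modeled in software: a counter advances on each mispredicted branch and fires an interrupt-like callback when the programmed threshold is reached, then re-arms. This is a behavioral sketch only; the class and method names are hypothetical and do not correspond to any actual performance monitoring unit register interface.

```python
class MispredictCounter:
    """Software model of a PMU counter programmed to interrupt after a
    threshold number of mispredicted branches (illustrative names)."""

    def __init__(self, threshold, on_interrupt):
        self.threshold = threshold        # mispredictions before interrupt
        self.on_interrupt = on_interrupt  # models the performance
                                          # monitoring interrupt handler
        self.count = 0

    def branch_retired(self, mispredicted):
        """Record one retired branch; fire the 'interrupt' at threshold."""
        if mispredicted:
            self.count += 1
            if self.count >= self.threshold:
                self.on_interrupt(self.count)
                self.count = 0  # re-arm, as a PMI handler would
```

At each modeled interrupt, the handler would collect the last branch information and run the analyses of Examples 47 and 48.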
In Example 46 the subject matter of Examples 44-45 optionally includes wherein capturing last branch information comprises: configuring a last branch record unit to capture return instruction branches or near indirect jump branches.
In Example 47 the subject matter of Examples 44-45 optionally includes wherein the malware exploit is a return-oriented programming exploit, and wherein determining whether a malware exploit has occurred comprises: counting occurrences of a last branch instance having a from address pointing to a return instruction and a to address pointing to an instruction not following a call instruction; and indicating the malware exploit has occurred responsive to a threshold number of occurrences.
In Example 48 the subject matter of Examples 44-45 optionally includes wherein the malware exploit is a jump-oriented programming exploit, and wherein determining whether a malware exploit has occurred comprises: finding a sequence of last branch instances having from addresses pointing to indirect jump instructions, alternating with a constant address of a dispatcher entry point or leave point.
It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments may be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention therefore should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.