Testing even simple commercial computer applications can be extremely complex because the number of independent code paths to be tested can be very large. Each of these code paths, in turn, is composed of a very large number of individual functions, which may be composed of one or more blocks of non-adjoining instructions that further complicate testing. There is a need in the computer industry for an approach that performs such complex testing in an efficient manner.
The present disclosure is directed to systems and methods that automate detection of input/output validation (e.g., testing) and output resource management vulnerability. The systems and methods may analyze a set of computer routines. The analysis may include a determination of a likelihood of vulnerability to unexpected behavior for one or more computer routines of the set. Based upon the analysis, the systems and methods may identify the one or more computer routines of the set having the likelihood of vulnerability. The systems and methods may asynchronously and dynamically manipulate at least one of the one or more computer routines through a testing technique. The systems and methods may determine unexpected behavior of at least one of the one or more computer routines.
In some embodiments, the systems and methods may deploy one or more patches to correct the unexpected behavior of at least one of the one or more computer routines. In some embodiments, the systems and methods may analyze the set of computer routines and at least one corresponding sequence of computer routines of the set.
In some embodiments of the systems and methods, the analysis may further include at least one of the following: extracting a histogram including a frequency of usage associated with at least one computer routine of the set, determining size of one or more buffer (e.g., memory segment) read or write computer operations associated with the one or more computer routines, determining the size of one or more corresponding stacks associated with the one or more computer routines, determining size of one or more memory read or write operations based upon examining a corresponding loop size, and performing taint analysis of at least one computer routine of the set. The histogram may include, but is not limited to, at least one of the following: a log file, a graph, a table, other user display and other types of display. Some embodiments may include one or more computer threads. Some embodiments may include two or more computer threads (e.g., multi-threaded). In some embodiments, a computer thread (e.g., computer thread of execution) may represent the smallest sequence of programmed instructions that can be managed independently by a scheduler (e.g., a method by which resources are assigned to complete work), which may be part of the computer operating system. In some embodiments, the computer threads may include a sequence of computer routines which may include at least (e.g., one or more) of function calls and system calls. According to some embodiments, the histogram may depict how many times a given function or system call of a computer thread of a computer application is executed.
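As an illustration of the histogram extraction described above, the following is a minimal sketch in Python, assuming a hypothetical call trace of (thread_id, routine_name) events gathered by instrumentation; the engine itself may collect and store this data differently.

```python
from collections import Counter

def build_call_histogram(call_trace):
    """Count how often each routine (function or system call) appears in a
    per-thread trace of (thread_id, routine_name) events.

    `call_trace` is a hypothetical list of (thread_id, routine_name) tuples,
    e.g. produced by an instrumentation hook; the real engine may gather
    this data differently."""
    histogram = {}
    for thread_id, routine in call_trace:
        histogram.setdefault(thread_id, Counter())[routine] += 1
    return histogram

if __name__ == "__main__":
    trace = [(1, "read"), (1, "parse_header"), (1, "read"), (2, "malloc")]
    for tid, counts in build_call_histogram(trace).items():
        print(f"thread {tid}: {dict(counts)}")
```

The resulting per-thread counts can then be rendered as a log file, graph, table, or other display, as noted above.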
In some embodiments of the systems and methods, the one or more computer routines may include at least one (or more) of: a function and a system call. Some embodiments of the systems and methods may manipulate the at least one of the one or more computer routines by at least one of the following: modifying data associated with the one or more computer routines, the data exceeding a corresponding buffer (e.g., memory segment) size, and modifying values that are declared in memory regions associated with (e.g., accessed by) the one or more computer routines.
Some embodiments of the systems and methods may determine unexpected behavior of at least one of the one or more computer routines including determining that a control flow of a thread associated with the one or more computer routines has changed as a result of the manipulation, determining a failure condition that caused the thread to change its control flow, and displaying the failure condition.
In some embodiments of the systems and methods, for at least one function of the one or more computer routines, the computer testing technique may provide at least one of invalid, unexpected, and random data to at least one of an input of the at least one function, logic within the at least one function, and an output of the at least one function. In some embodiments of the systems and methods, for at least one system call of the one or more computer routines, the computer testing technique may provide at least one of invalid, unexpected, and random data to a system call parameter associated with the at least one system call.
In some embodiments of the systems and methods, the system call parameter may be associated with at least one of the following: thread synchronization, process synchronization, thread scheduling, process scheduling, memory, memory allocation, memory de-allocation, memory writing, memory reading, a network socket, creation of a network socket, network socket input, network socket output, pipe creation, system input, system output, shared memory fifo creation, a terminal input, a terminal output, file handling, file creation, file writing, file reading, disk input, and disk output.
In some embodiments, the systems may include an analysis engine. The systems may also include a validation engine that may be communicatively coupled to the analysis engine (e.g., threads and processes being examined). The systems may also include an instrumentation engine that may be communicatively coupled to at least one of the analysis engine and the validation engine.
In some embodiments, the analysis engine and the validation engine may comprise a processor fabric including one or more processors. In some embodiments, the analysis engine, the validation engine, and an instrumentation engine may comprise a processor fabric including one or more processors.
The foregoing will be apparent from the following more particular description of example embodiments of the disclosure, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present disclosure.
A description of example embodiments of the disclosure follows.
The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
Some embodiments may help in improving robustness of not only the basic functionality of software (e.g., computer applications), but also software error handling functionality, and may also exercise those code paths that are too difficult to reach when a test suite designed to exercise and test the functionality of the application is executed. In some embodiments, such software testing can be automated in a user friendly manner. In some embodiments, the status of software testing may be displayed to testers, developers, and management of the company developing the software under test.
Multi-State Stress and Computer Application Execution:
Testing of computer routines (including but not limited to functions, system calls, and other types of computer routines) to detect and handle errors may be complex. Cyclomatic complexity is a measure of the number of linearly independent paths (e.g., independent code paths) through a computer application's source code. Some computer applications may be very complex and therefore more difficult to stress test. As such, error handling in larger computer applications is of great importance. Therefore, some embodiments exercise and check the error handling functionality (e.g., capabilities) of computer routines (including but not limited to functions or system calls).
In some embodiments, differences exist between the states at which stress may be applied. In some example embodiments, (1) applying stress on an input may help to find the function or system call's vulnerability against erroneous (e.g., bad) input. In some example embodiments, (2) changing data in the body of the function may serve to exercise code paths in the function that are not otherwise easily exercised. In some example embodiments, (3) artificially changing the output values may create unusual conditions, so that error conditions and/or exception handling code may be exercised. According to some example embodiments, the first two states (1) and (2) may therefore be considered as “Code Exercise Test” states and the third state (3) may be considered as a “Negative Test” state.
However, in some embodiments, such stress (e.g., stress tests) is not limited to being applied at only three states, and may be applied at four or more states. Some embodiments may include a fourth state (4) which may be time dependent. In some example embodiments, for code that executes repeatedly, stress may be applied to a given instance of invocation. In an example embodiment, stress may be applied on the N-th (e.g., first, hundredth, or other number) instance of execution.
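As a hedged illustration of this fourth, time-dependent state, the sketch below applies a stress only on the N-th invocation of a routine; the `stress_on_nth_call` decorator and `mutate_args` callback are hypothetical names used only for this example and are not the disclosed implementation.

```python
import functools

def stress_on_nth_call(n, mutate_args):
    """Apply a stress (argument mutation) only on the n-th invocation of the
    wrapped routine; all other invocations run unmodified.  `mutate_args` is
    a caller-supplied function returning perturbed (args, kwargs)."""
    def decorator(func):
        call_count = {"value": 0}

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            call_count["value"] += 1
            if call_count["value"] == n:
                args, kwargs = mutate_args(args, kwargs)
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Example: corrupt the input of parse() only on its 100th execution.
@stress_on_nth_call(100, lambda a, k: (("\xff" * 4096,), k))
def parse(data):
    return len(data)
```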
Some embodiments, before applying stress testing, may identify at least one of critical and high value computer routines (e.g., at least one of functions, system calls, and other computer routines). As such, this disclosure next describes methods (and systems) of identifying at least one of critical and high value functions and then subjecting these functions to the three-state stress described above.
Detection Process
As part of the manipulation, some example embodiments may fuzz (e.g., perform fuzz testing on, including but not limited to providing invalid, unexpected, and random data to the inputs of) the computer routine (e.g., including but not limited to a function or system call) with data larger than the buffer size (e.g., memory segment size) to examine if it is vulnerable to buffer error vulnerability. Some example embodiments may perform fuzzing using string inputs larger than the stack size.
Some example embodiments may fuzz numbers (e.g., which may include parameters, input and output values) that are declared in memory regions that may (or may not) be provided by users by one or more of the following: (1) some example embodiments may change numbers to be larger than a given architecture size (such as 8/32/64-bit or N-bit architecture size) if the underlying instructions are math operations, including but not limited to add, subtract, multiply and/or divide operations; (2) some example embodiments may change the sign of such numbers; (3) some example embodiments may change the value of such numbers to zero, if the disassembly (conversion of a program from its executable form into a form of assembler language that is readable by a human), shows division type operation and/or if the number is used as an address; and (4) other method(s) of fuzzing numbers. Numbers may include integers and/or floating point numbers (including but not limited to single-precision, double-precision, N-bit precision, and/or other types of precision) and may include a corresponding sign.
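The following is a minimal sketch of the number-fuzzing choices listed above (a value exceeding the N-bit architecture size, a sign flip, zero, and a random value); `fuzz_number` is an illustrative helper under these assumptions, not the disclosed implementation.

```python
import random

def fuzz_number(value, arch_bits=64):
    """Produce candidate fuzz values for a numeric parameter: a value larger
    than the N-bit architecture size (useful when the underlying instructions
    are math operations), the sign-flipped value, zero (useful when the
    disassembly shows a division-type operation or the number is used as an
    address), and a random value."""
    overflow = (1 << arch_bits) + abs(int(value)) + 1   # exceeds the N-bit range
    return [overflow,
            -value,
            0,
            random.randint(-(1 << arch_bits), 1 << arch_bits)]

print(fuzz_number(42))
```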
In order to achieve the manipulation of the one or more computer routines (for example, fuzz input and/or output values), some embodiments may modify one or more stacks (e.g., computer stack arrays) in computer memory. In order to achieve the manipulation, some embodiments may modify the stack pointer and/or the values within the stack. In order to achieve the manipulation, some embodiments may modify one or more of the following computer registers: the EAX (accumulator register), EBX (base register), ECX (counter register), EDX (data register), ESI (source index register), EDI (destination index register), EBP (base pointer), and/or ESP (stack pointer), other registers, and other pointers.
The method (and/or system) 300 may determine unexpected behavior of at least one of the one or more computer routines 308. Some example embodiments may check to see if the control flow of the thread changed as a result of fuzzing by comparing the control flow extracted with and without that function or system call being attacked. Some example embodiments may identify the precise failure that caused the thread to change its control flow. Some embodiments may report the unexpected behavior to a display (e.g., dashboard) in the form of a failure condition being displayed, in a standard format, including but not limited to a syslog format and/or other formats.
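A minimal sketch of this control-flow comparison is shown below, assuming the per-thread call traces recorded with and without fuzzing are available as ordered lists; the syslog-style line format and function names are illustrative only.

```python
import itertools

def control_flow_diverged(baseline_trace, fuzzed_trace):
    """Compare the ordered routine-call trace of a thread recorded without
    fuzzing (baseline) against the trace recorded while one routine was
    fuzzed.  Returns the first point of divergence, or None if the control
    flow did not change."""
    for index, (expected, observed) in enumerate(
            itertools.zip_longest(baseline_trace, fuzzed_trace)):
        if expected != observed:
            return index, expected, observed
    return None

def report_failure(routine, divergence):
    """Emit a syslog-style failure line for a dashboard; format is illustrative."""
    index, expected, observed = divergence
    print(f"<11>fuzzer: routine={routine} control-flow change at call #{index}: "
          f"expected={expected} observed={observed}")

baseline = ["open", "read", "parse", "close"]
fuzzed = ["open", "read", "abort_handler"]
divergence = control_flow_diverged(baseline, fuzzed)
if divergence:
    report_failure("parse", divergence)
```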
In some embodiments the technique used to fuzz a function may include providing fake (e.g., false or unexpected) input and letting the function execute with the manipulated input. In some embodiments, a system call may execute normally, but fuzzing may overwrite the system call's result before it is read by an entity making the system call. As such, in some embodiments, the method (and system) of fuzzing may be different between functions and system calls.
In some embodiments, the method (and/or system) 300 may optionally deploy 310 one or more patches to correct the unexpected behavior of at least one of the one or more computer routines. In some embodiments, the method (and/or system) may analyze the set of computer routines and at least one corresponding sequence of computer routines of the set.
In some embodiments of the method (and/or system), the analysis 302 may further include at least one of the following. Some embodiments may extract a histogram including a frequency of usage associated with at least one computer routine of the set. Some example embodiments may extract a histogram of the most commonly used functions and system calls. Some embodiments may determine size of one or more buffer read or write computer operations associated with the one or more computer routines. Some embodiments may determine size of one or more corresponding stacks associated with the one or more computer routines. Some example embodiments may identify functions that include large buffer read and/or write operations and/or their corresponding stack sizes at the time of creation of the stack. Some embodiments may determine size of one or more memory read or write operations based upon examining a corresponding loop size. Some embodiments may perform taint analysis of at least one computer routine of the set. Some embodiments may identify at least one instruction associated with the one or more computer routines, the at least one instruction performing a computer operation that includes at least one of incrementing a value, decrementing a value, adding a value, subtracting a value, multiplying a value, and dividing a value. Some example embodiments may identify computer instructions (e.g., computer routines) that perform math operations, including but not limited to increment, decrement, add, subtract, multiply, and/or divide, of two or more numbers. Some example embodiments may determine if at least one of the two or more numbers are in user-provided input by performing taint analysis.
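As one hedged example of identifying instructions that perform math operations, the sketch below scans a textual x86 disassembly listing for arithmetic mnemonics; the listing format is assumed for illustration, and a real implementation would more likely operate on decoded instructions rather than text.

```python
MATH_MNEMONICS = {"add", "sub", "inc", "dec", "imul", "mul", "idiv", "div"}

def find_math_instructions(disassembly_lines):
    """Scan a textual disassembly listing (one instruction per line, e.g.
    '401023: add eax, ecx') and return the lines whose mnemonic is an
    arithmetic operation."""
    hits = []
    for line in disassembly_lines:
        parts = line.split()
        # parts[0] is assumed to be the address, parts[1] the mnemonic.
        if len(parts) >= 2 and parts[1].lower() in MATH_MNEMONICS:
            hits.append(line)
    return hits

listing = ["401020: mov eax, [ebp+8]",
           "401023: add eax, ecx",
           "401025: idiv ebx"]
print(find_math_instructions(listing))
```

Routines whose operands are reached by user-provided input, as established by the taint analysis mentioned above, would then be prioritized for fuzzing.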
In some embodiments of the method (and/or system), the one or more computer routines may include at least one of: a function and a system call. Some example embodiments may identify the functions and system call sequences used by each thread for each use case to map the run time control flow of an application.
Some embodiments of the method (and/or system) may manipulate 306 the at least one of the one or more computer routines by at least one of the following: modifying data associated with the one or more computer routines, the data exceeding a corresponding buffer size, and modifying values that are declared in memory regions associated with the one or more computer routines.
Some embodiments of the method (and/or system) may determine unexpected behavior 308 of at least one of the one or more computer routines including determining that a control flow of a thread associated with the one or more computer routines has changed as a result of the manipulation, determining a failure condition that caused the thread to change its control flow, and displaying the failure condition.
In some embodiments of the method (and/or system), for at least one function of the one or more computer routines, the computer testing technique in 306 may provide at least one of invalid, unexpected, and random data to at least one of an input of the at least one function, logic within the at least one function, and an output of the at least one function. In some embodiments of the method (and/or system), for at least one system call of the one or more computer routines, the computer testing technique may provide at least one of invalid, unexpected, and random data to a system call parameter associated with the at least one system call.
In some embodiments, a system call parameter may include a return value of a system call. As such, in some embodiments, the system call return value may be "overwritten," that is, a known system call parameter (e.g., a system call return value or system call error code) may be replaced with a fake (false or unexpected) result.
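A minimal sketch of this overwrite-the-result style of system-call fuzzing is shown below; the `overwrite_result` wrapper is a hypothetical user-space stand-in for the disclosed mechanism, which may operate at a lower level.

```python
import errno
import os

def overwrite_result(syscall, fake_result):
    """Let the wrapped call execute normally, then discard its real result and
    hand the caller a fake one (e.g., an error code), so the system call runs
    but its result is overwritten before the caller reads it."""
    def wrapper(*args, **kwargs):
        real = syscall(*args, **kwargs)      # the call still executes normally
        return fake_result                   # the caller sees the fake value
    return wrapper

# Example: make every write appear to fail with -ENOSPC even though it succeeded.
fuzzed_write = overwrite_result(os.write, -errno.ENOSPC)
```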
In some embodiments of the method (and/or system), the system call parameter may be associated with at least one of the following: thread synchronization, process synchronization, thread scheduling, process scheduling, memory, memory allocation, memory de-allocation, memory writing, memory reading, a network socket, creation of a network socket, network socket input, network socket output, pipe creation, system input, system output, shared memory fifo creation, a terminal input, a terminal output, file handling, file creation, file writing, file reading, disk input, and disk output.
In some embodiments, the systems may include an analysis engine. The systems may also include a validation engine that may be communicatively coupled to the analysis engine (e.g., threads and processes being examined). The systems may also include an instrumentation engine that may be communicatively coupled to at least one of the analysis engine and the validation engine.
Computer Operating Environments and Resources:
In some embodiments, the computer routines (collectively, 412, 414, 416, 420, 422, and 424 in
In some embodiments, testing of software may be accomplished by a combination of manual and/or automated methods that exercise independent code paths. In some embodiments, the set of independent code paths may be a very large vector space. Therefore, the number of tests to be performed may be very large. Manual testing of the large vector space may be cumbersome, as timely creation of dependencies that may be required by the test suite rests squarely on the tester, and executing the test suite in a repeatable manner when feature-functionality changes may be challenging. Therefore, some embodiments provide automated testing.
To overcome this challenge, some embodiments provide an advantage in that the automated testing methods (and systems) of some embodiments do not require that dependencies, including test files, input data, system resources, and hardware, be ready prior to the launch of the test suite. As such, some embodiments overcome the deficiencies of existing approaches, which require dependencies to be available to avoid resulting failures.
As the test suite executes, the application may experience stress due to minor variations in the environment in which the application operates. In some embodiments, the operating environment 430 may include the kernel 436, the runtime libraries 434 and/or other libraries 434 used by the application as well as the external data that is presented to the application 432. To test the computer application 432, some embodiments create variations in the operating environment using automated mechanisms.
In some embodiments, other runtime stress may be introduced in real time through a runtime computer routine's (such as an API, function, or system call) “hooking” of kernel 436, library 434, and raw application 432 computer code. As known in the art of computer programming, “hooking” may include one or more of a range of techniques to alter behavior of applications, of an operating system, and/or other software components by intercepting computer routine calls, messages and/or events passed between software components (including but not limited to the software components illustrated in
In some embodiments, external data input stress may be introduced in real time by changing the runtime arguments presented to a computer routine (including but not limited to an API and/or a function and/or system call), by changing the body of a given computer routine at run time, and/or even by changing the return value of a given library and/or user functionality at run time so that the stress percolates upwards into the call stack.
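The sketch below illustrates the general idea of hooking a library routine so that its arguments and/or return value can be perturbed at run time and the stress percolates up the call stack; it uses simple Python monkey-patching as a stand-in, whereas hooking kernel or raw application code would require platform-specific techniques not shown here.

```python
import json

def hook(module, name, mutate_args=None, mutate_result=None):
    """Replace module.name with a wrapper that can perturb the arguments on
    the way in and/or the return value on the way out.  Returning the
    original lets the caller remove the hook later."""
    original = getattr(module, name)

    def wrapper(*args, **kwargs):
        if mutate_args:
            args, kwargs = mutate_args(args, kwargs)
        result = original(*args, **kwargs)
        return mutate_result(result) if mutate_result else result

    setattr(module, name, wrapper)
    return original

# Example: make json.loads appear to return an empty document.
hook(json, "loads", mutate_result=lambda r: {})
print(json.loads('{"a": 1}'))   # prints {} while the hook is installed
```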
Stress-Testing Displays:
As such,
As illustrated in
Once the list of targeted stress is set up, a test suite may be used to exercise the application under test. Some embodiments may cause the execution of the appropriate stress in the stress vector set up by the tester. For situations where the stress is a negative stress, which tests a code block's handling of invalid input and/or unexpected behavior with regard to the code block's functionality (e.g., exception handling), some embodiments may report the success and/or failure of the stress vector by observing the next transition after the test. Negative stress that causes the application to crash or hang may be reported on a dashboard and/or web portal 600. For code exerciser tests, normal testing may continue and other logical, performance, load, runtime, resource exhaustion, and security testing may continue.
In some embodiments, for each critical function transition on the call graph, a tester can set up the aforementioned variety of stress using a graphical user interface or the command line. In some embodiments, for each stress instance, a few fixed parameters may be provided. In some embodiments, these fixed parameters may include (a) the function transition boundary, (b) the thread number, (c) the instance count at which the stress may be applied and/or (d) the instance count at which the stress may be removed. In some embodiments, the tester may also indicate if the stress instance is a code exerciser stress, and/or a negative test, which tests a code block's handling of invalid input and/or unexpected behavior with regard to the code block's functionality (e.g., exception handling). If the stress is a negative stress (e.g., a stress associated with a negative test), then the tester may also specify the next transition that may occur for the stress test to pass and for the stress test to fail.
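A minimal sketch of how such a stress instance might be represented is shown below; the field names are illustrative and are not the product's actual configuration schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StressVector:
    """One stress instance as a tester might configure it from the GUI or
    command line; field names are illustrative only."""
    transition: str            # (a) function transition boundary, e.g. "main->parse"
    thread_number: int         # (b) thread on which the stress is applied
    apply_at_instance: int     # (c) instance count at which the stress is applied
    remove_at_instance: int    # (d) instance count at which the stress is removed
    negative_test: bool        # True = negative test, False = code exerciser stress
    pass_transition: Optional[str] = None   # expected next transition for a pass
    fail_transition: Optional[str] = None   # expected next transition for a fail

vector = StressVector("main->parse", thread_number=3,
                      apply_at_instance=100, remove_at_instance=101,
                      negative_test=True,
                      pass_transition="parse->error_handler",
                      fail_transition="parse->write_output")
```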
According to some embodiments, the following indicates how each variable parameter of the stress may be reflected in a list, such as the dashboard of
In some embodiments, each identified critical functionality may be subjected to a variety of stress. Once the aforementioned list has been exhaustively tested, the next set of critical functionality may be targeted until gradually the last of the functionality is tested. In some embodiments, one advantage of the aforementioned real time code substitution mechanism is that it may also be used to return errors, which enables hard-to-reach independent code paths to be exercised as well.
Monitoring Agent and Analysis Engine Infrastructure
As the application's code begins to load into memory, the Instrumentation and Analysis Engine (i.e., instrumentation engine) 705 performs several different load time actions. Once all the modules have loaded up, the instrumented instructions of the application generate runtime data. The Client Daemon 708 initializes the Instrumentation and Analysis Engine 705, the Streaming Engine 710 and the GUI 711 processes in the CPU at 736 by reading one or more configuration files from the Configuration database 709. It also initializes intercommunication pipes between the Instrumentation and Analysis Engine 705, the Streaming Engine, the GUI, and itself. The Client Daemon also ensures that if any Monitoring Agent process, including itself, becomes unresponsive or dies, it will be regenerated. This ensures that the Monitoring Agent 702 is a high availability enterprise grade product.
The Instrumentation and Analysis Engine 705 pushes load and runtime data collected from the application into the Streaming Engine. The Streaming Engine packages the raw data from the Monitoring Agent 702 into the PDU. Then it pushes the PDU over a high bandwidth, low latency communication channel 712 to the Analysis Engine 728. If the Monitoring Agent 702 and the Analysis Engine 728 are located on the same machine this channel can be a memory bus. If these entities are located on different hardware but in the same physical vicinity, the channel can be an Ethernet or Fiber based transport, which allows remote connections to be established between the entities to transport the load and runtime data across the Internet.
The infrastructure of the Analysis Engine 728 includes the Network Interface Card (NIC) 713, the Packet Pool 714, the Time Stamp Engine 715, the Processor Fabric 716, the Hashing Engine 717, the TCAM Engine 718, the Application Map database 719, and the Thread Context database 720, which may contain a table of the memory addresses used by a class of user executing an application monitored by the system. The infrastructure of the Analysis Engine 728 further includes the Content Analysis Engine 721, the Events and Event Chains 722, the Event Management Engine 723, the Event Log 724, the Application Daemon 725, the Analysis Engine Configuration database 726, the Network Interface 727, the Dashboard or CMS 737, the SMS/SMTP Server 729, the OTP Server 730, the Upgrade Client 731, the Software Upgrade Server 732, Software Images 733, the Event Update Client 734, and the Event Upgrade Server 735.
The PDU together with the protocol headers is intercepted at the Network Interface Card 713, from where the PDU is pulled and put into the Packet Pool 714. The timestamp fields in the PDU are filled up by the Time Stamp Engine 715. This helps to make sure that no packet is stuck in the Packet Pool buffer for an inordinately long time.
The Processor Fabric 716 pulls packets from the packet buffer and the address fields are hashed and replaced in the appropriate location in the packet. This operation is performed by the Hashing Engine 717. Then the Processor Fabric starts removing packets from the packet buffer in the order they arrived. Packets with information from the load time phase are processed such that the relevant data is extracted and stored in the Application Map database 719. Packets with information from the runtime phase are processed in accordance with
The transition target data is saved in the Thread Context database 720, which has a table for each thread. The Processor fabric also leverages the TCAM Engine 718 to perform transition and memory region searches. Since the processor fabric performs lookups using hashes, the actual time used is predictable and very short. By choosing the number of processors in the fabric carefully, per packet throughput can be suitably altered.
When the Analysis Engine 728 performs searches, it may, from time to time, find an invalid transition, an invalid operation of critical/admin functions or system calls, or a memory write on undesirable locations. In each of these cases, the Analysis Engine 728 dispatches an event of the programmed severity as described by the policy stored in the Event and Event Chain database 722 to the Event Management Engine 723. The raw event log is stored in the Event Log Database 724. The Dashboard/CMS 737 can also access the Event Log and display application status.
A remedial action is also associated with every event in the Event and Event Chain database 722. A user can set the remedial action from a range of actions, from ignoring the event at one extreme to terminating the thread at the other extreme. A remedial action can be recommended to the analyst using the Event Update Client 734 and Event Upgrade Server 735. In order to change the aforementioned recommended action, an analyst can use the Dashboard/CMS 737 accordingly. The Dashboard/CMS 737 provides a GUI interface that displays the state of each monitored application and allows a security analyst to have certain control over the application, such as starting and stopping the application. When an event is generated, the Event Chain advances from the normal state to a subsequent state. The remedial action associated with the new state can be taken. If the remedial action involves a non-ignore action, a notification is sent to the Security Analyst using an SMS or SMTP Server 729. The SMS/SMTP address of the security analyst can be determined using an LDAP or other directory protocol. The process of starting or stopping an application from the Dashboard/CMS 737 requires elevated privileges, so the security analyst must authenticate using an OTP Server 730.
New events can also be created and linked into the Event and Event Chain database 722 with a severity and remedial action recommended to the analyst. This allows unique events and event chains for a new attack at one installation to be dispatched to other installations. For this purpose, all new events and event chains are loaded into the Event Upgrade Server 735. The Event Update Client 734 periodically connects and authenticates to the Event Upgrade Server 735 to retrieve new events and event chains. The Event Update Client then loads these new events and event chains into the Events and Events Chain database 722. The Content Analysis Engine 721 can start tracking the application for the new attacks encapsulated into the new event chains.
Just as with the Client Daemon, the Appliance Daemon 725 is responsible for starting the various processes that run on the Analysis Engine 728. For this purpose, it must read configuration information from the Analysis Engine Configuration database 726. The daemon is also responsible for running a heartbeat poll for all processes in the Analysis Engine 728. This ensures that all the devices in the Analysis Engine ecosystem are in top working condition at all times. Loss of three consecutive heartbeats suggests that the targeted process is not responding. If any process has exited prematurely, the daemon will revive that process including itself.
From time to time, the software of the Appliance host, the Analysis Engine 728, or the Monitoring Agent 702 may be upgraded for purposes such as fixing errors in the software. For this purpose, the Upgrade Client 731 constantly checks with the Software Upgrade Server 732, where the latest software is available. If the client finds that the entities in the Analysis Engine 728 or the Monitoring Agent 702 are running an older image, it will allow the analysts to upgrade the old image with a new image from the Software Upgrade Server 732. New images are bundled together as a system image 733. This makes it possible to provision the appliance or the host with tested compatible images. If one of the images of a subsystem in the Analysis Engine 728 or the Monitoring Agent 702 does not match the image for the same component in the System image, then all images will be rolled to a previous known good system image.
PDU for Monitoring Agent and Analysis Engine Communication
The Application Provided Data Section contains data from various registers as well as source and target addresses that are placed in the various fields of this section. The Protocol Version contains the version number of the PDU 752. As the protocol version changes over time, the source and destination must be capable of continuing to communicate with each other. This 8 bit field describes the version number of the packet as generated by the source entity. A presently unused reserved field 756 follows the Protocol Version field.
The next field of the Application Provided Data Section is the Message Source/Destination Identifiers 757, 753, and 754, which are used to exchange traffic within the Analysis Engine infrastructure as shown in
Monitoring Agent Side Entities
1. GUI
2. Instrumentation and Analysis Engine
3. Client Message Router
4. Streaming Engine
5. Client Side Daemon
6. CLI Engine
7. Client Watchdog
8. Client Compression Block
9. Client iWarp/RDMA/ROCE Ethernet Driver (100 Mb/1 Gb/10 Gb)
Per PCI Card Entities (Starting Address=20+n*20)
20. Analysis Engine TOE block
21. Analysis Engine PCI Bridge
22. Decompression Block
23. Message Verification Block
24. Packet Hashing Block
25. Time-Stamping Block
26. Message Timeout Timer Block
27. Statistics Counter Block
28. Analysis Engine Query Router Engine
29. Analysis Engine Assist
Analysis Engine Host Entities
200. Analysis Engine PCIe Driver
201. Host Routing Engine
202. Content Analysis Engine
203. Log Manager
204. Daemon
205. Web Engine
206. Watchdog
207. IPC Messaging Bus
208. Configuration Database
209. Log Database
SIEM Connectors
220. SIEM Connector 1—Dashboard/CMS
221. SIEM Connector 2—HP ArcSight
222. SIEM Connector 3—IBM QRadar
223. SIEM Connector 4—Alien Vault USM
Analysis Engine Infrastructure Entities
230. Dashboard/CMS
231. SMTP Server
232. LDAP Server
233. SMS Server
234. Entitlement Server
235. Database Backup Server
236. OTP Client
237. OTP Server
238. Checksum Server
239. Ticketing Server
240. Event Chain Upgrade Server
241. Software Update Server
All User Applications
255. User Applications—Application PID is used to identify the application issuing a query
Another field of the Application Provided Data section is the Message Type field which indicates the type of data being transmitted 755. At the highest level, there are three distinct types of messages that flow between the various local Monitoring Agent side entities, between the Analysis Engine appliance side entities and between Monitoring Agent side and appliance side entities. Furthermore, messages that need to travel over a network must conform to the OSI model and other protocols.
The following field of the Application Provided Data section is the Packet Sequence Number field containing the sequence identifier for the packet 779. The Streaming Engine will perform error recovery on lost packets. For this purpose it needs to identify the packet uniquely. An incrementing signed 64 bit packet sequence number is inserted by the Streaming Engine and simply passes through the remaining Analysis Engine infrastructure. If the sequence number wraps at the 64 bit boundary, it may restart at 0. In the case of non-application packets such as heartbeat or log message etc., the packet sequence number may be −1.
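A small sketch of the sequence-number behavior described above (a signed 64-bit counter that restarts at 0 on wrap, with -1 reserved for non-application packets) follows; the helper name is illustrative.

```python
def next_sequence_number(current):
    """Advance the signed 64-bit packet sequence number, restarting at 0 when
    it wraps at the 64-bit boundary."""
    INT64_MAX = (1 << 63) - 1
    return 0 if current == INT64_MAX else current + 1

# Non-application packets such as heartbeats or log messages may carry -1.
HEARTBEAT_SEQUENCE = -1
```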
The Application Provided Data section also contains the Canary Message field, which contains a canary used for encryption purposes 761. The Monitoring Agent 702 and the Analysis Engine 728 know how to compute the Canary from common information of a fresh nature, such as the Application Launch time, PID, the license string, and an authorized user name.
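A hedged sketch of deriving such a canary from the shared, fresh information listed above follows; SHA-256 is only an illustrative digest choice, not the protocol's specified algorithm.

```python
import hashlib

def compute_canary(launch_time, pid, license_string, user_name):
    """Derive a canary from information both the Monitoring Agent and the
    Analysis Engine already share; the digest and field separator are
    assumptions made for illustration."""
    material = f"{launch_time}|{pid}|{license_string}|{user_name}".encode()
    return hashlib.sha256(material).hexdigest()

print(compute_canary("2015-06-24T12:00:00Z", 4242, "LIC-0001", "analyst"))
```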
The Application Provided Data section additionally contains generic fields that are used in all messages: the Application Source Instruction Address 780, Application Destination Instruction Address 758, Memory Start Address Pointer 759, Memory End Address Pointer 760, Application PID 762, Thread ID 763, Analysis Engine Arrival Timestamp 764, and Analysis Engine Departure Timestamp 765 fields, which hold general application data.
The PDU also contains the HW/CAE Generated section. In order to facilitate analysis and to maintain a fixed time budget, the Analysis Engine hashes the source and destination address fields and updates the PDU prior to processing. The HW/CAE Generated section of the PDU is where the hashed data is placed for later use. This section includes the Hashed Application Source Instruction Address 766, Hashed Application Destination Instruction Address 767, Hashed Memory Start Address 768, and Hashed Memory End Address 769 fields. The HW/CAE Generated section additionally contains other fields related to the Canary 771, including the Hardcoded Content Start Magic Header, API Name Magic Header, Call Context Magic Header, and Call Raw Data Magic Header, which are present in all PDU packets.
The HW/CAE Generated section also includes a field 770 to identify other configuration and error data, which includes Result, Configuration Bits, Operating Mode, Error Code, and Operating Modes data. The Result part of the field is segmented to return Boolean results for the different Analysis Engine queries: the transition playbook, the code layout, the Memory (Stack or Heap) Overrun, and the Deep Inspection queries. The Configuration Bits part of the field indicates when a Compression Flag, Demo Flag, or Co-located Flag is set. The presence of the Compression Flag in this field indicates to the Analysis Engine 728 whether the packet should be returned in compression mode. The Demo Flag indicates that the system is in demo mode because there is no valid license for the system. In this mode, logs and events will not be available in their entirety. The Co-located Flag indicates that the application is being run in the Analysis Engine 728 so that the Host Query Router Engine can determine where to send packets that need to return to the Application. If this flag is set, the packets are sent via the PCI Bridge; otherwise they are sent over the Ethernet interface on the PCI card. The Operating Mode part of the field indicates whether the system is in Paranoid, Monitor, or Learn mode. These modes will be discussed in more detail later in this section. Lastly, the Error Code part of the field indicates an error in the system. The first eight bits of the error code will correspond to the message source. The remaining 12 bits will correspond to the actual error reported by each subsystem.
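A small sketch of packing and unpacking the Error Code layout described above (eight bits of message source, twelve bits of subsystem error) follows; the exact bit ordering is an assumption made for illustration.

```python
def pack_error_code(message_source, subsystem_error):
    """Pack the 20-bit error value: eight bits of message source followed by
    twelve bits of subsystem-reported error (bit ordering assumed)."""
    assert 0 <= message_source < (1 << 8) and 0 <= subsystem_error < (1 << 12)
    return (message_source << 12) | subsystem_error

def unpack_error_code(value):
    """Recover (message_source, subsystem_error) from the packed value."""
    return (value >> 12) & 0xFF, value & 0xFFF

code = pack_error_code(message_source=7, subsystem_error=0x1A3)
print(hex(code), unpack_error_code(code))
```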
The PDU also contains the Content Analysis Engine or Raw Data section. All variable data such as arguments and return values of the OS library calls and System Calls is placed in this section of the PDU. The data in this section contains the content of the data collected from the application and is primarily targeted at the Content Analysis Engine. This section contains the Variable Sized API Name or Number 772, the Call Content Magic Header 777, the Variable Sized Call Content 774, the Call Raw Data Magic Header 778, Variable Sized Raw Data Contents 776, and two reserved 773 and 775 fields. Furthermore, these fields can be overloaded for management messages.
Digital Processing Infrastructure
Client computer(s)/devices 50 and server computer(s) 60 provide processing, storage, and input/output devices executing application programs and the like. The client computer(s)/devices 50 can also be linked through communications network 70 to other computing devices, including other client devices/processes 50 and server computer(s) 60. The communications network 70 can be part of a remote access network, a global network (e.g., the Internet), a worldwide collection of computers, local area or wide area networks, and gateways that currently use respective protocols (TCP/IP, Bluetooth®, etc.) to communicate with one another. Other electronic device/computer network architectures are suitable.
Client computers/devices 50 may be configured with the monitoring agent. Server computers 60 may be configured as the analysis engine which communicates with client devices (i.e., monitoring agent) 50 for detecting database injection attacks. The server computers 60 may not be separate server computers but part of cloud network 70. In some embodiments, the server computer (e.g., analysis engine) may analyze a set of computer routines and identify one or more computer routines of the set having a likelihood of vulnerability. The client (monitoring agent, and/or in some embodiments a validation engine) 50 may communicate a manipulation of the computer routines through a testing technique to the server (analysis engine) 60. In some embodiments, the client 50 may include client applications or components (e.g., instrumentation engine) executing on the client (i.e., monitoring agent, and/or in some embodiments a validation engine) 50 for initiating tests to asynchronously and dynamically manipulate the computer routines and determine unexpected behavior of the computer routines, and the client 50 may communicate this information to the server (e.g., analysis engine) 60.
Embodiments or aspects thereof may be implemented in the form of hardware (including but not limited to hardware circuitry), firmware, or software. If implemented in software, the software may be stored on any non-transient computer readable medium that is configured to enable a processor to load the software or subsets of instructions thereof. The processor then executes the instructions and is configured to operate or cause an apparatus to operate in a manner as described herein.
Some embodiments may transform the behavior and/or data of a set of computer routines by asynchronously and dynamically manipulating at least one of the computer routines through a testing technique. The testing technique may include modification of a value, input parameter, return value, or code body associated with one or more of the computer routines, thereby transforming the behavior (and/or data) of the computer routine.
Some embodiments may provide functional improvements to the quality of computer applications, computer program functionality, and/or computer code by detecting improper handling of error conditions and/or vulnerabilities in the computer applications and/or computer code by way of the testing techniques. Some embodiments may check to see if the control flow of a thread changed as a result of manipulation (e.g., fuzzing), by comparing the control flow extracted with and without the given computer routine being attacked (through an attack vector). Some embodiments may deploy a code path to correct and/or replace the computer routine to avoid the unexpected and/or incorrect behavior. As such, some embodiments may detect and correct computer code functionality, thereby providing a substantial functional improvement.
Some embodiments solve a technical problem (thereby providing a technical effect) of robustness of basic functionality of software and its error handling functionality. Some embodiments also solve a technical problem of exercising code paths that are too hard to reach in other test suites (thereby providing a technical effect). Some embodiments also provide a display to users in order to report the status of testing, thereby improving the efficiency of testing and thereby also solving a technical problem of lack of efficiency in testing (and thereby also providing a technical effect).
Further, hardware, firmware, software, routines, or instructions may be described herein as performing certain actions and/or functions of the data processors. However, it should be appreciated that such descriptions contained herein are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc.
It should be understood that the flow diagrams, block diagrams, and network diagrams may include more or fewer elements, be arranged differently, or be represented differently. But it further should be understood that certain implementations may dictate the block and network diagrams and the number of block and network diagrams illustrating the execution of the embodiments be implemented in a particular way.
Accordingly, further embodiments may also be implemented in a variety of computer architectures, physical, virtual, cloud computers, and/or some combination thereof, and, thus, the data processors described herein are intended for purposes of illustration only and not as a limitation of the embodiments.
While this disclosure has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the disclosure encompassed by the appended claims.
This application is a continuation of U.S. application Ser. No. 15/318,429, filed Dec. 13, 2016, which is the U.S. National Stage of International Application No. PCT/US2015/037471, filed Jun. 24, 2015, which designates the U.S., published in English, and claims the benefit of U.S. Provisional Application No. 61/998,318, filed on Jun. 24, 2014. The entire teachings of the above applications are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4080650 | Beckett | Mar 1978 | A |
4215406 | Gomola et al. | Jul 1980 | A |
4466077 | Iannucci et al. | Aug 1984 | A |
4672534 | Kamiya | Jun 1987 | A |
4751667 | Ross | Jun 1988 | A |
4803720 | Newell et al. | Feb 1989 | A |
5161193 | Lampson et al. | Nov 1992 | A |
5222220 | Mehta | Jun 1993 | A |
5224160 | Paulini et al. | Jun 1993 | A |
5235551 | Sinofsky et al. | Aug 1993 | A |
5297274 | Jackson | Mar 1994 | A |
5321828 | Phillips et al. | Jun 1994 | A |
5359659 | Rosenthal | Oct 1994 | A |
5390309 | Onodera | Feb 1995 | A |
5440723 | Arnold et al. | Aug 1995 | A |
5611043 | Even et al. | Mar 1997 | A |
5684948 | Johnson et al. | Nov 1997 | A |
5784552 | Bishop et al. | Jul 1998 | A |
5826012 | Lettvin | Oct 1998 | A |
5829039 | Sugino et al. | Oct 1998 | A |
5850559 | Angelo et al. | Dec 1998 | A |
5873124 | Draves | Feb 1999 | A |
5890005 | Lindholm | Mar 1999 | A |
5909580 | Crelier et al. | Jun 1999 | A |
5933594 | La Joie et al. | Aug 1999 | A |
5978584 | Nishibata et al. | Nov 1999 | A |
5983348 | Ji | Nov 1999 | A |
6077312 | Bates et al. | Jun 2000 | A |
6119206 | Tatkar et al. | Sep 2000 | A |
6134652 | Warren | Oct 2000 | A |
6151618 | Wahbe et al. | Nov 2000 | A |
6178522 | Zhou et al. | Jan 2001 | B1 |
6237137 | Beelitz | May 2001 | B1 |
6240501 | Hagersten | May 2001 | B1 |
6263489 | Olsen et al. | Jul 2001 | B1 |
6275893 | Bonola | Aug 2001 | B1 |
6516408 | Abiko et al. | Feb 2003 | B1 |
6553429 | Wentz et al. | Apr 2003 | B1 |
6665316 | Eidson | Dec 2003 | B1 |
6775780 | Muttik | Aug 2004 | B1 |
6782478 | Probert | Aug 2004 | B1 |
6832302 | Fetzer et al. | Dec 2004 | B1 |
6895508 | Swanberg et al. | May 2005 | B1 |
6948091 | Bartels et al. | Sep 2005 | B2 |
6973577 | Kouznetsov | Dec 2005 | B1 |
6981176 | Fruehling et al. | Dec 2005 | B2 |
7181585 | Abrashkevich et al. | Feb 2007 | B2 |
7257763 | Srinivasan et al. | Aug 2007 | B1 |
7260845 | Kedma et al. | Aug 2007 | B2 |
7272748 | Conover et al. | Sep 2007 | B1 |
7281225 | Jain et al. | Oct 2007 | B2 |
7284276 | Conover et al. | Oct 2007 | B2 |
7328323 | Conover | Feb 2008 | B1 |
7380245 | Lovette | May 2008 | B1 |
7383166 | Ashar et al. | Jun 2008 | B2 |
7386839 | Golender et al. | Jun 2008 | B1 |
7453910 | Biberstein et al. | Nov 2008 | B1 |
7480919 | Bray et al. | Jan 2009 | B2 |
7484239 | Tester et al. | Jan 2009 | B1 |
7490268 | Keromytis et al. | Feb 2009 | B2 |
7526654 | Charbonneau | Apr 2009 | B2 |
7526755 | DeLine et al. | Apr 2009 | B2 |
7539875 | Manferdelli et al. | May 2009 | B1 |
7555747 | Agesen | Jun 2009 | B1 |
7603704 | Bruening et al. | Oct 2009 | B2 |
7603715 | Costa et al. | Oct 2009 | B2 |
7613954 | Grey et al. | Nov 2009 | B2 |
7634812 | Costa et al. | Dec 2009 | B2 |
7644440 | Sinha et al. | Jan 2010 | B2 |
7730305 | Eun et al. | Jun 2010 | B2 |
7747725 | Williams et al. | Jun 2010 | B2 |
7853803 | Milliken | Dec 2010 | B2 |
7895651 | Brennan | Feb 2011 | B2 |
7971044 | Dieffenderfer et al. | Jun 2011 | B2 |
7971255 | Kc et al. | Jun 2011 | B1 |
8042180 | Gassoway | Oct 2011 | B2 |
8151117 | Hicks | Apr 2012 | B2 |
8261326 | Ben-Natan | Sep 2012 | B2 |
8307191 | Jain | Nov 2012 | B1 |
8336102 | Neystadt et al. | Dec 2012 | B2 |
8353040 | Tahan et al. | Jan 2013 | B2 |
8407523 | Stewart et al. | Mar 2013 | B2 |
8510596 | Gupta et al. | Aug 2013 | B1 |
8954738 | Asokan et al. | Feb 2015 | B2 |
8958546 | Probert | Feb 2015 | B2 |
8966312 | Gupta et al. | Feb 2015 | B1 |
9230455 | Probert | Jan 2016 | B2 |
9418227 | Franklin | Aug 2016 | B2 |
9516053 | Muddu et al. | Dec 2016 | B1 |
10079841 | Gupta et al. | Sep 2018 | B2 |
10114726 | Gupta | Oct 2018 | B2 |
10331888 | Gupta et al. | Jun 2019 | B1 |
10354074 | Gupta | Jul 2019 | B2 |
20010013094 | Etoh et al. | Aug 2001 | A1 |
20010023479 | Kimura et al. | Sep 2001 | A1 |
20010033657 | Lipton et al. | Oct 2001 | A1 |
20010047510 | Angel et al. | Nov 2001 | A1 |
20020129226 | Eisen et al. | Sep 2002 | A1 |
20020138554 | Feigen et al. | Sep 2002 | A1 |
20020169999 | Bhansali et al. | Nov 2002 | A1 |
20030014667 | Kolichtchak | Jan 2003 | A1 |
20030023865 | Cowie et al. | Jan 2003 | A1 |
20030028755 | Ohsawa et al. | Feb 2003 | A1 |
20030033498 | Borman et al. | Feb 2003 | A1 |
20030041290 | Peleska | Feb 2003 | A1 |
20030079158 | Tower et al. | Apr 2003 | A1 |
20030120884 | Koob et al. | Jun 2003 | A1 |
20030120885 | Bonola | Jun 2003 | A1 |
20030145253 | de Bonet | Jul 2003 | A1 |
20030188160 | Sunder et al. | Oct 2003 | A1 |
20030188174 | Zisowski | Oct 2003 | A1 |
20030191940 | Sinha et al. | Oct 2003 | A1 |
20030021725 | Wyatt | Nov 2003 | A1 |
20030212913 | Vella | Nov 2003 | A1 |
20030217277 | Narayanan | Nov 2003 | A1 |
20040049660 | Jeppesen et al. | Mar 2004 | A1 |
20040103252 | Lee et al. | May 2004 | A1 |
20040117682 | Xu | Jun 2004 | A1 |
20040120173 | Regev et al. | Jun 2004 | A1 |
20040133777 | Kiriansky et al. | Jul 2004 | A1 |
20040157639 | Morris et al. | Aug 2004 | A1 |
20040162861 | Detlefs | Aug 2004 | A1 |
20040168078 | Brodley et al. | Aug 2004 | A1 |
20040215755 | O'Neill | Oct 2004 | A1 |
20040221120 | Abrashkevich et al. | Nov 2004 | A1 |
20040268095 | Shpeisman et al. | Dec 2004 | A1 |
20040268319 | Tousignant | Dec 2004 | A1 |
20050010804 | Bruening et al. | Jan 2005 | A1 |
20050022153 | Hwang | Jan 2005 | A1 |
20050028048 | New et al. | Feb 2005 | A1 |
20050033980 | Willman et al. | Feb 2005 | A1 |
20050039178 | Marolia et al. | Feb 2005 | A1 |
20050055399 | Savchuk | Mar 2005 | A1 |
20050071633 | Rothstein | Mar 2005 | A1 |
20050086502 | Rayes et al. | Apr 2005 | A1 |
20050108562 | Khazan et al. | May 2005 | A1 |
20050132179 | Glaum et al. | Jun 2005 | A1 |
20050138409 | Sheriff et al. | Jun 2005 | A1 |
20050144471 | Shupak et al. | Jun 2005 | A1 |
20050144532 | Dombrowa et al. | Jun 2005 | A1 |
20050172115 | Bodorin et al. | Aug 2005 | A1 |
20050195748 | Sanchez | Sep 2005 | A1 |
20050210275 | Homing et al. | Sep 2005 | A1 |
20050223238 | Schmid et al. | Oct 2005 | A1 |
20050246522 | Samuelsson et al. | Nov 2005 | A1 |
20050273854 | Chess et al. | Dec 2005 | A1 |
20050283601 | Tahan | Dec 2005 | A1 |
20050283835 | Lalonde et al. | Dec 2005 | A1 |
20050289527 | Illowsky et al. | Dec 2005 | A1 |
20060002093 | Wyatt | Jan 2006 | A1 |
20060002103 | Conti et al. | Jan 2006 | A1 |
20060002385 | Johnsen et al. | Jan 2006 | A1 |
20060026311 | Nicolai et al. | Feb 2006 | A1 |
20060075274 | Zimmer et al. | Apr 2006 | A1 |
20060095895 | K. | May 2006 | A1 |
20060126799 | Burk | Jun 2006 | A1 |
20060143707 | Song et al. | Jun 2006 | A1 |
20060155905 | Leino et al. | Jul 2006 | A1 |
20060161583 | Burka et al. | Jul 2006 | A1 |
20060195745 | Keromytis et al. | Aug 2006 | A1 |
20060212837 | Prasad | Sep 2006 | A1 |
20060242703 | Abeni | Oct 2006 | A1 |
20060026543 | Shankar et al. | Nov 2006 | A1 |
20060245588 | Hatakeyama | Nov 2006 | A1 |
20060248519 | Jaeger et al. | Nov 2006 | A1 |
20060271725 | Wong | Nov 2006 | A1 |
20060282891 | Pasko | Dec 2006 | A1 |
20070011686 | Ben-Zvi | Jan 2007 | A1 |
20070016953 | Morris et al. | Jan 2007 | A1 |
20070027815 | Sobel et al. | Feb 2007 | A1 |
20070050848 | Khalid | Mar 2007 | A1 |
20070067359 | Barrs et al. | Mar 2007 | A1 |
20070118646 | Gassoway | May 2007 | A1 |
20070136455 | Lee et al. | Jun 2007 | A1 |
20070157003 | Durham et al. | Jul 2007 | A1 |
20070169075 | Lill et al. | Jul 2007 | A1 |
20070174549 | Gyl et al. | Jul 2007 | A1 |
20070174703 | Gritter et al. | Jul 2007 | A1 |
20070192854 | Kelley et al. | Aug 2007 | A1 |
20070274311 | Yang | Nov 2007 | A1 |
20080016339 | Shukla | Jan 2008 | A1 |
20080215925 | Degenaro et al. | Sep 2008 | A1 |
20080250496 | Namihira | Oct 2008 | A1 |
20080263505 | StClair | Oct 2008 | A1 |
20080301647 | Neystadt et al. | Dec 2008 | A1 |
20090144698 | Fanning | Jun 2009 | A1 |
20090144898 | Wu | Jun 2009 | A1 |
20090158075 | Biberstein et al. | Jun 2009 | A1 |
20090217377 | Arbaugh et al. | Aug 2009 | A1 |
20090290226 | Asakura et al. | Nov 2009 | A1 |
20100005531 | Largman et al. | Jan 2010 | A1 |
20100064111 | Kunimatsu et al. | Mar 2010 | A1 |
20100153785 | Keromytis | Jun 2010 | A1 |
20100028753 | Kim et al. | Nov 2010 | A1 |
20120166878 | Sinha et al. | Jun 2012 | A1 |
20120192053 | Waltenberger | Jul 2012 | A1 |
20120239857 | Jibbe et al. | Sep 2012 | A1 |
20120284697 | Choppakatla et al. | Nov 2012 | A1 |
20130086020 | Addala | Apr 2013 | A1 |
20130086550 | Epstein | Apr 2013 | A1 |
20130111547 | Kraemer | May 2013 | A1 |
20130239215 | Kaufman | Sep 2013 | A1 |
20130250965 | Yakan | Sep 2013 | A1 |
20130333040 | Diehl et al. | Dec 2013 | A1 |
20140047282 | Deb et al. | Feb 2014 | A1 |
20140108803 | Probert | Apr 2014 | A1 |
20140237599 | Gertner et al. | Aug 2014 | A1 |
20140304815 | Maeda | Oct 2014 | A1 |
20140337639 | Probert | Nov 2014 | A1 |
20150039717 | Chiu et al. | Feb 2015 | A1 |
20150163242 | Laidlaw et al. | Jun 2015 | A1 |
20160094349 | Probert | Mar 2016 | A1 |
20160212159 | Gupta et al. | Jul 2016 | A1 |
20170063886 | Muddu et al. | Mar 2017 | A1 |
20170063888 | Muddu et al. | Mar 2017 | A1 |
20170123957 | Gupta | May 2017 | A1 |
20170132419 | Gupta | May 2017 | A1 |
20180324195 | Gupta et al. | Nov 2018 | A1 |
20190138725 | Gupta | May 2019 | A1 |
Number | Date | Country |
---|---|---|
101154258 | Apr 2008 | CN |
102012987 | Apr 2011 | CN |
1 085 418 | Mar 2001 | EP |
1 703 395 | Sep 2006 | EP |
1758021 | Feb 2007 | EP |
2003330736 | Nov 2003 | JP |
2004287810 | Oct 2004 | JP |
2005258498 | Sep 2005 | JP |
2005276185 | Oct 2005 | JP |
2006053760 | Feb 2006 | JP |
2008-129714 | Jun 2008 | JP |
2009031859 | Feb 2009 | JP |
2010257150 | Nov 2010 | JP |
2011059930 | Mar 2011 | JP |
2011198022 | Oct 2011 | JP |
2014531647 | Nov 2014 | JP |
WO 2010067703 | Jun 2010 | WO |
2014021190 | Feb 2014 | WO |
WO 201503 8944 | Mar 2015 | WO |
2015200046 | Dec 2015 | WO |
WO 2015200508 | Dec 2015 | WO |
WO 2015200511 | Dec 2015 | WO |
WO 2017218872 | Dec 2017 | WO |
Entry |
---|
“Multi-tier Application Database Debugging,” Jun. 2011. Visual Studio 2005. Retrieved from Internet: Dec. 5, 2016. http://msdn.microsoft.com/en-us/library/ms165059(v=vs.80).aspx. |
“Software Instrumentation,” edited by Wah, B., Wiley Encyclopedia of Computer Science and Engineer, Wiley, pp. 1-11, XP007912827, Jan. 1, 2008. |
“Troubleshooting n-tier.” Apr. 2011. Retrieved from Internet: http://drc.ideablade.com/xwiki/bin/view/Documentation/deploy-troubleshooting-ntier. |
Aarniala, J., “Instrumenting Java bytecode,” Seminar work for the Compilers-course, Department of Computer Science University of Helsinki, Finland (Spring 2005). |
Ashcraft, K. and Engler, D., “Using Programmer-Written Compiler Extensions to Catch Security Holes,” Slides presented at the Proceedings of the IEEE Symposium on Security and Privacy, Berkeley, CA, pp. 1-14, (May 2002). |
Austin, T., et al., “Efficient Detection of All Pointer and Array Access Errors,” Proceedings of the ACM SIGPLAN 94 Conference on Programming Language Design and Implementation, Orlando, FL, 12 pages (Jun. 1994). |
Baratloo, A., et al., “Transparent Run-Time Defense Against Stack Smashing Attacks,” Proceedings of the USENIX 2000 Annual Technical Conference, San Diego, CA, 12 pages (Jun. 2000). |
Barrantes, E., et al., “Randomized Instruction Set Emulation to Distrupt Binary Code Injection Attacks,” Proceedings of the 10th Annual ACM Conference on Computer and Communications Security, Washington, DC, 10 pages (Oct. 2003). |
Berger, E. and Zorn, B., “Diehard: Probabilistic Memory Safety for Unsafe Languages,” Proceedings of the Programming Language Design and Implementation (PLDI), 11 pages (Jun. 2006). |
Bernat, A.R et al., “Anywhere, Any-Time Binary Instrumentation,” Proceedings of the 10th ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools (PASTE), ACM, Szeged, Hungary (Sep. 2011). |
Bhatkar, S., et al., Address Obfuscation: An Efficient Approach to Combat a Broad Range of Memory Error Exploits, Proceedings of the 12th USENIX Security Symposium, Washington, DC, 16 pages (Aug. 2003). |
Buck, B., et al., “An API For Runtime Code Patching,” Jan. 1, 2000, vol. 14, No. 4, pp. 317-329, XP008079534, p. 317, Jan. 1, 2000. |
Bush, W., et al., “A Static Analyzer for Finding Dynamic Programming Errors,” Software: Practice and Experience, 30(7): 775-802 (2000). |
Chew, M. and Song, D., “Mitigating Buffer Overflows by Operating System Randomization,” (Report No. CMU-CS-02-197), Carnegie Mellon University, 11 pages (Dec. 2002). |
Chiueh, T. and Hsu, F., “RAD: A Compile-Time Solution to Buffer Overflow Attacks,” Proceedings of the 21st International Conference on Distributed Computing Systems, Phoenix, AZ, 20 pages (Apr. 2001). |
Cowan, C., et al., “FormatGuard: Automatic Protection from Printf Format String Vulnerabilities,” Proceedings of the 10th USENIX Security Symposium, Washington, DC, 9 pages (Aug. 2001). |
Cowan, C., et al., “PointGuardTM: Protecting Pointers From Buffer Overflow Vulnerabilities,” Proceedings of the 12th USENIX Security Symposium, Washington, DC, 15 pages (Aug. 2003). |
Cowan, C., et al., “Protecting Systems from Stack Smashing Attacks with StackGuard,” Linux Expo, Raleigh, NC, 11 pages (May 1999). |
Cowan, C., et al., “Stackguard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks,” Proceedings of the 7th USENIX Security Conference, San Antonio, TX, 16 pages (Jan. 1998). |
Dhurjati, D., et al., “Memory Safety Without Runtime Checks or Garbage Collection,” Proceedings of the 2003 ACM SIGPLAN Conference on Language, Compiler, and Tool Support for Embedded Systems, San Diego, CA, 12 pages (Jun. 2003). |
Dor, S., et al., “Cleanness Checking of String Manipulation in C Programs via Integer Analysis,” Proceedings of the 8th International Static Analysis Symposium, Paris, France, Springer LNCS 2126:194-212 (2002). |
Engler, D., et al., “Checking System Rules Using System-Specific, Programmer-Written Compiler Extensions,” Stanford University, 16 pages (Oct. 2000). |
Erlingsson, U. and Schneider, F., “SASI Enforcement of Security Policies: A Retrospective,” Proceedings of the New Security Paradigm Workshop, Caledon Hills, Ontario, Canada, (Sep. 1999). |
Etoh, H. and Yoda, K., “Protecting from Stack-Smashing Attacks,” IBM Research Division, Tokyo Research Laboratory, Jun. 2000, www.trl.ibm.com, 23 pages, retrieved from Internet Nov. 6, 2007. |
Evans, D. and Larochelle, D., “Improving Security Using Extensible Lightweight Static Analysis,” IEEE Software, 19(1):43-51 (Jan.-Feb. 2002). |
Evans, D., “Policy-Directed Code Safety,” Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 135 pages, Oct. 1999. |
Evans, D., “Policy-Directed Code Safety,” Proceedings of the IEEE Symposium on Security and Privacy, Oakland, CA, (May 1999). |
Feng, H., et al., “Anomaly Detection using Call Stack Information,” IEEE Security and Privacy, Oakland, CA, 14 pages (May 2003). |
Fink, G. and Bishop, M., “Property-Based Testing: A New Approach to Testing for Assurance,” ACM SIGSOFT Software Engineering Notes, 22(4): 74-80 (Jul. 1997). |
Forrest, S., et al., “A Sense of Self for Unix Processes,” Proceedings of the IEEE Symposium on Security and Privacy, Oakland, CA, 9 pages (May 1996). |
Foster, J., et al., “A Theory of Type Qualifiers,” Proceedings of the 1999 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), Atlanta, GA, 12 pages (May 1999). |
Frantzen, M. and Shuey, M., “StackGhost: Hardware Facilitated Stack Protection,” Proceedings of the 10th USENIX Security Symposium, Washington, DC, 11 pages (Aug. 2001). |
Ganapathy, V., et al., “Buffer Overrun Detection using Linear Programming and Static Analysis,” Proceedings of the 10th ACM Conference on Computer and Communication Security, Washington, D.C., 10 pages (Oct. 2003). |
Gaurav, S., et al., “Countering Code-Injection Attacks With Instruction-Set Randomization,” Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS2003), Washington, DC, 9 pages (Oct. 2003). |
Ghosh, A.K. and O'Connor, T., “Analyzing Programs for Vulnerability to Buffer Overrun Attacks,” Proceedings of the 21st NIST-NCSC National Information Systems Security Conference, 9 pages (Oct. 1998). |
Goldberg, I., et al., “A Secure Environment for Untrusted Helper Applications,” Proceedings of the 6th USENIX Security Symposium, San Jose, CA, 13 pages (Jul. 1996). |
Gravelle, R., “Debugging JavaScript: Beyond Alerts.” Retrieved from Internet: Jun. 2009. http://www.webreference.com/programming/javascript/rg34/index.html. |
Grimes, R., “Preventing Buffer Overruns in C++,” Dr Dobb's Journal: Software Tools for the Professional Programmer, 29(1): 49-52 (Jan. 2004). |
Hastings, R. and Joyce, B., “Purify: Fast Detection of Memory Leaks and Access Errors,” Proceedings of the Winter 92 USENIX Conference, San Francisco, CA, 10 pages (Jan. 1992). |
Haugh, E. and Bishop, M., “Testing C Programs for Buffer Overflow Vulnerabilities,” Proceedings of the 10th Network and Distributed System Security Symposium (NDSS03), San Diego, CA, 8 pages (Feb. 2003). |
Howard, M., “Protecting against Pointer Subterfuge (Kinda!),” Jan. 2006, 4 pages, [retrieved from Internet Feb. 26, 2016] http://blogs.msdn.com/b/michael_howard/archive/2006/01/30/520200.aspx. |
http://bochs.sourceforge.net, The Open Source IA-32, 59 pages, retrieved from Internet Nov. 15, 2007. |
Hunt, G. and Brubacher, D., “Detours: Binary Interception of Win32 Functions,” Jul. 1999, 9 pages, Retrieved from the Internet: https://www.microsoft.com/en-us/research/publication/detours-binary-interception-of-win32-functions/. |
Jim, T., et al., “Cyclone: A safe dialect of C,” Proceedings of the USENIX Annual Technical Conference, Monterey, CA, 14 pages (Jun. 2002). |
Jones, Richard W. M. and Kelly, Paul H. J., “Backwards-Compatible Bounds Checking For Arrays and Pointers in C Programs,” Proceedings of the 3rd International Workshop on Automatic Debugging, Linkoping, Sweden, 29 pages (May 1997). |
Kendall, Samuel C., “Bcc: Runtime Checking for C Programs,” Proceedings of the USENIX Summer 1983 Conference, Toronto, Ontario, Canada, 14 pages, (Jul. 1983). |
Kiriansky, V., et al., “Secure Execution Via Program Shepherding,” Proceedings of the 11th USENIX Security Symposium, San Francisco, CA, 16 pages (Aug. 2002). |
Krennmair, A., “ContraPolice: a libc Extension for Protecting Applications from Heap-Smashing Attacks,” www.synflood.at/contrapolice, 5 pages, retrieved from Internet, Nov. 28, 2003. |
Lamport, Leslie, “Time, Clocks, and the Ordering of Events in a Distributed System,” Jul. 1978, Retrieved from the Internet: http://research.microsoft.com/en-us/um, pp. 558-565. |
Larochelle, D. and Evans, D., “Statically Detecting Likely Buffer Overflow Vulnerabilities,” 2001 USENIX Security Symposium, Washington, D. C., 13 pages (Aug. 2001). |
Larson, E. and Austin, T., “High Coverage Detection of Input-Related Security Faults,” Proceedings of the 12th USENIX Security Symposium, Washington, District of Columbia, U.S.A, 16 pages (Aug. 2003). |
Larus, J. R., et al., “Righting Software,” IEEE Software, 21(3): 92-100 (May 2004). |
Lee, R. B., et al., “Enlisting Hardware Architecture to Thwart Malicious Code Injection,” First International Conference on Security in Pervasive Computing, LNCS vol. 2802, pp. 237-252, (Mar. 2003). |
Lhee, K. and Chapin, S., “Buffer Overflow and Format String Overflow Vulnerabilities,” Software-Practice and Experience, 33(5): 423-460 (Apr. 2003). |
Lhee, K. and Chapin, S., “Type-Assisted Dynamic Buffer Overflow Detection,” Proceedings of the 11th USENIX Security Symposium, San Francisco, CA, 9 pages (Aug. 2002). |
Lyashko, A., “Hijack Linux System Calls: Part II. Miscellaneous Character Drivers,” Oct. 2011, 6 pages [retrieved from Internet Feb. 26, 2016] http://syprog.blogspot.com/2011/10/hijack-linux-system-calls-part-ii.html. |
Messier, M. and Viega, J., “Safe C String Library V1.0.3.,” www.zork.org/safestr, 33 pages, retrieved from Internet, Nov. 2003. |
Necula, G., et al., “CCured: Type-Safe Retrofitting of Legacy Code,” 29th SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), Portland, OR, pp. 128-139 (Jan. 2002). |
Nergal, “The advanced return-into-libc exploits, PaX Case Study,” Phrack Magazine, 58(4), 30 pages (Dec. 2001). |
Oiwa, Y, et al., “Fail-Safe ANSI-C Compiler: An Approach to Making C Programs Secure,” Proceedings of the International Symposium on Software Security, Tokyo, Japan, 21 pages (Nov. 2002). |
Ozdoganoglu, H., et al., “SmashGuard: A Hardware Solution to Prevent Security Attacks on the Function Return Address,” (Report No. TR-ECE 03-13), Purdue University, 37 pages (Nov. 2003). |
Perens, Electric Fence Malloc Debugger, www.perens.com/FreeSoftware, 10 pages, (Mar. 20, 2006). |
Phrack Magazine, “The Frame Pointer Overwrite,” 55(9): 1-9 (Sep. 1999). |
Prasad, M. and Chiueh, T., “A Binary Rewriting Defense against Stack-Based Buffer Overflow Attacks,” USENIX Technical Conference, 14 pages (Jun. 2003). |
Prevelakis, V. and Spinellis, D., “Sandboxing Applications” Proceedings of the 2001 USENIX Annual Technical Conference (FREENIX Track), Boston, MA, 8 pages (Jun. 2001). |
Provos, N., “Improving Host Security with System Call Policies,” Proceedings of the 12th USENIX Security Symposium, Washington, DC, 15 pages (Aug. 2003). |
Pyo, Changwoo and Lee, Gyungho, “Encoding Function Pointers and Memory Arrangement Checking Against Buffer Overflow Attack,” 4th International Conference Information and Communications Security (ICICS), pp. 25-36 (Dec. 2002). |
RATS website. Secure Software Inc., http://www.securesw.com/downloadrats.htm, retrieved from Internet 2009. |
Robbins, Tim, “How to Stop a Hacker . . .”, Feb. 2001, 2 pages, Retrieved from Internet: http://box3n.gumbynet.org. |
Robertson, W., “Run-time Detection of Heap-based Overflows,” Proceedings of the 17th Large Installation Systems Administrators Conference, San Diego, CA, 15 pages (Oct. 2003). |
Rugina, R. and Rinard, M., “Symbolic Bounds Analysis of Pointers, Array Indices, and Accessed Memory Regions,” Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation, Vancouver, BC, Canada, 14 pages (Jun. 2000). |
Ruwase, O. and Lam, M.S., “A Practical Dynamic Buffer Overflow Detector,” Proceedings of the 11th Annual Network and Distributed System Security Symposium, San Diego, CA, 11 pages (Feb. 2004). |
Schneider, F. B., “Enforceable Security Policies,” ACM Transactions on Information and System Security, 3(1): 30-50 (Feb. 2000). |
Sekar, R., et al., “A Fast Automaton-Based Method for Detecting Anomalous Program Behaviors,” Proceedings of the IEEE Symposium on Security and Privacy, Oakland, CA, 12 pages (May 2001). |
Simon, A. and King, A., “Analyzing String Buffers in C,” In Proc. Intl. Conference on Algebraic Methodology and Software Technology, LNCS 2422: 365-380 (Sep. 2002). |
Simon, I., “A Comparative Analysis of Methods of Defense against Buffer Overflow Attacks,” Technical report, California State Univ, 2001. [http://www.mcs.csuhayward.edu/˜simon/security/boflo.html], 11 pages (Jan. 2001). |
Small, C., “A Tool For Constructing Safe Extensible C++ Systems,” 3rd USENIX Conference on Object-Oriented Technologies, Portland, OR, pp. 175-184 (Jun. 1997). |
Snarskii, Alexander, FreeBSD libc stack integrity patch, ftp://ftp.lucky.net/pub/unix/local/libc-letter, 5 pages (Feb. 1997). |
Steffen, J. L., “Adding Run-Time Checking to the Portable C Compiler,” Software: Practice and Experience, 22(4): 305-316 (Apr. 1992). |
Suffield, A., “Bounds Checking for C and C++,” Technical Report, Imperial College, 55 pages (2003). |
Tanenbaum, A.S., “Distributed Operating Systems,” Prentice Hall, (1995), Table of Contents only, 4 pages, Published Date: Aug. 25, 1994. |
The NX Bit. Wikipedia article, www.wikipedia.org/wiki/NXbit, 9 pages, retrieved from Internet Feb. 3, 2009. |
The PaX project. PowerPoint presentation, Nov. 9, 2000, 13 pages, Retrieved from Internet: http://pageexec.virtualave.net. |
Vendicator, Stack Shield, “A ‘stack smashing’ technique protection tool for Linux,” http://www.angelfire.com/sk/stackshield, 1 page, (Jan. 2000) (retrieved from Internet Feb. 2010). |
Viega, J., et al., “ITS4: A Static Vulnerability Scanner for C and C++ Code,” Proceedings of the 16th Annual Computer Security Applications Conference, New Orleans, LA, 11 pages (Dec. 2000). |
VMware Server 2, Product Datasheet; VMware Virtual Server, http://www.vmware.com; retrieved from Internet, 2 pages, Feb. 3, 2010. |
Wagner, D. and Dean, D., “Intrusion Detection via Static Analysis,” IEEE Symposium on Security and Privacy, Oakland, CA, pp. 156-168 (May 2001). |
Wagner, D., et al., “A First Step Towards Automated Detection of Buffer Overrun Vulnerabilities,” Proceedings of the Networking and Distributed System Security Symposium, San Diego, CA, 15 pages (Feb. 2000). |
Wahbe, R., “Efficient Software-Based Fault Isolation,” Proceedings of the 14th ACM Symposium on Operating System Principles, Asheville, NC, 14 pages (Dec. 1993). |
Wheeler, David, Flawfinder website, retrieved from Internet: https://www.dwheeler.com/flawfinder/, 11 pages, (Jun. 2001). |
Wojtczuk, R., “Defeating Solar Designer's Non-executable Stack Patch,” http://www.openwall.com, 11 pages (Jan. 1998). |
www.cert.org, Computer Emergency Response Team (CERT), 2 pages, retrieved from Internet Feb. 3, 2009. |
www.metasploit.org, “Metasploit Projects,” 3 pages, retrieved from Internet Feb. 3, 2009. |
X86 Assembly Guide, University of Virginia Computer Science CS216: Program and Data Representation, 17 pages, Spring 2006 [retrieved from Internet Feb. 26, 2016] http://www.cs.virginia.edu/˜evans/cs216/guides/x86.html. |
Xie, Y., et al., “ARCHER: Using Symbolic, Path-sensitive Analysis to Detect Memory Access Errors,” Proceedings of the 9th European Software Engineering Conference, Helsinki, Finland, 14 pages (Sep. 2003). |
Xu, J., et al., “Architecture Support for Defending Against Buffer Overflow Attacks,” Proceedings of the Second Workshop on Evaluating and Architecting System dependability, San Jose, CA, 8 pages (Oct. 2002). |
Xu, J., et al., “Transparent Runtime Randomization for Security,” Proceedings of the 22nd International Symposium on Reliable Distributed Systems (SRDS'03), Florence, Italy, 10 pages (Oct. 2003). |
Yong, Suan Hsi and Horwitz, Susan, “Protecting C Programs from Attacks Via Invalid Pointer Dereferences,” Proceedings of the 9th European Software Engineering Conference, 10 pages (Sep. 2003). |
Zhu, G. and Tyagi, Z., “Protection Against Indirect Overflow Attacks on Pointers,” Second Intl. Workshop on Information Assurance Workshop, pp. 97-106 (Apr. 2004). |
International Search Report, dated Sep. 28, 2015, for International Application No. PCT/US2015/037471, entitled “System and Methods For Automated Detection of Input and Output Validation and Resource Management Vulnerability,” consisting of 4 pages. |
Written Opinion, dated Sep. 28, 2015, for International Application No. PCT/US2015/037471, entitled “System and Methods For Automated Detection of Input and Output Validation and Resource Management Vulnerability,” consisting of 7 pages. |
International Preliminary Report on Patentability, dated Dec. 27, 2016, for International Application No. PCT/US2015/037471, entitled “System and Methods For Automated Detection of Input and Output Validation and Resource Management Vulnerability,” consisting of 9 pages. |
Number | Date | Country | |
---|---|---|---|
20200042714 A1 | Feb 2020 | US |
Number | Date | Country | |
---|---|---|---|
61998318 | Jun 2014 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15318429 | US | |
Child | 16513498 | US |