The field of invention pertains generally to computing systems and, more specifically, to a framework for efficient security coverage of mobile software applications using machine learning.
With the emergence of mobile and/or handheld computing, e.g., as embodied by the prevalence of tablet computers and smart phones, the security of the application software that runs on these devices has become a matter of concern. The concern is becoming particularly acute as more powerful mobile platforms support more capable and important application software. With increased capability and importance, the applications and underlying platforms handle more sensitive information, more intensively.
The invention will be more fully understood with reference to the following detailed description in conjunction with the drawings, of which:
FIGS. 11a and 11b pertain to different use cases of the framework of FIG. 1.
The central intelligence engine 103 controls the testing strategy of an application under test 108 that is executing within the dynamic runtime environment 102. Essentially, the central intelligence engine 103 identifies “regions of interest” within the code of the application 108, determines specific stimuli to reach these regions of interest, causes such stimuli to be applied to the application 108, monitors the behavior of the application 108 in response to these stimuli and determines whether the application is “safe” or “unsafe” in view of its observed behavior.
As observed in FIG. 1, the central intelligence engine 103 includes a behavior and logic engine 103_1 and an explorer engine 103_2.
The explorer engine 103_2 assists the behavior and logic engine by “studying” the internal structure and operation of the application 108, looking for and identifying “regions of interest” within the application code (i.e., portions of the code that correspond to unsafe operations as opposed to benign/safe operations). Besides being notified of the possibility that certain regions of interest may exist in the application based on the behavior and logic engine's 103_1 observations of the application's behavior, the explorer engine 103_2 may also look for certain kinds of “regions of interest” based on one or more rules provided by the user, one or more rules gleaned from a machine learning platform and/or one or more “hardcoded” rules. Subsequent applied stimuli and observations of the application 108 are focused on the identified regions of interest. By focusing the stimuli and observations on the regions of interest, the overall testing and characterization of the application is more streamlined and efficient because execution of large portions of benign application code is largely avoided.
According to one approach, the behavior and logic engine 103_1 is implemented as an inference engine. As is understood in the art, an inference engine recursively collects information (“facts”) describing the current state of a system under observation and matches them against applicable “rules” for the system. “Acts” are defined for specific patterns/sets of facts that match the applicable rules. The set of available acts for a particular set of matching facts and rules corresponds to an “agenda”. The engine then performs “conflict resolution”, which is the process of determining which acts from the “agenda” should be performed (the “conflict set”) and in what order they should be performed. After conflict resolution, the selected acts are performed on the system in the determined order (here, the conflict set may define a next set of stimuli to be applied to the application). The acts generate a new state of the system being observed, which corresponds to the generation of new facts. The process described above then repeats in a recursive fashion until some conclusion about the system is reached. That is, the engine's recursive conflict resolution actions are generally aimed at reaching some conclusion about the system (in this case, ideally, whether the application software is “safe” or “unsafe”).
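For purposes of illustration only, the following sketch shows one way the match/conflict-resolution/act recursion described above could be organized; all names (Rule, run_engine, the example facts) are hypothetical and do not correspond to any particular inference engine implementation.

```python
# Minimal sketch of an inference engine recursion (illustrative only).
# Facts describe the observed state of the application under test; rules
# map fact patterns to candidate acts (e.g., stimuli to apply next).
from dataclasses import dataclass
from typing import Callable, FrozenSet

@dataclass(frozen=True)
class Rule:
    name: str
    pattern: FrozenSet[str]          # facts that must all be present
    act: Callable[[set], set]        # performing the act yields new facts

def run_engine(facts: set, rules: list, max_recursions: int = 100) -> set:
    for _ in range(max_recursions):
        # 1. Match: build the agenda of acts whose patterns match the facts.
        agenda = [r for r in rules if r.pattern <= facts]
        if not agenda:
            break
        # 2. Conflict resolution: choose which acts to perform and in what
        #    order (trivially the first match here; real engines apply
        #    salience/recency strategies).
        conflict_set = agenda[:1]
        # 3. Act: performing the acts yields a new system state (new facts).
        new_facts = set()
        for rule in conflict_set:
            new_facts |= rule.act(facts)
        if new_facts <= facts:
            break                    # no new state: a conclusion is reached
        facts |= new_facts
    return facts

# Example: conclude "unsafe" if sensitive data handling meets a network send.
rules = [
    Rule("taint-to-net",
         frozenset({"reads_sensitive_data", "opens_network_connection"}),
         lambda f: {"unsafe"}),
]
print(run_engine({"reads_sensitive_data", "opens_network_connection"}, rules))
```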
The explorer engine 103_2 analyzes a control flow graph or other representation of the application's internal structure/operation that defines different states of the application and the stimuli needed to cause a transition from one particular application state to another particular application state (including multiple transitions through multiple states). In an embodiment, the representation of the application that is analyzed by the explorer engine 103_2 is generated by the static instrumentation engine 101 with one or more various software analysis techniques (e.g., control flow analysis, data flow analysis, value set analysis, event analysis, etc.). The explorer engine 103_2, e.g., through reference to various rules that describe appropriate and/or inappropriate code structures and/or code, and/or by way of notification from the behavior and logic engine 103_1 that certain inappropriate code structures and/or code may exist in the application based on its observed behavior, identifies “regions of interest” within the representation of the application. The explorer engine 103_2 then attempts to identify specific stimuli that may be applied to the application to cause it to transition to an identified region of interest.
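For illustration, a minimal sketch of the path-finding portion of this analysis follows, assuming the application representation has already been reduced to a state graph whose edges are labeled with the stimuli that cause each transition (the states, stimuli and function names here are fabricated for the example):

```python
# Sketch: find the stimulus sequence that drives the application from its
# current state to a region of interest, given a state-transition graph.
from collections import deque

# state -> list of (stimulus, next_state); a toy stand-in for the
# representation produced by the static instrumentation engine.
transitions = {
    "start":     [("launch", "main_menu")],
    "main_menu": [("tap_login", "login"), ("tap_about", "about")],
    "login":     [("submit_credentials", "sync")],
    "sync":      [("network_up", "upload_contacts")],  # region of interest
}

def stimuli_to_reach(current_state, region_of_interest):
    """Breadth-first search for the shortest stimulus sequence."""
    queue = deque([(current_state, [])])
    seen = {current_state}
    while queue:
        state, path = queue.popleft()
        if state == region_of_interest:
            return path
        for stimulus, nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [stimulus]))
    return None  # region not reachable with known stimuli

print(stimuli_to_reach("start", "upload_contacts"))
# -> ['launch', 'tap_login', 'submit_credentials', 'network_up']
```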
The corpus of rules available to the behavior and logic and explorer engines 103_1, 103_2 is provided from: i) rules 128 provided from the platform specific knowledge base 104; ii) rules 106 generated from a machine learning platform 105; and, iii) customer/user provided rules 107. The use of these rules is discussed in more detail further below.
The run-time test engine and observation environment 102 includes the application software being observed 108 and an instance 109 of the type of operating system the application is expected to later run on if ultimately deemed safe. In various embodiments, the run-time test environment 102 may include a first virtual machine 110 between the application under test 108 and the operating system instance 109. Here, the application software 108 is typically received as abstract executable code (e.g., Java byte code) or other CPU hardware agnostic code. The first virtual machine 110 converts the application's generic executable code into appropriate instructions for the underlying hardware platform 111. The first virtual machine 110 and application under test 108 can together be referred to as a “process”.
The operating system instance 109 may also run on a second virtual machine 112 that itself runs on a virtual machine monitor (VMM) layer 120 that exists between the second virtual machine 112 and hardware platform 111.
As is known in the art, a VMM layer 220 is responsible for partitioning/allocating the resources of the underlying hardware platform 211 (e.g., system memory, CPU threads, non volatile storage space, etc.) amongst the various second virtual machines 212_1 through 212_N. Essentially, each of the second virtual machines 212_1 through 212_N attempts to present the image of an entire computing system and its resources to its respective operating system instance 209_1 through 209_N. The VMM layer 220 and its virtual machines 212_1 through 212_N largely hide from the operating system instances 209_1 through 209_N the perspective that they are actually sharing a single underlying computing system 211.
The existence of multiple second virtual machines 212_1 through 212_N essentially permits the instantiation of multiple run time test processes 222_1 through 222_N that are isolated from one another. The concurrent existence of multiple, isolated run time test processes 222_1 through 222_N permits different types of coverage and observation sequences to be concurrently run on a single application.
That is, different instances of the same application may be provided in different run time processes so that different types of coverage and observance sequences can be concurrently performed on the same application. Alternatively or in combination, different applications can be concurrently observed in the multiple run-time processes. For instance, a first application may be observed in a first run time test process (e.g., process 222_1) while a second, different application is observed in a second run time test process (e.g., process 222_2). The second different application may be a different version of the first or a different application entirely.
Additionally, instances of different operating system types may support different run time processes. For example, an ANDROID operating system instance may support a first run time process while an iOS operating system instance may support a second run time process. Alternatively, different versions of the same operating system may concurrently execute on a virtual machine monitor layer to support two different run time processes. Concurrent testing of multiple application instances (whether different instances of the same application, respective instances of different applications, different versions of a same application or some combination thereof) enhances the overall performance of the system.
The central intelligence engine 103, returning to FIG. 1, is able to enable and disable various monitoring functions that are embedded within the software levels of the run time environment.
Here, each of the first virtual machine 310, the operating system instance 309 and the second virtual machine 312 is retrofitted with various monitoring functions 313_1 through 313_M that the central intelligence engine 103 is able to enable/disable. For example, the central intelligence engine 103 may enable certain monitoring functions (e.g., monitoring functions 313_1 and 313_3) while disabling the remaining monitoring functions (e.g., functions 313_2 and 313_4 (not shown) through 313_M). In an embodiment, the monitoring functions at least include: i) a system calls monitoring function 313_1; ii) a data tracking monitoring function 313_2; and, iii) a device operation monitoring function 313_3.
As observed in FIG. 3, these monitoring functions are integrated at various software levels of the run time environment 302.
The system calls monitoring function 313_1 monitors the run time execution of the application's executable code and flags any system calls. Here, a system call is essentially any invocation 315 of the underlying operating system instance 309 made by the application under test 308 or its virtual machine 310. As is understood in the art, an operating system provides services for basic uses of the hardware platform. An application's request to use such a service corresponds to a system call. The types of system calls an application can make typically include process control system calls (e.g., load, execute, create process, terminate process, wait (e.g., for an event), allocate or free a system memory range), file management system calls (e.g., create/delete file, open/close file, get/set file attributes), information maintenance system calls (e.g., get/set time or date) and I/O system calls such as communication system calls (e.g., create/delete network connection, send/receive messages, attach/detach remote devices) and user interface operating system (OS) calls.
In order to flag any system calls made by the application 308 or virtual machine 310, in an embodiment, monitoring function 313_1 detects a system call (such as any of, or a masked subset of, the system calls mentioned above) and reports the event to the central intelligence engine 103 along with any parameters associated with the call. For example, if an application seeks to open a network connection to a particular network address, the system call monitoring function 313_1 will report both the request to open the connection and the network address to the central intelligence engine 103. The monitoring function may intercept system calls by “hooking” the system calls to capture the passed parameters.
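The hooking approach can be sketched as a wrapper that reports the call and its parameters before forwarding the call. The following Python toy is illustrative only; the function names and reporting channel are assumptions, not any platform's actual hooking API.

```python
# Sketch: "hooking" a system-call-like function so that each invocation
# and its parameters are reported before the real call proceeds.
import functools

def report_to_engine(event, **params):
    # Stand-in for the channel back to the central intelligence engine.
    print(f"[monitor 313_1] {event}: {params}")

def hook(syscall_name):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            report_to_engine(syscall_name, args=args, kwargs=kwargs)
            return fn(*args, **kwargs)   # forward to the real call
        return wrapper
    return decorator

@hook("open_network_connection")
def open_network_connection(address, port):
    return f"connected to {address}:{port}"   # placeholder behavior

# Both the request and its parameters (e.g., the network address) are
# reported, mirroring the example in the text.
open_network_connection("203.0.113.7", 443)
```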
The data tracking monitoring function 313_2 tracks specific items of data within the application 308. As is understood by those of ordinary skill, data is usually identified by the memory location and/or register location where it is stored. The executable code of an application 308 specifically calls out and identifies these locations. Frequently, data will move from one memory/register location to another. The data tracking monitoring function 313_2 tracks the movement of a specific item of data and reports any suspicious activity to the central intelligence engine 103. More specifically, in an embodiment, the data tracking monitoring function 313_2 is provided with the identity of a specific “sensitive” (e.g., highly confidential) data item, and, reports to the central intelligence engine any attempt by the application to cause the data to be directed out of the run time environment (such as attempting to send the data over a network connection), or, stored in a file or other storage (e.g., register and/or memory) location other than an approved location.
In an embodiment, the data tracking monitoring function 313_2 maintains internal tables having an entry for each register and system memory address referred to by the application code. Each entry also identifies whether its corresponding register/memory address is “tainted”. The data tracking monitoring function 313_2 marks as tainted any register/memory location where the sensitive information is kept. Additionally, the data tracking monitoring function 313_2 marks as tainted any register or memory location to which a tainted register/memory location's content is moved. The data tracking monitoring function 313_2 will also clear a tainted register/memory location (i.e., mark it as no longer tainted) if it is overwritten with the contents of a non-tainted register/memory location or is otherwise erased (e.g., by being cleared to all zeroes).
By so doing, all locations where the sensitive information resides are known. Any attempt by the application 308 to direct data from a tainted location outside the run time environment 302 or to an “unapproved” register, memory or file location is reported to the central intelligence engine 103. The report includes pertinent ancillary information associated with the attempt (such as the network address to which a data transmission was attempted, or the unapproved file location where an attempted store was made). In the case of unapproved network destinations and/or storage locations, the data tracking monitoring function 313_2 is informed beforehand of at least one of the data item's approved or unapproved data destinations/locations by the central intelligence engine 103. In many cases, the identity of the sensitive information is made known to the central intelligence engine 103 by way of the user provided rules 107.
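A minimal sketch of such a taint table follows, assuming a simplified machine with named register/memory locations (the location names, the APPROVED set and the reporting call are illustrative):

```python
# Sketch: taint propagation over register/memory locations, as described
# above. Moving tainted content taints the destination; overwriting a
# tainted location with untainted content clears it.

tainted = set()                    # locations currently holding sensitive data
APPROVED = {"mem:secure_store"}    # approved destinations for the data item

def mark_sensitive(loc):
    tainted.add(loc)

def move(src, dst):
    """Model 'dst <- contents of src'."""
    if src in tainted:
        tainted.add(dst)           # taint follows the data
    else:
        tainted.discard(dst)       # overwrite with clean data clears taint

def store_or_send(src, destination):
    """Model an attempt to store/transmit the contents of src."""
    if src in tainted and destination not in APPROVED:
        print(f"[monitor 313_2] tainted data from {src} -> {destination}")

mark_sensitive("mem:0x1000")       # e.g., identified via user rules 107
move("mem:0x1000", "reg:r2")       # r2 is now tainted
move("reg:r0", "mem:0x1000")       # clean overwrite clears the taint
store_or_send("reg:r2", "net:198.51.100.9")  # reported: unapproved destination
```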
The device operation monitoring function 313_3 monitors calls 316 made by the application 308 or virtual machine 310 to the underlying hardware platform directly (i.e., not through an OS system call). Here, a “device” is generally understood to be any attachment or peripheral (attachments/peripherals are typically coupled to a hardware system's I/O control hub (ICH) or system memory control hub (MCH)). Attachments/peripherals typically include non volatile storage devices (e.g., disk drives, SSD devices), network interfaces (e.g., SMS functions, HTTP functions), keyboards, displays and mouse/touchpad/control stick devices, integrated camera devices, integrated audio devices (both input (e.g., microphone) and output (e.g., speaker system)) and printers, among other possible devices. In the context of the device monitoring function 313_3, however, the term “device” is understood to be broader than just peripherals. For example, if an application attempts to directly write to control register space (such as model specific register space) of a CPU core or a memory controller within the hardware platform, the device operation monitoring function 313_3 will track these operations as well.
Here, depending on system implementation, various devices within the underlying hardware may be manipulated by the application 308 or virtual machine 310 through direct communication to the underlying hardware without involvement of the operating system (e.g., by writing to the underlying platform's register space). These operations are tracked by the device operation monitoring function 313_3. By contrast, the application's behavior with respect to those devices or functions called thereon that are not directly communicated to the hardware are typically manipulated through the operating system 309. These calls are therefore tracked with the system call monitoring function 313_1.
When a call is made to a device directly through the hardware, the device operation monitoring function 313_3 reports the call to the central intelligence engine 103 identifying both the targeted device and the type of call that was made.
As mentioned above, in one approach, a monitoring function will not monitor and/or report out an event it is designed to detect unless it is specifically enabled (e.g., by the central intelligence engine 103) beforehand.
In many cases the application instance 308 is a mobile application that is effected with abstract executable code (e.g., Java bytecode) that needs to be converted into the object code of a particular type of CPU by the first virtual machine 310. In cases where the application instance 308 is provided as object code that is already targeted for a specific CPU type (i.e., the first virtual machine 310 is not needed), the monitoring functions 313_1 to 313_M may nevertheless be integrated into the run time environment so as to observe the interface between the application 308 and the operating system instance 309. For example, as stated earlier, the data tracking monitoring function 313_2 can be integrated into the second virtual machine 312 instead.
Along with the monitoring functions 313_1 through 313_M, stimuli functions 314_1 through 314_P are also integrated into the run time environment 302. Whereas the monitoring functions 313_1 through 313_M are designed to report observed behaviors of the application 308 to the central intelligence engine 103, by contrast, the stimuli functions 314_1 through 314_P are designed to apply specific input values and/or signals to the application 308 (e.g., to drive the application's execution to a region of interest, to observe the application's behavioral response to these inputs, etc.). The specific input values and/or signals to be applied are provided by the central intelligence engine 103.
As observed in FIG. 3, in an embodiment, the stimuli functions at least include: i) a data value stimuli function 314_1; ii) an OS event/state stimuli function 314_2; and, iii) a hardware event/state stimuli function 314_3.
The data value stimuli function 314_1 is able to set specific low level data values of the application's code. The data value may be specific data that is processed by the application or control data used to control the application. For example, the data value stimuli function 314_1 may be used to set an instruction pointer to a specific value to begin or jump application execution to a specific point in the application's code. Likewise, the data value stimuli function 314_1 may be used to create/change/delete any data value within the register or system memory space that is processed by the application 308. This capability may be used, for instance, to change the state of the application 308 to any particular state so the application's behavior in response to the artificially set state can be observed.
The OS event/state stimuli function 314_2 is used to create any event that the OS might report to the application 308 (e.g., incoming call, incoming packet, etc.) or present any OS state that is observable to the application 308 (e.g., such as the state of various devices within the system). Here, the OS event/state stimuli 314_2 is essentially used to manipulate the OS portion of the application's environment. Likewise, the hardware event/state stimuli function 314_3 is used to create any event that the hardware might report to the application 308 (e.g., an incoming call for an SMS device that does not communicate to the application through the OS, etc.) or present any state of the hardware observable to the application 308 (e.g., such as the state of various control registers within the system). Here, the hardware event/state stimuli 314_3 is essentially used to manipulate the hardware portion of the application's environment.
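For illustration, the three stimuli functions might be sketched as follows, with the application's registers, memory and event queue modeled as a simple dictionary (all names are placeholders):

```python
# Sketch: the three stimuli functions, driven by commands from the
# central intelligence engine. The state model below is a placeholder.

app_state = {"registers": {"pc": 0x0040}, "memory": {}, "pending_events": []}

def set_data_value(space, name, value):
    """Data value stimuli (cf. 314_1): set a register/memory value
    directly, e.g., point the instruction pointer at a code region."""
    app_state[space][name] = value

def inject_os_event(event, **details):
    """OS event/state stimuli (cf. 314_2): present an OS-level event
    (incoming call, incoming packet, etc.) to the application."""
    app_state["pending_events"].append(("os", event, details))

def inject_hw_event(event, **details):
    """Hardware event/state stimuli (cf. 314_3): present a hardware-level
    event that does not pass through the OS."""
    app_state["pending_events"].append(("hw", event, details))

set_data_value("registers", "pc", 0x2F00)   # jump execution to a new point
inject_os_event("incoming_packet", src="203.0.113.7")
inject_hw_event("sms_incoming", sender="+15550100")
print(app_state)
```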
Whereas the run time environment has standard monitoring and stimuli functions embedded in the software platform beneath the application under test, the static instrumentation engine 101, returning to FIG. 1, is able to instrument monitoring and stimuli functions into the code of the application itself.
Notably, mobile applications written for ANDROID® of Google, Inc. as well as applications written in Java® (whether mobile or desktop) conform very well to the framework outlined in FIG. 3 because such applications are normally written to run on a first virtual machine.
Other applications, e.g., for other systems, may not normally use an available first virtual machine. In one approach, applications that normally use an available first virtual machine are stimulated/monitored in the dynamic runtime environment with one or more functions (e.g., functions 313_1 and 313_M-2 among others) being embedded in the first virtual machine level 310, whereas applications that are not normally written to run on an available first virtual machine level (e.g., an application that has been compiled to run on its underlying hardware CPU) may have these monitoring functions embedded in the underlying OS instance 309 or second virtual machine level 312 of the run time environment 302. Alternatively, one or more of these stimulation/monitoring functions may be statically added to the applications themselves by the static instrumentation engine 101 of FIG. 1.
According to one embodiment of the process flow of the static instrumentation engine 401, an application to be observed 408 is provided to the translator 414 in a first low level form (e.g., DALVIK .dex executable code). The translator 414 translates the executable/object code up to a higher, more abstract code level (e.g., in the case of .dex, a .dex application is translated up to a RISC-like version of Java byte code that contemplates fewer instructions in the instruction set architecture than pure Java byte code). The higher level code representation of the application is then provided to the application representation generation unit 415, which studies the application's internal structures, code flows, etc. to generate a representation of the application, such as a control flow graph, that defines specific states of the application and various stimuli needed to cause a transition from one application state to another application state. The representation of the application is then provided to the explorer component 103_2 of the central intelligence engine 103.
The explorer portion 103_2 of the central intelligence engine 103 analyzes the application representation to identify what parts of the application may correspond to improperly behaving code (a “region of interest” within the code), and what set of stimuli are needed to reach that code and activate it. Identification of a region of interest may be based on any of the user provided rules, machine learned rules, hard-coded rules or observations made by the behavior and logic engine 103_1 that are reported to the explorer 103_2. In an embodiment, one or more of the identities of the types of regions of interest found in the application, the types of stimuli needed to reach such code and the types of stimuli that might activate it are shared with the behavior and logic engine 103_1. The behavior and logic engine 103_1 utilizes this information to establish a next set of acts/stimuli to be performed on the application (e.g., a next “conflict set”) and to establish, e.g., at least partially, specific behaviors of the application to be monitored.
As part of the definition of the next set of stimuli to be generated and/or next set of behaviors to be monitored, certain ones of the run time environment monitoring and/or stimuli functions 313_1 to 313_M, 314_1 to 314_P of FIG. 3 may be enabled or disabled, and/or the application itself may be statically instrumented with additional monitoring and/or stimuli functions.
In response, the instrumentation unit 416 instruments the abstracted/translated version of the application's code with the desired monitoring and/or stimuli functions. In cases where the application has already been instrumented with other static monitoring/stimuli functions, in an embodiment, the application's state within the run time environment 102 (e.g., specific data values) is externally saved outside the application and the application is returned to the static instrumentation engine 401. The static instrumentation engine 401 retranslates the application with the translator unit 414 and then instruments it with the new monitoring/stimuli functions with the instrumentation unit 416. The retranslator 417 retranslates the newly instrumented code to a lower level of code and provides it to the run time environment 102. The previously saved application state information is reloaded into the application.
In one embodiment, a new application that has not yet entered the run time environment is instrumented with default static monitoring/stimuli functions. In this case, the new application is translated up with translator 414, and a representation of the new application is generated with representation generation unit 415 and presented to the explorer engine. The explorer engine 103_2 identifies where the default static monitoring/stimuli functions should be placed in the translated application's code structure and communicates these locations to the instrumentation unit 416. The instrumentation unit 416 instruments the translated application at the identified location(s), the re-translation unit 417 retranslates the statically instrumented application to a lower level code, and the lower level code instance of the instrumented application is sent to the run time environment.
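The translate/instrument/retranslate cycle, including the state save/reload of the preceding paragraph, can be sketched as follows; each function is a placeholder standing in for the corresponding unit (414, 416, 417), not an actual implementation:

```python
# Sketch of the static instrumentation cycle: translate the application
# up, instrument it, retranslate it back down, preserving any externally
# saved run time state. All functions are illustrative placeholders.

def translate_up(low_level_code):             # translator 414
    return {"level": "abstract", "code": low_level_code}

def instrument(abstract_app, functions):      # instrumentation unit 416
    abstract_app = dict(abstract_app)
    abstract_app["monitors"] = list(functions)
    return abstract_app

def retranslate_down(abstract_app):           # re-translator 417
    return {"level": "low", "code": abstract_app["code"],
            "monitors": abstract_app["monitors"]}

def reinstrument(app_in_runtime, saved_state, new_functions):
    """Re-instrument an application already under test: its state has been
    externally saved, the app is re-translated and re-instrumented, and
    the state is reloaded afterwards."""
    abstract = translate_up(app_in_runtime["code"])
    instrumented = instrument(abstract, new_functions)
    low = retranslate_down(instrumented)
    low["state"] = saved_state                # reload previously saved state
    return low

app = {"code": "classes.dex"}
state = {"registers": {"r0": 7}}
print(reinstrument(app, state, ["syscall_313_1", "data_tracking_313_2"]))
```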
In an embodiment, the application instrumentation unit 416 can embed any of a system calls monitoring function, a data tracking monitoring function and a device operation monitoring function (as discussed above with respect to FIG. 3) into the application itself.
Moreover, in an embodiment, the application instrumentation unit 416 can instrument an application with two additional types of monitoring/stimulation. The additional types of monitoring include: i) dynamic load monitoring; and, ii) application API call/event monitoring. The additional stimuli function includes application API call stimulation.
In the case of dynamic load monitoring, the application is modified to track the effects of any code that the application dynamically loads. Here, as is understood in the art, an application may not initially include all of the code that it could execute. Instead, the application includes references to network and/or file locations containing additional code that the application will “load” under certain circumstances (such as when the application needs to execute it). An application typically executes the code it dynamically loads.
In the case of dynamic load monitoring, the explorer engine 103_2 of the central intelligence unit 103 analyzes the representation of the application's internal structures/flows looking for program code constructs that correspond to dynamic code loading. In a typical circumstance, the application refers to dynamically loaded code with a character string. As such, simplistically, the explorer unit 103_2 looks for a character string associated with a dynamic load operation and causes the application instrumentation unit 416 to add monitoring code into the application that will detect if the string is invoked as a dynamic load reference as well as monitor the behavior of any code that is dynamically loaded from the string and subsequently executed.
The instrumented monitoring code is also configured to report pertinent observations to the central intelligence engine 103. Such observations include whether code has been dynamically loaded; where dynamically loaded code was loaded from; whether dynamically loaded code is being (or has been) executed; and various behaviors of the executing code. The reported behaviors can include any of the behaviors described above with respect to the system call, data tracking and device monitoring functions (whether tracked within the application or beneath it).
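A toy sketch of this kind of instrumented dynamic load monitoring follows; the suspect string, loader and report channel are fabricated for the example:

```python
# Sketch: dynamic load monitoring. The instrumented wrapper detects when
# a previously identified string is used as a dynamic load reference and
# reports the load and the subsequent execution. Names are illustrative.

SUSPECT_STRINGS = {"http://203.0.113.7/payload.dex"}  # found by explorer 103_2

def report(event, **details):
    print(f"[dyn-load monitor] {event}: {details}")

def dynamic_load(reference):
    """Stand-in for the application's real dynamic code loader."""
    if reference in SUSPECT_STRINGS:
        report("dynamic_load", source=reference)
    return f"code_from({reference})"          # placeholder for fetched code

def execute_loaded(code):
    report("executing_dynamically_loaded_code", code=code)
    # ... loaded code would run here, under the same monitoring regime ...

execute_loaded(dynamic_load("http://203.0.113.7/payload.dex"))
```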
In the case of application API call/event monitoring, the instrumentation code that is inserted into the application monitors calls made to the application during runtime and/or events or other “output” generated from the API. Here, as is known in the art, an application is “used” by making a call to the application's application programming interface (API) (e.g., by a user acting through a graphical user interface (GUI)). The API call/event monitoring function detects such calls/events/output and reports them to the central intelligence engine 103. Here, the application itself may contain improperly behaving code that artificially invokes the application's API.
For example, the improperly behaving code may artificially generate application API related actions to cause the application to believe a user is invoking the application for a specific use. The application API call monitoring function would detect any calls made to the API and report them. Knowing what precise user inputs were actually generated, if any, the central intelligence unit 103 could determine that the API calls are malicious.
The application API stimulation function provides stimuli to the application through its API. Here, the central intelligence engine can ask the application to perform certain tasks it was designed to perform. By logging the stimuli applied to the application by way of the application API stimulation function and comparing these stimuli to reports received from the application API tracking function, the central intelligence unit 103 will be able to detect any API invocations made by malicious code. That is, any detected API call that was not purposefully stimulated by the API stimulation function may be the act of malicious code.
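The cross-check described here reduces to comparing two logs. A minimal sketch, with fabricated call names and timestamps:

```python
# Sketch: cross-checking API monitoring against API stimulation. Any
# observed API call that was not deliberately stimulated is flagged as a
# possible act of malicious code. (Illustrative log format.)

stimulated = [("sync_contacts", "2024-01-01T10:00:00"),
              ("refresh_feed",  "2024-01-01T10:00:05")]

observed   = [("sync_contacts", "2024-01-01T10:00:00"),
              ("refresh_feed",  "2024-01-01T10:00:05"),
              ("send_sms",      "2024-01-01T10:00:06")]  # never stimulated

stimulated_calls = {name for name, _ in stimulated}
suspicious = [(name, t) for name, t in observed if name not in stimulated_calls]
print(suspicious)   # -> [('send_sms', '2024-01-01T10:00:06')]
```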
The explorer component therefore: i) identifies sections of the application's code that are “of interest”; ii) identifies paths through the application's code that can be used to reach a particular region of interest within the code; and, iii) identifies input stimuli that may be necessary to trigger one or more state transitions of the application along any such paths to the identified code regions of interest.
In performing these tasks, the explorer is provided with monitored information from one or more of the above described monitor functions within the run time environment. The reports from the monitoring functions permit the explorer to identify the application's current state. For example, based on the reported monitor information, the explorer may determine that the application is currently within state 631_3. Notably, in order to receive this monitored information the explorer may have previously requested (e.g., for a previous inference engine recursion) that certain monitors be enabled and/or that certain previously non-existent static monitors be embedded in the application. Further still, the explorer may have requested such a particular set of monitors because the explorer could not identify the application's state and needed to add the additional monitoring capability to determine it.
With the application's current state eventually recognized at state 631_3, the explorer is next able to identify a section of the application's code as being “of interest”. In the present example, assume the explorer identifies code region 632 as being “of interest.” Here, the ability to identify a section of code as being of interest may be derived from any of the aforementioned rules. For example, the aforementioned user provided rules 107 may identify an item of data as being particularly sensitive. In this case, the explorer might recognize that basic blocks of code region 632 are written to process or otherwise use this item of data. As another example, which may work in combination with the aforementioned example, the aforementioned machine learning rules 106 and/or platform specific rules 128 may identify a specific combination of states and associated basic blocks that correspond to the operation of improperly behaving code. Additionally or in the alternative, the behavior and logic engine 103_1 may determine, based on its observations of the application, that it may contain certain types of improperly performing code and notify the explorer component of these determinations. In response, the explorer engine can look for corresponding region(s) of interest. That is, the explorer component can look for code structure(s)/profile(s) that correspond to the type(s) of improper code identified by the behavior and logic engine 103_1.
With a region of interest 632 having been identified, the explorer next begins the process of determining a path 633 through the code from the current state 631_3 to the region of interest 632. The exemplary path 633 of FIG. 6 passes through a sequence of intermediate application states, each of which must be reached, in succession, through the application of appropriate stimuli.
According to one approach, referred to as symbolic execution, the explorer reduces each basic block of each state to one or more logical expressions and their associated variables 640. Here, ultimately, each of the instructions of a basic block can be expressed as a logical axiom of some kind. The logical axioms of the basic block's instructions can be combined to form one or more logical expressions that express the processing of the basic block as a function of the data values (now expressed as variables) that are processed. The expression(s) are presented to a solver 641 which determines whether a “solution” exists to the expression(s) and, if so, what the constraints are. Here, typically, the constraints correspond to limited ranges of the variables/data values that are processed by the basic block's instructions.
Thus, at this point, the explorer has reduced the application's data values to specific limited combinations thereof that have the potential to cause the application to transition to a desired state. In an embodiment, the explorer causes these solutions to be crafted as appropriate inputs to the stimuli functions embedded in the run time environment 642. Conceivably, certain input stimuli functions will need to be enabled or instrumented into the application. Eventually, e.g., through a limited trial-and-error approach, the specific set of variables that lead to the correct state transition are realized. Repeating the process for each state eventually leads program execution to the region of interest 632.
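As an illustration of the solver step, suppose a basic block guards a state transition with the conditions x*3 + 2 > 20 and x < 10. Handing these expressions to an SMT solver (below, the publicly available z3 solver stands in for solver 641; its availability is an assumption of the example) yields a concrete data value that can be crafted into an input stimulus:

```python
# Sketch: a basic block's branch conditions reduced to logical expressions
# and handed to an SMT solver. The solver returns concrete data values
# that satisfy the path constraints.
from z3 import Int, Solver, sat

x = Int("x")                       # data value processed by the basic block
s = Solver()
s.add(x * 3 + 2 > 20)              # axiom derived from the block's instructions
s.add(x < 10)                      # further constraint along the path

if s.check() == sat:
    model = s.model()
    # The satisfying value is crafted into an input stimulus (e.g., via the
    # data value stimuli function 314_1) to trigger the state transition.
    print("apply stimulus x =", model[x])
else:
    print("transition unreachable: no data values satisfy the path")
```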
Through these kinds of processes the explorer is able to efficiently direct program execution to regions of interest.
Upon reaching a region of interest, the monitors within the runtime environment are set to observe the kinds of behaviors that will help determine whether the region of interest corresponds to improperly behaving code or not. Here, the behavior and logic engine 103_1 receives the reported information from the monitors and can begin the process of determining whether the region of interest corresponds to improper behavior.
Thus, in this fashion, the explorer 103_2 is able to efficiently bring the application to various regions of interest and the behavior and logic engine 103_1 can determine whether the regions of interest correspond to improperly behaving code. Here, thorough examination of the application can be achieved by repeatedly using the explorer 103_2 to bring the application to a “next” region of interest and the behavior and logic engine 103_1 to characterize the next region of interest. That is, the overall behavior of the central intelligence 103 can be somewhat recursive in nature where the explorer engine 103_2 repeatedly identifies respective regions of interest and what is needed to bring the code's execution to the region of interest. The explorer engine 103_2 and/or behavior and logic engine 103_1 then instrument and/or enable appropriate monitors and bring the application's execution state to the region of interest. The behavior and logic engine then receives the monitoring data and executes a series of inference engine recursions to reach a conclusion about the region of interest and/or application. The overall process then repeats with the explorer engine 103_2 identifying a next region of interest. Throughout the process the explorer engine may also receive reported information from various monitors so it can determine/confirm the present state of the application.
Notably, in an embodiment, comprehending the application's state includes the explorer engine 103_2 maintaining the state of the application's GUI so it can determine how the GUI triggers certain application acts to be performed (e.g., the application representation utilized by the explorer engine 103_2 provides information that links specific GUI feature activations to specific processes performed by the application). With this information the explorer engine 103_2 can set input conditions that effectively “use” the GUI to bring the application's state to (or at least closer to) a desired region of interest within the application. Additionally, the explorer engine, e.g., with reference to applicable rules and behavior and logic engine notifications, detects the presence of possibly improperly behaving code. Here, certain types of improperly behaving code will attempt to trigger processes of an application by “pretending” to be a user that is using the application through the GUI. That is, improperly behaving code within the application (or external code that is in communication with it) will attempt to cause certain application actions by accessing various GUI triggers.
Apart from just the GUI, more generally, the explorer engine, e.g., by reference to particular rules, may also identify improper “low-level” application behavior (such as any improper state transition). This detected behavior can likewise be reported to the behavior and logic engine 103_1 which incorporates this information into a following inference engine recursion.
The explorer function also analyzes the application representation and, based on characterization information from the behavior and logic engine and/or one or more hard coded rules, machine learned rules and/or user provided rules, identifies a region of interest within the application 707. The explorer engine determines stimuli that can be applied to the application to drive its execution to the region of interest 708. Based on the identified region of interest and/or the determined stimuli, the explorer and/or behavior and logic engine determine what monitoring and stimuli functions (and associated stimuli) should be enabled 709. This may optionally include instrumenting the application itself with additional monitoring and/or stimuli functions 710.
The determined stimuli are applied and the enabled monitoring functions report respective monitoring information 711. The behavior and logic engine uses the reported information to characterize the application's behavior and the explorer engine uses the reported information to track the state of the application 712. New stimuli and/or monitoring functions may be determined (which may require additional instrumentation of the application itself) that are enabled and/or otherwise applied 713. The process repeats until the region of interest is reached and characterized as safe or unsafe 714. Upon the region of interest having been characterized as safe or unsafe, the explorer function re-analyzes the representation 707 to determine a next region of interest. When all identified regions of interest have been characterized, the coverage analysis of the application is complete.
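The overall loop 707-714 can be sketched as follows, with each step reduced to a toy stand-in function (the region names and behaviors are fabricated):

```python
# Sketch of the coverage loop 707-714 using toy stand-ins for each step.

regions = ["send_contacts_block", "dynamic_load_block"]   # pretend findings

def identify_region_of_interest():                        # step 707
    return regions.pop(0) if regions else None

def determine_stimuli(region):                            # step 708
    return [f"drive_to:{region}"]

def apply_and_observe(stimuli):                           # steps 709-713
    return {"region_reached": True, "behavior": "attempted network send"}

def characterize(reports):                                # step 714
    return "unsafe" if "network send" in reports["behavior"] else "safe"

while True:
    region = identify_region_of_interest()
    if region is None:
        break                   # all regions characterized: coverage complete
    verdict = characterize(apply_and_observe(determine_stimuli(region)))
    print(region, "->", verdict)
```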
Although not shown in FIG. 7, once the coverage analysis is complete, the application as a whole can be characterized as safe or unsafe based on the characterizations of its regions of interest.
Referring back to FIG. 1, recall that the rules utilized by the central intelligence engine 103 may include hardcoded rules 128 provided from the platform specific knowledge base 104, rules 106 generated from the machine learning platform 105 and customer/user provided rules 107.
Hardcoded rules 128 typically provide generic or well known/public rules and/or rules that have been written manually. For example, certain viruses and other forms of mis-behaving code have signatures or other behaviors/features that are widely known, and rules to address them can be scripted by humans. For example, rules that encapsulate the signature or behavior of a well known “trojan horse” virus may be hand written and added to database 104. Here, for example, database 104 is a store that keeps rules for all known forms of mis-behaving code and/or handwritten rules. Upon bring-up of the framework 100, these rules 128 are made accessible to the framework. Typically, the hard coded rules 128 are not provided by the user but are instead largely created or otherwise accessed by a software security entity (e.g., a corporation that provides software security products) that provides the framework of FIG. 1 as a product and/or service.
In a further embodiment, database 104 also provides platform specific information to the monitoring functions and/or stimuli functions which are themselves generically written. For example, in an embodiment, the OS monitoring function is originally written around a set of generic OS calls (e.g., save file, read file, etc.). These generic calls, however, have specific forms in a particular environment/platform (e.g., an iOS “save file” call has a certain syntax that is different than the syntax of an ANDROID “save file” call). Database 104 therefore additionally provides platform specific information for the generic monitoring functions so they can detect events within a particular environment/platform. Similarly, database 104 additionally provides platform specific information for generic stimuli functions that are used to generate stimuli that are particular to a specific environment/platform (e.g., a generic event generated according to its specific iOS form or ANDROID form).
In the case of the machine learning function 105 and the rules 106 generated therefrom, as is known in the art, multiple (e.g., millions of) software instances and/or environments, some of which may be similar to the application 108 being observed and many others of which may be nothing like it, have been previously “studied” (e.g., over the course of years) by a machine learning system 105. From its observations of these software instances/environments, the machine learning system 105 has deduced that certain behaviors can be characterized as improper and reduced these deductions to a set of rules, which are then provided in rule set 106. For example, a machine learning system 105 could be trained on email messages to learn to distinguish between malware code and non-malware code. After this learning, it can then establish a set of rules that classify code as either malware or non-malware.
In general, a machine learning system 105 will typically be given a task (e.g., identify malware) and gain experience attempting to satisfy that task, with commensurate feedback as to its successes and failures. Over time, with automated modification to the manner in which it attempts to accomplish the task, the machine learning system 105 may recognize improvement in its ability to accomplish the task. In this sense, the machine can be said to have learned. Eventually, e.g., if its success in accomplishing the task crosses some threshold, the machine learning system 105 may identify rules for rule set 106 that essentially “educate” the framework of FIG. 1 with the benefit of its experience.
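As a toy illustration of this learn-then-extract-rules flow (assuming the scikit-learn library is available; the behavioral features and labels below are fabricated, whereas a real system 105 would train on massive corpora):

```python
# Toy sketch: train a classifier on labeled code behaviors, then read
# rules off the learned model (a stand-in for deriving rule set 106).
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["uses_dynamic_load", "sends_data_off_device", "reads_contacts"]
X = [  # one row of observed behaviors per previously studied code instance
    [1, 1, 1],   # malware
    [1, 1, 0],   # malware
    [0, 0, 1],   # benign
    [0, 0, 0],   # benign
    [0, 1, 0],   # malware
    [1, 0, 0],   # benign
]
y = [1, 1, 0, 0, 1, 0]     # 1 = malware, 0 = benign

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned decision paths can be exported as human-readable rules and
# folded into the rule set consulted by the behavior and logic engine.
print(export_text(clf, feature_names=features))
```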
Because of the automated nature of machine learning, the machine learning system 105 can have a massive history of experience in terms of the number of software instances and environments it has observed and the amount of time over which it has been able to observe them. Here, the machine learned rules 106 provide details as to the specific behaviors of various improperly behaving forms of code that are used by the behavior and logic engine 103_1 to determine whether the application's behavior corresponds to such mis-behavior. Additionally, the machine learned rules 106 may provide details as to specific low level code structures of improperly behaving code that are used by the explorer engine 103_2 to identify “regions of interest” within the application.
The custom rules 107, 807 are entered through the user interface 850 and incorporated into the set of rules that are referred to by the behavior and logic engine 103_1, 803_1 and explorer engine 103_2, 803_2 of the central intelligence engine 103, 803 discussed at length in the preceding discussion(s).
In a typical scenario, the user rules will identify sensitive items of data that are operated on or otherwise processed by the application being analyzed. For example, if the mobile application is designed to operate on information from a corporate database, the custom rules 807 will identify sensitive items of information from the database (e.g., confidential and/or highly confidential information). In response to these rules, possibly in combination with other rules or input by the behavior and logic engine 803_1, the explorer engine 803_2 will identify as a “region of interest” any application code that operates on this information and cause execution of the application to be brought to any such region of interest.
The behavior and logic engine 803_1 will understand acceptable versus unacceptable uses of this information by the application and monitor the application's use of the information accordingly. For example, the behavior and logic engine 803_1 may cause the application or its underlying platform in the runtime environment to perform data tracking on the information. Upon data tracking being enabled for one or more of the sensitive data items and the application having moved its execution to regions of interest that use the information (through the influence of the explorer engine 803_2), the behavior and logic engine 803_1 will track locations where the data is actually stored and/or sent and compare these locations against acceptable register, system memory and non volatile memory storage locations where the sensitive information can be stored as well as acceptable network destinations (e.g., network address locations) where the information can be sent. These acceptable storage and/or network address locations may be defined partially or entirely by the user through the user interface 850 (likewise, unacceptable storage locations and/or network destinations may also be identified).
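Custom rules of this kind are naturally expressed as data. A minimal sketch follows, with fabricated item names, storage locations and destinations:

```python
# Sketch: custom user rules of the kind described above, expressed as
# data and checked against monitor reports. Field names are illustrative.

user_rules = {
    "sensitive_items": ["customer_ssn", "salary_table"],
    "approved_storage": ["mem:secure_store", "file:/data/app/private"],
    "approved_destinations": ["corp-db.example.com"],
}

def check_report(report):
    """Compare a data tracking report against the user provided rules."""
    if report["item"] not in user_rules["sensitive_items"]:
        return "ignore"
    ok = (report["destination"] in user_rules["approved_storage"] or
          report["destination"] in user_rules["approved_destinations"])
    return "acceptable" if ok else "unacceptable use of sensitive data"

print(check_report({"item": "customer_ssn",
                    "destination": "paste-site.example.net"}))
```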
Alternatively or in combination, because data tracking may involve low level insight into the application, the explorer engine may likewise be configured to detect improper low level movements of the data via detected improper state transitions within the application. Definitions of such improper movements/transitions may additionally be provided to the explorer engine by the user through the custom user rules 807.
In another typical scenario, the user identifies improper behaviors (e.g., an attempt to engage in a communication session with a particular location, machine or database within a protected corporate intranet or attempts to access information within a protected or private region of system memory and/or register space of the application's run time environment). Again, the explorer engine 803_2 can attempt to identify regions of code that will perform the user identified improper action and bring the application's execution to such code. Either or both of the explorer engine 803_2 and behavior and logic engine 803_1 cause the application to be monitored appropriately. The explorer engine 803_2 causes the application's execution state to be brought to the region of interest and the behavior and logic engine 803_1 receives the monitoring data, implements further points of analysis and ultimately reaches a conclusion whether the region of interest is malicious. If the improper behaviors are defined at the application state transition level, the explorer engine can detect such improper behavior as well and report it to the behavior and logic engine.
As observed in FIG. 8, the user interface 850 also permits the user to identify one or more plug-ins that are to be integrated with the application, such as publically available plug-ins and/or mobile device management (MDM) plug-ins.
Publically available plug-ins are often downloaded from the Internet. They may be purchased or free. MDM plug-ins may be publically available or may be private code. They are typically used to manage the software environment of a smartphone or other mobile device. For example, with an MDM plug-in an IS department may be able to remotely configure, monitor, install/un-install and/or enable or disable various functions and/or software applications on any of its issued smartphones.
Here, through the user interface 850, a user is able to attach any such plug-in to the application before it is submitted to the static instrumentation engine 801 for translation and representation generation. The representation generation function then generates a representation of the application together with any of the plug-ins that the user has specified should be plugged into the application.
Applications may also be analyzed on their respective devices. In the case of a typical smartphone, which does not contain a large scale virtual machine monitor layer, the runtime environment discussed above with respect to FIG. 3 essentially reduces to the application, the first virtual machine and the operating system instance executing on the device's own hardware.
Here, feature 930 corresponds to an actual mobile device and run time environment 902 corresponds to the run time environment of the mobile device 930. The aforementioned possibilities for the locations and functions of various monitor functions 913 and stimuli functions 914 are as discussed in the applicable preceding sections above. In some implementations, a user may not have the ability to change, modify, re-install or replace the virtual machine layer 910 or operating system 909 in the device 930, in which case, all monitors and stimuli functions may be located within the application 908 by way of instrumentation.
In a typical usage case, a device 930 with an application 908 is communicatively coupled to the overall framework depicted in FIG. 9, e.g., by way of a network connection and/or a hardwired connection.
In the case of typical network connectivity, applications may be screened for safety on mobile devices that are “in the field”. For instance, as just one example, an application may be installed for actual use on a mobile device that is active in the field but the application has been fully instrumented with all monitoring and stimuli functions. Should a need arise to confirm that the application is still safe while it is in the field (e.g., because of a suspicious event or as a matter of periodic check-ups), the rest of the framework can communicate with these instrumented functions as well as any monitoring and/or stimuli functions that were embedded into the software layers beneath the application (e.g., virtual machine, operating system) before the device was released into the field for actual use. With any standard communication session between the device and the framework (e.g., over the Internet), the application can be fully screened. That is, monitors can send their respective monitoring information over the network to the framework and the framework can send commands to the stimuli functions and monitors over the network. Thus the application itself can be screened while it is deployed in the field.
Hardwired communicative coupling may be achieved through the mobile device's standard hardware I/O interface (e.g., a USB interface). Here, commands to the monitoring functions 913 and stimuli functions 914 are submitted by the framework to the mobile device 930 through the interface 940. Likewise, information from the monitoring functions 913 is reported to the framework through the interface.
In the case of instrumentation of the application 908, an instrumented application is created according to the processes discussed above in preceding sections and installed on the mobile device 930. If an instrumented application 908 that is installed on the device 930 needs to be instrumented with additional monitoring and/or stimuli functions, in an embodiment, the state of the application 908 is externally saved (e.g., through the interface 940 and into storage associated with the framework), a new instance of the application having the new instrumentation set is created and installed on the device 930. The application state is then reloaded onto the mobile device through the interface 940 and analysis of the application continues. Alternatively, the application state could conceivably be stored on the mobile device 930 rather than being externally stored. Further still, in an embodiment, because of possible difficulties associated with the saving of state information of an application that is installed on a mobile device, as a default, an application to be analyzed may be fully instrumented with a complete suite of monitors and stimuli functions before its initial installation on the mobile device. In this case, no new instance of the application 908 with additional instrumentation would ever need to be created.
Apart from analyzing an application, the framework discussed above may also be used to modify application software so as to specifically prevent it from behaving improperly. Here, the instrumentation unit 416 discussed above serves to insert additional code into an application so that it is specifically prevented from performing unwanted actions. As just one example, an application may be retrofitted with software that prevents certain, specific sensitive information from being transmitted from the smartphone that the application will be installed on.
The application is then translated by the application translation unit 414 to, e.g., create a higher level object code instance of the application 1002. A representation of the application, such as a control flow graph or other structure that describes the application's states and state transitions is created by the application representation generation unit 415 from the abstracted application instance and submitted to the explorer component 1003.
The explorer component studies the specified unwanted action(s) to be prevented and the code's representation and defines changes to be made to the application's code to remove from the application any ability to perform the unwanted action(s) 1004. For example, if the unwanted action is the sending of certain sensitive information outside the application, the explorer may define all possible “exit points” of information from the application. The explorer may further determine that a data monitoring function is to be embedded in the application that is configured to track the information. The explorer may further determine that additional code needs to be added to the application that will prevent the execution of any exit point if it uses information from a tainted source (e.g., a tainted register location, system memory location and/or non volatile storage location). Alternatively or in combination the explorer may simply remove certain blocks of code from the application in order to remove the unwanted function from the application.
The explorer's determinations are then communicated to the instrumentation unit which instruments the application with code designed to effect the functions mandated by the explorer 1005. The application is then retranslated to its original code level by the re-translator 417 and installed on a mobile device 1006.
The types of unwanted behaviors that can be specified and prevented through the instrumentation process described above are too numerous to detail in full here. However, some basic applications of the above described sequence are discussed immediately below.
In a first embodiment, certain device functions are disabled. For example, the audio function (e.g., the ability of an application to “turn-on” the microphone of a mobile device so it can internally process the audio information near it (such as a conversation)) of a mobile device may be disabled. According to one approach, the explorer determines any states within the application that could cause a command to be sent to the hardware and/or OS to turn on the device's audio function and determines that such states should be modified to remove or otherwise squelch this ability.
In a further embodiment, the disablement of the function is made conditional. For example, the specific unwanted behavior may be that the audio device should be disabled whenever the device is within range of a certain wireless network, out of range of a certain wireless network, whenever the device is within one or more specific GPS location(s) or outside one or more specific GPS location(s). Here, the instrumentation code that disables the audio is written to only take effect if the stated condition is detected. To support this ability, the explorer identifies the parts of the application code that are sensitive to the conditions needed to determine whether to enable/disable the function. Apart from an audio device, a network interface, camera or video device may similarly be disabled as discussed above.
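A sketch of such a conditional guard follows; the zone coordinates and function names are fabricated for the example:

```python
# Sketch: instrumentation that conditionally squelches a device function.
# Here the audio function is disabled inside a (fabricated) GPS region.

RESTRICTED_ZONES = [((37.40, -122.10), 0.05)]  # (lat, lon) center, radius (deg)

def in_restricted_zone(lat, lon):
    return any(abs(lat - clat) <= r and abs(lon - clon) <= r
               for (clat, clon), r in RESTRICTED_ZONES)

def turn_on_microphone(current_lat, current_lon):
    # Guard inserted by the instrumentation unit: the enable command is
    # dropped whenever the stated condition is detected.
    if in_restricted_zone(current_lat, current_lon):
        return "microphone enable squelched (restricted location)"
    return "microphone on"

print(turn_on_microphone(37.41, -122.12))   # inside zone -> squelched
print(turn_on_microphone(40.00, -100.00))   # outside zone -> allowed
```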
In a second embodiment, an application's ability to engage in communication with an external system (e.g., a packet exchange with another computer over a network) is tightly controlled. Here, permissible and/or unwanted actions may be specified such that external communication is permitted only through specific networks, not permitted over specific networks or types of networks (e.g., public networks), permitted only with specific systems (e.g., servers or other computers), or not permitted with specific systems or types of systems.
In a third embodiment, an application's ability to access data, either external to the mobile device or internal to the mobile device, is tightly controlled. For example, network communications/sessions with specific external computing systems may be prevented, and/or access to certain files or system memory regions within the mobile device may be prevented.
FIGS. 11a and 11b show various uses of the framework 100 of
FIG. 11b shows another use case, where the entire framework 1100b is implemented as a cloud service. Here, the user or customer submits an application and any user rules 1107 through interface 1150 at user location 1180b, over network 1170, to the cloud service 1160b. The cloud service 1160b then performs safety screening on the application.
Other usage models of the framework are also possible where various parts of the framework (other than just the machine learning and hardcoded rules portions as in
Other usage models may direct applications for screening to the framework (however it is implemented) as part of their normal download and installation process. For example, a user may choose to download an application from the Internet; before the application is allowed to be downloaded and installed on the user's device, however, it is first routed to the framework, which analyzes it. Here, the application is only permitted to be installed on the device if it is deemed safe by the framework.
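The gating step might reduce to something like the following sketch, in which FrameworkClient and its isSafe() verdict are hypothetical stand-ins for whatever interface the framework actually exposes:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    final class InstallGate {
        // Hypothetical handle to the (local or cloud-hosted) framework.
        interface FrameworkClient {
            boolean isSafe(byte[] applicationPackage);
        }

        // Routes a downloaded application through the framework; installation
        // proceeds only on a "safe" verdict.
        static boolean screenBeforeInstall(Path downloadedApp,
                                           FrameworkClient framework)
                throws IOException {
            byte[] packageBytes = Files.readAllBytes(downloadedApp);
            return framework.isSafe(packageBytes);
        }
    }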
Although the above discussion has been directed to the security analysis of application software on mobile devices, it is pertinent to point out that the above described framework could also be applied to application software for larger systems, such as personal computers (e.g., laptop and desktop systems) and server systems.
The various components of the framework described above can be implemented on any number of computing systems. At one extreme, all of the components of the framework 100 could be implemented on a single computing system (e.g., on a large server system). Alternatively, each of the components of the framework could be implemented on its own respective computer system, apart from the other framework components and their respective computer systems. A single framework component could also be implemented with multiple computer systems, and a single computer system could contain some but not all of the components of the framework. Different combinations of these possibilities may be used to create a single framework. To the extent different computing systems are used to implement the framework, they may be communicatively coupled with one or more networks.
Processes taught by the discussion above may be performed with program code such as machine-executable instructions which cause a machine (such as a "virtual machine", a general-purpose CPU processor disposed on a semiconductor chip or a special-purpose processor disposed on a semiconductor chip) to perform certain functions. Alternatively, these functions may be performed by specific hardware components that contain hardwired logic for performing the functions, or by any combination of programmed computer components and custom hardware components.
A storage medium may be used to store program code. A storage medium that stores program code may be embodied as, but is not limited to, one or more memories (e.g., one or more flash memories, random access memories (static, dynamic or other)), optical disks, CD-ROMs, DVD-ROMs, EPROMs, EEPROMs, magnetic or optical cards, or other types of machine-readable media suitable for storing electronic instructions. Program code may also be downloaded from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a propagation medium (e.g., via a communication link (e.g., a network connection)).
The applicable storage medium may include one or more fixed components (such as non-volatile storage component 1202 (e.g., a hard disk drive, flash drive or non-volatile memory) or system memory 1205) and/or various movable components, such as a CD-ROM 1203, a compact disc, a magnetic tape, etc., operable with removable media drive 1204. In order to execute the program code, instructions of the program code are typically loaded into the Random Access Memory (RAM) system memory 1205; the processing core 1206 then executes the instructions. The processing core 1206 may include one or more CPU processors or CPU processing cores.
It is believed that processes taught by the discussion above can be practiced within various software environments such as, for example, object-oriented and non-object-oriented programming environments, Java-based environments (such as a Java 2 Enterprise Edition (J2EE) environment or environments defined by other releases of the Java standard), or other environments.
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.