Framework for efficient security coverage of mobile software applications

Information

  • Patent Grant
  • Patent Number
    10,296,437
  • Date Filed
    Monday, October 16, 2017
  • Date Issued
    Tuesday, May 21, 2019
Abstract
A method is described that includes receiving an application and generating a representation of the application that describes specific states of the application and specific state transitions of the application. The method further includes identifying a region of interest of the application based on rules and observations of the application's execution. The method further includes determining specific stimuli that will cause one or more state transitions within the application to reach the region of interest. The method further includes enabling one or more monitors within the application's run time environment and applying the stimuli. The method further includes generating monitoring information from the one or more monitors. The method further includes applying rules to the monitoring information to determine a next set of stimuli to be applied to the application in pursuit of determining whether the region of interest corresponds to improperly behaving code.
Description
FIELD OF INVENTION

The field of invention relates generally to computing systems, and, more specifically, to a framework for efficient security coverage of mobile software applications.


BACKGROUND

With the emergence of mobile and/or handheld computing, e.g., as embodied by the prevalence of tablet computers and smart phones, the security of the application software that runs on these devices has become a matter of concern. The concern is becoming particularly acute as more powerful mobile platforms support more capable and important software applications. With increased capability and importance, the applications and underlying platforms handle more sensitive information more intensively.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be more fully understood with reference to the following detailed description in conjunction with the drawings, of which:



FIG. 1 shows a framework for efficiently determining whether an application is safe or unsafe;



FIG. 2 shows a first embodiment of a run-time environment;



FIG. 3 shows a second embodiment of a run-time environment;



FIG. 4 shows an embodiment of a static instrumentation unit;



FIG. 5 shows a third embodiment of a run time environment;



FIG. 6 shows an example of an explorer component's operation;



FIG. 7 shows a methodology for the framework of FIG. 1;



FIG. 8 pertains to a user interface to the framework of FIG. 1;



FIG. 9 pertains to characterizing an application while it is installed on a mobile device;



FIG. 10 pertains to instrumenting applications to effect security policies in the field;



FIGS. 11a and 11b pertain to different use cases of the framework of FIG. 1;



FIG. 12 provides an embodiment of a computer system.





DETAILED DESCRIPTION


FIG. 1 shows a high level depiction of a framework 100 for testing the safety/security of application software including mobile device application software. As observed in FIG. 1, the framework includes: 1) a static instrumentation engine 101; 2) a dynamic run time test and observation environment 102; 3) a central intelligence engine 103 that includes a behavior and logic engine 103_1 and an explorer engine 103_2; 4) a platform-specific knowledge base 104 and “hard coded” rules provided therefrom 128; 5) a machine learning platform 105 and associated rules generated therefrom 106; and, 6) customer or user provided rules 107.


The central intelligence engine 103 controls the testing strategy of an application under test 108 that is executing within the dynamic runtime environment 102. Essentially, the central intelligence engine 103 identifies “regions of interest” within the code of the application 108, determines specific stimuli to reach these regions of interest, causes such stimuli to be applied to the application 108, monitors the behavior of the application 108 in response to these stimuli and determines whether or not the application is “safe” or “unsafe” in view of the application's observed behavior.


As observed in FIG. 1, the central intelligence engine 103 includes a behavior and logic engine (BALE) 103_1 and an explorer engine 103_2. The behavior and logic engine 103_1 observes the application 108 to ultimately reach a conclusion, in view of its observations, whether or not the application 108 is “safe”. The behavior and logic engine 103_1, in response to its observations and in pursuit of its ultimate objective of determining whether or not the application is safe, may identify stimuli to be applied to the application, identify specific monitoring functions that should be enabled to detect certain application behaviors and/or identify possible “regions of interest” that may exist in the application based on its observed behavior and notify the explorer engine 103_2 of such possible regions of interest.


The explorer engine 103_2 assists the behavior and logic engine by “studying” the internal structure and operation of the application 108 looking for and identifying “regions of interest” within the application code (i.e., portions of the code that correspond to unsafe operations as opposed to benign/safe operations). Besides being notified of the possibility that certain regions of interest may exist in the application based on the behavior and logic engine's 103_1 observations of the application's behavior, the explorer engine 103_2 may also look for certain kinds of “regions of interest” based on one or more rules provided by the user, one or more rules gleaned from a machine learning platform and/or one or more “hardcoded” rules. Subsequent applied stimuli and observations of the application 108 are focused on the identified regions of interest. By focusing the stimuli and observations on the regions of interest the overall testing and characterization of the application is more streamlined and efficient because execution of large portions of benign application code is largely avoided.


According to one approach, the behavior and logic engine 103_1 is implemented as an inference engine. As is understood in the art, an inference engine recursively collects information (“facts”) describing the current state of a system under observance and matches them against applicable “rules” for the system. “Acts” are defined for specific patterns/sets of facts that match the applicable rules. The set of available acts for a particular set of matching facts and rules corresponds to an “agenda”. The engine then performs “conflict resolution”, which is the process of determining which acts from the “agenda” should be performed (the “conflict set”) and in what order they should be performed. After conflict resolution, the selected acts are performed on the system in the determined order (here, the conflict set may define a next set of stimuli to be applied to the application). The acts generate a new state of the system being observed, which corresponds to the generation of new facts. The process described above then repeats in a recursive fashion until some conclusion about the system is reached. That is, the engine's recursive conflict resolution actions are generally aimed at reaching some conclusion about the system (in this case, ideally, whether the application software is “safe” or “unsafe”).
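
By way of a non-limiting illustration, the recursion described above might be sketched as follows in Java. The Rule interface, the fact strings and the ordering policy used during conflict resolution are invented placeholders rather than elements of the framework itself:

```java
import java.util.*;

// Hypothetical, simplified sketch of the recursive match/resolve/act cycle.
public class InferenceLoop {

    interface Rule {
        // Returns the acts this rule proposes for the current facts, or an empty list.
        List<String> match(Set<String> facts);
    }

    // Conflict resolution: pick which proposed acts to perform and in what order.
    static List<String> resolveConflicts(List<String> agenda) {
        List<String> conflictSet = new ArrayList<>(new LinkedHashSet<>(agenda));
        conflictSet.sort(String::compareTo);   // placeholder ordering policy
        return conflictSet;
    }

    public static void main(String[] args) {
        Set<String> facts = new HashSet<>(List.of("app.opened_network_connection"));
        List<Rule> rules = List.of(
            f -> f.contains("app.opened_network_connection")
                    ? List.of("enable_data_tracking_monitor") : List.<String>of(),
            f -> f.contains("app.sent_tainted_data")
                    ? List.of("flag_application_unsafe") : List.<String>of()
        );

        // Recurse until a conclusion fact appears (or nothing new can be done).
        for (int i = 0; i < 10 && !facts.contains("conclusion"); i++) {
            List<String> agenda = new ArrayList<>();
            for (Rule r : rules) agenda.addAll(r.match(facts));   // match facts against rules
            List<String> conflictSet = resolveConflicts(agenda);  // choose acts to perform
            if (conflictSet.isEmpty()) break;
            for (String act : conflictSet) {
                // Performing an act generates new facts (recorded here as plain strings).
                facts.add("performed:" + act);
                if (act.equals("flag_application_unsafe")) facts.add("conclusion");
            }
        }
        System.out.println("Facts: " + facts);
    }
}
```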


The explorer engine 103_2 analyzes a control flow graph or other representation of the application's internal structure/operation that defines different states of the application and the stimuli needed to cause a transition from one particular application state to another particular application state (including multiple transitions through multiple states). In an embodiment, the representation of the application that is analyzed by the explorer engine 103_2 is generated by the static instrumentation engine 101 with one or more various software analysis techniques (e.g., control flow analysis, data flow analysis, value set analysis, event analysis, etc.). The explorer engine 103_2, e.g., through reference to various rules that describe appropriate and/or inappropriate code structures and/or code, and/or by way of notification from the behavior and logic engine 103_1 that certain inappropriate code structures and/or code may exist in the application based on its observed behavior, identifies “regions of interest” within the representation of the application. The explorer engine 103_2 then attempts to identify specific stimuli that may be applied to the application to cause it to transition to an identified region of interest.
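
As a loose illustration only, a representation of this kind can be thought of as a map of states to stimulus-labeled transitions, over which a path to a region of interest is searched. The state names, stimulus strings and search strategy below are hypothetical and are not taken from the framework:

```java
import java.util.*;

// Illustrative sketch of an application representation as states plus
// stimulus-labeled transitions; all names are invented.
public class AppRepresentation {
    // transitions.get(fromState).get(stimulus) -> toState
    private final Map<String, Map<String, String>> transitions = new HashMap<>();

    void addTransition(String from, String stimulus, String to) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).put(stimulus, to);
    }

    // Breadth-first search for the sequence of stimuli that drives the
    // application from its current state to a region of interest.
    List<String> stimuliToReach(String current, String regionOfInterest) {
        Map<String, List<String>> paths = new HashMap<>();
        paths.put(current, new ArrayList<>());
        Deque<String> queue = new ArrayDeque<>(List.of(current));
        while (!queue.isEmpty()) {
            String state = queue.poll();
            if (state.equals(regionOfInterest)) return paths.get(state);
            for (var e : transitions.getOrDefault(state, Map.of()).entrySet()) {
                if (!paths.containsKey(e.getValue())) {
                    List<String> p = new ArrayList<>(paths.get(state));
                    p.add(e.getKey());
                    paths.put(e.getValue(), p);
                    queue.add(e.getValue());
                }
            }
        }
        return List.of();   // unreachable with the known stimuli
    }

    public static void main(String[] args) {
        AppRepresentation rep = new AppRepresentation();
        rep.addTransition("S1", "tap_login_button", "S2");
        rep.addTransition("S2", "set_flag_x=1", "S3_region_of_interest");
        System.out.println(rep.stimuliToReach("S1", "S3_region_of_interest"));
    }
}
```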


The corpus of rules available to the behavior and logic and explorer engines 103_1, 103_2 is provided from: i) rules 128 provided from the platform specific knowledge base 104; ii) rules 106 generated from a machine learning platform 105; and, iii) customer/user provided rules 107. The use of these rules is discussed in more detail further below.


The run-time test engine and observation environment 102 includes the application software being observed 108 and an instance 109 of the type of operating system the application is expected to later run on if ultimately deemed safe. In various embodiments, the run-time test environment 102 may include a first virtual machine 110 between the application under test 108 and the operating system instance 109. Here, the application software 108 is typically received as abstract executable code (e.g., Java byte code) or other CPU hardware agnostic code. The first virtual machine 110 converts the application's generic executable code into appropriate instructions for the underlying hardware platform 111. The first virtual machine 110 and application under test 108 can together be referred to as a “process”.


The operating system instance 109 may also run on a second virtual machine 112 that itself runs on a virtual machine monitor (VMM) layer 120 that exists between the second virtual machine 112 and hardware platform 111.



FIG. 2 shows a more detailed embodiment of the run time environment 102. Notably, there are multiple “second” virtual machines 212_1 through 212_N running on the VMM layer 220, where, each second virtual machine 212_1 through 212_N supports its own respective operating system instance 209_1 through 209_N. In turn, each operating system instance 209_1 through 209_N supports its own respective “first” virtual machine 210_1 through 210_N and application instance 208_1 through 208_N under observance.


As is known in the art, a VMM layer 220 is responsible for partitioning/allocating the resources of the underlying hardware platform 211 (e.g., system memory, CPU threads, non volatile storage space, etc.) amongst the various second virtual machines 212_1 through 212_N. Essentially, each of the second virtual machines 212_1 through 212_N attempts to present the image of an entire computing system and its resources to its respective operating system instance 209_1 through 209_N. The VMM layer 220 and its virtual machines 212_1 through 212_N largely hide from the operating system instances 209_1 through 209_N the fact that they are actually sharing a single underlying computing system 211.


The existence of multiple second virtual machines 212_1 through 212_N essentially permits the instantiation of multiple run time test processes 222_1 through 222_N that are isolated from one another. The concurrent existence of multiple, isolated run time test processes 222_1 through 222_N permits different types of coverage and observation sequences to be concurrently run on a single application.


That is, different instances of the same application may be provided in different run time processes so that different types of coverage and observance sequences can be concurrently performed on the same application. Alternatively or in combination, different applications can be concurrently observed in the multiple run-time processes. For instance, a first application may be observed in a first run time test process (e.g., process 222_1) while a second, different application is observed in a second run time test process (e.g., process 222_2). The second different application may be a different version of the first or a different application entirely.


Additionally, instances of different operating system types may support different run time processes. For example, an ANDROID operating system instance may support a first run time process while an iOS operating system instance may support a second run time process. Alternatively, different versions of the same operating system instance may concurrently execute on a virtual machine monitor layer to support two different run time processes. Concurrent testing of multiple application instances (whether different instances of the same application or respective instances of different applications or different versions of a same application or some combination thereof) enhances the overall performance of the system.


The central intelligence engine 103, returning to FIG. 1, is therefore capable of concurrently analyzing multiple application instances. This includes maintaining parallel inference engine recursions and internal analyses of the multiple application instances. In cases where multiple instances of a same application are being concurrently tested, communications/information sharing may exist amongst the testing schemes of two or more of the application instances to help formulate a single global “multi-threaded” testing strategy for the application. For simplicity, the remainder of this description largely refers to a single run time test process.



FIG. 3 shows a more detailed view of a single run time environment 302. In an embodiment, each run time process 222_1 through 222_N of FIG. 2 and associated operating system instances 209_1 through 209_N and second virtual machines 212_1 through 212_N may include the different components illustrated in FIG. 3. As observed in FIG. 3, and as discussed above, the run time environment 302 includes an instance of an application being observed 308, a first virtual machine 310, an operating system instance 309, and a second virtual machine 312. For simplicity, the VMM layer and hardware platform are not shown.


Here, each of the first virtual machine 310, the operating system instance 309 and the second virtual machine 312 is retrofitted with various monitoring functions 313_1 through 313_M that the central intelligence engine 103 is able to enable/disable. For example, the central intelligence function 103 may enable certain monitoring functions (e.g., monitoring functions 313_1 and 313_3) while disabling the remaining monitoring functions (e.g., functions 313_2 and 313_4 (not shown) through 313_M). In an embodiment, the monitoring functions at least include: i) a system calls monitoring function 313_1; ii) a data tracking monitoring function 313_2; and, iii) a device operation monitoring function 313_3.


As observed in FIG. 3, the system calls monitoring function 313_1 is embedded in the operating system instance 309, the data tracking monitoring function 313_2 is embedded in the first virtual machine 310 and the device operation monitoring function 313_3 is embedded in the second virtual machine 312. Notably, these monitoring function locations are exemplary as they could likewise be located in different locations. For example, the data tracking monitoring function 313_2 could be located in the second virtual machine 312 and the system call monitoring function 313_1 could be located in the first virtual machine 310.


The system calls monitoring function 313_1 monitors the run time execution of the application's executable code and flags any system calls. Here, a system call is essentially any invocation 315 of the underlying operating system instance 309 made by the application under test 308 or its virtual machine 310. As is understood in the art, an operating system provides services for basic uses of the hardware platform. An application's request to use such a service corresponds to a system call. The types of system calls an application can make typically include process control system calls (e.g., load, execute, create process, terminate process, wait (e.g., for an event), allocate or free a system memory range), file management system calls (e.g., create/delete file, open/close file, get/set file attributes), information maintenance system calls (e.g., get/set time or date) and I/O system calls such as communication system calls (e.g., create/delete network connection, send/receive messages, attach/detach remote devices) and user interface operating system (OS) calls.


In order to flag any system calls made by the application 308 or virtual machine 310, in an embodiment, monitoring function 313_1 detects a system call (such as any of, or a masked subset of, the system calls mentioned above) and reports the event to the central intelligence engine 103 along with any parameters associated with the call. For example, if an application seeks to open a network connection to a particular network address, the system call monitoring function 313_1 will report both the request to open the connection and the network address to the central intelligence engine 103. The monitoring function may intercept system calls by “hooking” the system calls to capture the passed parameters.
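
The following sketch illustrates, in a greatly simplified form, how such a “hook” might wrap an operating system service so that the call and its parameter are reported before the real service is invoked. The OperatingSystem interface, Reporter class and call names are invented stand-ins, not a real mobile platform API:

```java
// Conceptual sketch of hooking a system call wrapper; all types are hypothetical.
public class SyscallHookSketch {

    interface OperatingSystem {
        int openNetworkConnection(String address);
    }

    static class Reporter {
        void report(String event, String detail) {
            System.out.println("[monitor 313_1] " + event + " -> " + detail);
        }
    }

    // Hooked wrapper: records the call and its parameter, then forwards it.
    static class HookedOs implements OperatingSystem {
        private final OperatingSystem real;
        private final Reporter reporter;

        HookedOs(OperatingSystem real, Reporter reporter) {
            this.real = real;
            this.reporter = reporter;
        }

        public int openNetworkConnection(String address) {
            reporter.report("open_network_connection", address); // report to central intelligence
            return real.openNetworkConnection(address);          // then perform the real call
        }
    }

    public static void main(String[] args) {
        OperatingSystem os = new HookedOs(addr -> 42, new Reporter());
        os.openNetworkConnection("203.0.113.7:8080");
    }
}
```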


The data tracking monitoring function 313_2 tracks specific items of data within the application 308. As is understood by those of ordinary skill, data is usually identified by the memory location and/or register location where it is stored. The executable code of an application 308 specifically calls out and identifies these locations. Frequently, data will move from one memory/register location to another. The data tracking monitoring function 313_2 tracks the movement of a specific item of data and reports any suspicious activity to the central intelligence engine 103. More specifically, in an embodiment, the data tracking monitoring function 313_2 is provided with the identity of a specific “sensitive” (e.g., highly confidential) data item, and, reports to the central intelligence engine any attempt by the application to cause the data to be directed out of the run time environment (such as attempting to send the data over a network connection), or, stored in a file or other storage (e.g., register and/or memory) location other than an approved location.


In an embodiment, the data tracking monitoring function 313_2 maintains internal tables having an entry for each register and system memory address referred to by the application code. Each entry also identifies whether its corresponding register/memory address is “tainted”. The data tracking monitoring function 313_2 marks as tainted any register/memory location where the sensitive information is kept. Additionally, the data tracking monitoring function 313_2 marks as tainted any register or memory location to which a tainted register/memory location's content is moved. The data tracking monitoring function 313_2 will also clear a tainted register/memory location (i.e., mark it as no longer tainted) if it is overwritten with the contents of a non-tainted register/memory location or is otherwise erased (e.g., by being reset/cleared to all zeroes).
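
A minimal sketch of such a taint table is shown below; the location labels and method names are illustrative assumptions rather than the actual implementation of monitoring function 313_2:

```java
import java.util.*;

// Simplified sketch of the taint table described above: locations holding the
// sensitive item are marked tainted, moves propagate the mark, overwrites with
// clean data clear it.
public class TaintTracker {
    private final Set<String> tainted = new HashSet<>();

    void markSensitive(String location) { tainted.add(location); }

    boolean isTainted(String location)  { return tainted.contains(location); }

    // Data at 'src' is copied to 'dst'.
    void onMove(String src, String dst) {
        if (tainted.contains(src)) tainted.add(dst);
        else tainted.remove(dst);               // overwritten with non-tainted content
    }

    void onErase(String location) { tainted.remove(location); }

    public static void main(String[] args) {
        TaintTracker t = new TaintTracker();
        t.markSensitive("mem:0x1000");              // sensitive item stored here
        t.onMove("mem:0x1000", "reg:r3");           // taint propagates to the register
        t.onMove("reg:r0", "mem:0x1000");           // original location overwritten with clean data
        System.out.println(t.isTainted("reg:r3"));      // true
        System.out.println(t.isTainted("mem:0x1000"));  // false
    }
}
```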


By so doing, all locations where the sensitive information resides are known. Any attempt by the application 308 to direct data from a tainted location outside the run time environment 302 or to an “unapproved” register, memory or file location is reported to the central intelligence engine 103. The report includes pertinent ancillary information associated with the attempt (such as the network address to where a data transmission was attempted, or the unapproved file location where an attempted store was made). In the case of unapproved network destinations and/or storage locations, the data tracking monitoring function 313_2 is informed beforehand of at least one of the data item's approved or unapproved data destinations/locations by the central intelligence function 103. In many cases, the sensitive information is identified to the central intelligence engine 103 by way of the user provided rules 107.


The device operation monitoring function 313_3 monitors calls 316 made by the application 308 or virtual machine 310 to the underlying hardware platform directly (i.e., not through an OS system call). Here, a “device” is generally understood to be any attachment or peripheral (attachments/peripherals are typically coupled to a hardware system's I/O control hub (ICH) or system memory control hub (MCH)). Attachments/peripherals typically include non volatile storage devices (e.g., disk drives, SSD devices), network interfaces (e.g., SMS functions, HTTP functions), keyboards, displays and mouse/touchpad/control stick devices, integrated camera devices, integrated audio devices (both input (e.g., microphone) and output (e.g., speaker system)) and printers, among other possible devices. In the context of the device monitoring function 313_3, however, the term “device” is understood to be broader than just peripherals. For example, if an application attempts to directly write to control register space (such as model specific register space) of a CPU core or a memory controller within the hardware platform, the device operation monitoring function 313_3 will track these operations as well.


Here, depending on system implementation, various devices within the underlying hardware may be manipulated by the application 308 or virtual machine 310 through direct communication to the underlying hardware without involvement of the operating system (e.g., by writing to the underlying platform's register space). These operations are tracked by the device operation monitoring function 313_3. By contrast, the application's behavior with respect to those devices or functions called thereon that are not directly communicated to the hardware are typically manipulated through the operating system 309. These calls are therefore tracked with the system call monitoring function 313_1.


When a call is made to a device directly through the hardware, the device operation monitoring function 313_3 reports the call to the central intelligence engine 103 identifying both the targeted device and the type of call that was made.


As mentioned above, in one approach, a monitoring function will not monitor and/or report out an event it is designed to detect unless it is specifically enabled (e.g., by the central intelligence engine 103) beforehand.


In many cases the application instance 308 is a mobile application that is effected with abstract executable code (e.g., Java bytecode) that needs to be converted into the object code of a particular type of CPU by the first virtual machine 310. In cases where the application instance 308 is provided as object code that is already targeted for a specific CPU type (i.e., the first virtual machine 310 is not needed) the monitoring functions 313_1 to 313_M may nevertheless be integrated into the run time environment so as to observe the interface between the application 308 and the operating system instance 309. For example, as stated earlier, the data tracking monitoring function 313_2 can be integrated into the second virtual machine 312 instead.


Along with the monitoring functions 313_1 through 313_M, stimuli functions 314_1 through 314_P are also integrated into the run time environment 302. Whereas the monitoring functions 313_1 through 313_M are designed to report observed behaviors of the application 308 to the central intelligence engine 103, by contrast, the stimuli functions 314_1 through 314_P are designed to apply specific input values and/or signals to the application 308 (e.g., to drive the application's execution to a region of interest, to observe the application's behavioral response to these inputs, etc.). The specific input values and/or signals to be applied are provided by the central intelligence engine 103.


As observed in FIG. 3, the stimuli functions include: i) a data value stimuli function 314_1; ii) an OS event/state stimuli function 314_2; and, iii) a hardware event/state stimuli function 314_3.


The data value stimuli function 314_1 is able to set specific low level data values of the application's code. The data value may be specific data that is processed by the application or control data used to control the application. For example, the data value stimuli function 314_1 may be used to set an instruction pointer to a specific value to begin or jump application execution to a specific point in the application's code. Likewise, the data value stimuli function 314_1 may be used to create/change/delete any data value within the register or system memory space that is processed by the application 308. This capability may be used, for instance, to change the state of the application 308 to any particular state so the application's behavior in response to the artificially set state can be observed.


The OS event/state stimuli function 314_2 is used to create any event that the OS might report to the application 308 (e.g., incoming call, incoming packet, etc.) or present any OS state that is observable to the application 308 (e.g., such as the state of various devices within the system). Here, the OS event/state stimuli 314_2 is essentially used to manipulate the OS portion of the application's environment. Likewise, the hardware event/state stimuli function 314_3 is used to create any event that the hardware might report to the application 308 (e.g., an incoming call for an SMS device that does not communicate to the application through the OS, etc.) or present any state of the hardware observable to the application 308 (e.g., such as the state of various control registers within the system). Here, the hardware event/state stimuli 314_3 is essentially used to manipulate the hardware portion of the application's environment.


Whereas the run time environment has standard monitoring and stimuli functions embedded in the software platform beneath the application under test, the static instrumentation engine 101, returning to FIG. 1, by contrast, modifies the application code itself to include within itself special monitoring functions and/or special stimuli functions. The monitoring functions report their results to the central intelligence engine 103 and the stimuli functions are told what stimuli to generate by the central intelligence engine 103. As such, the static instrumentation engine 101 adds additional monitoring and stimuli functions to the application apart from the monitoring/stimuli functions that are embedded in the application's underlying platform.


Notably, mobile applications written for ANDROID of Google, Inc. as well as applications written in Java (whether mobile or desktop) conform very well to the framework outlined in FIG. 3 because they are normally executed with a first virtual machine 310 (e.g., a DALVIK virtual machine for ANDROID, a Java Virtual Machine for Java). Applications written for iOS from Apple, Inc. include an operative layer between the application and operating system (the Objective-C runtime in iOS) that dynamically converts abstract code to a lower level code and therefore can be regarded as a first virtual machine for purposes of the present discussion.


Other applications, e.g., for other systems, may not normally use an available first virtual machine. In one approach, applications that normally use an available first virtual machine are stimulated/monitored in the dynamic runtime environment with one or more functions (e.g., functions 313_1 and 313_M−2 among others) being embedded in the first virtual machine level 310, whereas applications that are not normally written to run on an available first virtual machine level (e.g., an application that has been compiled to run on its underlying hardware CPU) may have these monitoring functions embedded in the underlying OS instance 309 or second virtual machine level 312 of the run time environment 302. Alternatively, one or more of these stimulation/monitoring functions may be statically added to the applications themselves by the static instrumentation engine 101 of FIG. 1. In the case of applications received in an encrypted form (e.g., at least some iOS applications), static instrumentation may not be an option.



FIG. 4 shows an embodiment 401 of the static instrumentation engine 101 of FIG. 1. As observed in FIG. 4, the static instrumentation engine 401 includes an application translator 414, an application representation generation unit 415, an instrumentation unit 416 and an application re-translator 417.


According to one embodiment of the process flow of the static instrumentation engine 401, an application to be observed 408 is provided to the translator 414 in a first low level form (e.g., DALVIK .dex executable code). The translator 414 translates the executable/object code up to a higher, more abstract code level (e.g., in the case of .dex, a .dex application is translated up to a RISC-like version of Java byte code which contemplates fewer instructions in the instruction set architecture than pure Java byte code). The higher level code representation of the application is then provided to the application representation generation unit 415, which studies the application's internal structures, code flows, etc. to generate a representation of the application, such as a control flow graph, that defines specific states of the application and various stimuli needed to cause a transition from one application state to another application state. The representation of the application is then provided to the explorer component 103_2 of the central intelligence function 103.
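
Purely as an illustration of the kind of analysis the representation generation unit performs, the sketch below splits an invented, abstracted instruction sequence into basic blocks at control transfer boundaries. The instruction mnemonics are made up for the example and do not correspond to DALVIK or Java byte code:

```java
import java.util.*;

// Rough sketch of splitting a linear sequence of (abstracted) instructions into
// basic blocks at branch boundaries; the instruction format is invented.
public class BasicBlockSplitter {

    public static void main(String[] args) {
        List<String> code = List.of(
            "0: load r1, secret",
            "1: cmp r1, 0",
            "2: branch_if_zero 5",     // ends a basic block
            "3: call send_over_network",
            "4: jump 6",               // ends a basic block
            "5: call show_ui",
            "6: return"
        );

        List<List<String>> blocks = new ArrayList<>();
        List<String> current = new ArrayList<>();
        for (String insn : code) {
            current.add(insn);
            if (insn.contains("branch") || insn.contains("jump") || insn.contains("return")) {
                blocks.add(current);          // block ends with a control transfer
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) blocks.add(current);

        for (int i = 0; i < blocks.size(); i++)
            System.out.println("basic block " + i + ": " + blocks.get(i));
    }
}
```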


The explorer portion 103_2 of the central intelligence function 103 analyzes the application representation to identify what parts of the application may correspond to improperly behaving code (a “region of interest” within the code), and, what set of stimuli are needed to reach that code and activate it. Identification of a region of interest may be based on any of the user provided rules, machine learned rules, hard-coded rules or observations made by the behavior and logic engine 103_1 that are reported to the explorer 103_2. In an embodiment, one or more of the identities of the types of regions of interest found in the application, the types of stimuli needed to reach such code, and the types of stimuli that might activate it are shared with the behavior and logic engine 103_1. The behavior and logic engine 103_1 utilizes this information to establish a next set of acts/stimuli to be performed on the application (e.g., a next “conflict set”) and establish, e.g., at least partially, specific behaviors of the application to be monitored.


As part of the definition of the next set of stimuli to be generated and/or next set of behaviors to be monitored, certain ones of the run time environment monitoring and/or stimuli functions 313_1 to 313_M, 314_1 to 314_P of FIG. 3 may be enabled or disabled. Additionally, certain already existing monitoring and/or stimuli functions within the application itself may be enabled or disabled. Further still, certain monitoring and/or stimuli functions that do not exist in the application but should exist are also identified. In the latter situation, the application needs to be instrumented with additional code that corresponds to the desired monitoring and/or stimuli functions. Notably, not all recursions of the behavior and logic engine 103_1 will necessarily require fresh instrumentation of the application. In those cases where instrumentation of the application is deemed appropriate, however, the central intelligence component 103 communicates which specific monitoring and/or stimuli functions need to be instrumented in the application.


In response, the instrumentation unit 416 instruments the abstracted/translated version of the application's code with the desired monitoring and/or stimuli functions. In cases where the application has already been instrumented with other static monitoring/stimuli functions, in an embodiment, the application's state within the run time environment 102 (e.g., specific data values) is externally saved outside the application and the application is returned to the static instrumentation engine 401. The static instrumentation engine 401 retranslates the application with the translator unit 414 and then instruments it with the new monitoring/stimuli functions with the instrumentation unit 416. The retranslator 417 retranslates the newly instrumented code to a lower level of code and provides it to the run time environment 102. The previously saved application state information is then reloaded into the application.


In one embodiment, a new application that has not yet entered the run time environment is instrumented with default static monitoring/stimuli functions. In this case, the new application is translated with translator 414; a representation of the new application is generated with representation generation unit 415 and presented to the explorer engine; the explorer engine 103_2 identifies where the default static monitoring/stimuli functions should be placed in the translated application's code structure and communicates these locations to the instrumentation unit 416; the instrumentation unit 416 instruments the translated application at the identified location(s); the re-translation unit 417 retranslates the statically instrumented application to a lower level code; and the lower level code instance of the instrumented application is sent to the run time environment.


In an embodiment, the application instrumentation unit 416 can embed any of a system calls monitoring function, a data tracking monitoring function and a device operation monitoring function (as discussed above with respect to FIG. 3) into the program code of an application. Again, this approach may be particularly useful if the application does not normally run on a first virtual machine level. Likewise, the application instrumentation unit 416 can embed any of a data value stimulus function, an OS event/state stimulus function and a hardware event/state stimulus function (also as described above with respect to FIG. 3) into the application instead of relying on such functions existing within the software platform beneath the application within the run time environment.


Moreover, in an embodiment, the application instrumentation unit 416 can implement additional types of monitoring and stimulation in an application. The additional types of monitoring include: i) dynamic load monitoring; and, ii) application API call/event monitoring. The additional stimuli function is application API call stimulation.


In the case of dynamic load monitoring, the application is modified to track the effects of any code that the application dynamically loads. Here, as is understood in the art, an application may not initially include all of the code that it could execute. Instead, the application includes references to network and/or file locations containing additional code that the application will “load” under certain circumstances (such as when the application needs to execute it). An application typically executes the code it dynamically loads.


In the case of dynamic load monitoring, the explorer engine 103_2 of the central intelligence unit 103 analyzes the representation of the application's internal structures/flows looking for program code constructs that correspond to dynamic code loading. In a typical circumstance, the application refers to dynamically loaded code with a character string. As such, simplistically, the explorer unit 103_2 looks for a character string associated with a dynamic load operation and causes the application instrumentation unit 416 to add monitoring code into the application that will detect if the string is invoked as a dynamic load reference as well as monitor the behavior of any code that is dynamically loaded from the string and subsequently executed.
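
The following sketch gives a simplified picture of that search: string constants are collected and a dynamic load construct is flagged as a site to instrument. The instruction syntax and the notion of a “dynamic_load” operation are invented for the example:

```java
import java.util.*;

// Illustrative sketch of scanning an abstracted instruction listing for character
// strings that feed a dynamic load construct; the syntax is invented.
public class DynamicLoadScan {
    public static void main(String[] args) {
        List<String> code = List.of(
            "const_string r2, \"http://example.invalid/payload.bin\"",
            "invoke dynamic_load, r2",
            "invoke execute_loaded_code"
        );

        Set<String> strings = new HashSet<>();
        for (String insn : code) {
            if (insn.startsWith("const_string")) {
                // Collect candidate load references (the quoted string literal).
                strings.add(insn.substring(insn.indexOf('"') + 1, insn.lastIndexOf('"')));
            }
            if (insn.contains("dynamic_load")) {
                // Flag the location: instrumentation code would be inserted here to
                // detect the load at run time and monitor whatever code it brings in.
                System.out.println("instrument dynamic load site; candidate sources: " + strings);
            }
        }
    }
}
```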


The instrumented monitoring code is also configured to report pertinent observations to the central intelligence function 103. Such observations include whether code has been dynamically loaded; where dynamically loaded code was loaded from; whether dynamically loaded code is being (or has been) executed; and various behaviors of the executing code. The reported behaviors can include any of the behaviors described above with respect to the system call, data tracking and device monitoring functions (whether tracked within the application or beneath it).


In the case of application API call/event monitoring, the instrumentation code that is inserted into the application monitors calls made to the application during runtime and/or events or other “output” generated from the API. Here, as is known in the art, an application is “used” by making a call to the application's application programming interface (API) (e.g., by a user acting through a graphical user interface (GUI)). The API call/event monitoring function detects such calls/events/output and reports them to the central intelligence function 103. Here, the application itself may contain improperly behaving code that artificially invokes the application's API.


For example, the improperly behaving code may artificially generate application API related actions to cause the application to believe a user is invoking the application for a specific use. The application API call monitoring function would detect any calls made to the API and report them. Knowing what precise user inputs were actually generated, if any, the central intelligence unit 103 could determine that the API calls are malicious.


The application API stimulation function provides stimuli to the application through its API. Here, the central intelligence function can ask the application to perform certain tasks it was designed to perform. By logging the stimuli applied to the application by way of the application API stimulation function and comparing these stimuli to reports received from the application API tracking function, the central intelligence unit 103 will be able to detect any API invocations made by malicious code. That is, any detected API call that was not purposefully stimulated by the API stimulation function may be the act of malicious code.
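
A simple illustration of that comparison is sketched below; the call strings and the idea of exact string matching between stimulated and observed calls are assumptions made only for the example:

```java
import java.util.*;

// Sketch of the correlation described above: any observed API call that the
// framework did not itself stimulate is a candidate for malicious, self-invoked
// behavior. All names and values are hypothetical.
public class ApiCallCorrelation {
    public static void main(String[] args) {
        // Calls purposefully generated via the API stimulation function.
        Set<String> stimulated = Set.of("openDocument(report.pdf)");

        // Calls observed by the API call/event monitoring function at run time.
        List<String> observed = List.of(
            "openDocument(report.pdf)",
            "exportContacts(all)"          // never stimulated by the framework
        );

        for (String call : observed) {
            if (!stimulated.contains(call)) {
                System.out.println("suspicious self-invoked API call: " + call);
            }
        }
    }
}
```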



FIG. 5 shows another embodiment of the run time environment 502 enhanced to include monitoring functions 525_1 to 525_R and stimuli functions 526_1 through 526_S within the application 508. Here, as observed in FIG. 5, it is not uncommon to execute within the runtime environment an application 508 that has been instrumented by the static instrumentation engine 101 with one or more monitoring functions 525_1 to 525_R and/or stimuli functions 526_1 through 526_S, where the application is running in a run time environment that itself has been enhanced to include monitoring functions 513_1 to 513_N and stimulating functions 514_1 to 514_M. FIG. 5 shows multiple monitoring and stimuli functions at each level (application, first virtual machine, OS instance and second virtual machine). Although such an arrangement is possible, in various inference engine recursions, one or no monitoring and/or stimuli function may be present at any level.



FIG. 6 provides a more detailed explanation of the operation of the explorer component 603_2 of the central intelligence function 103. As explained above, the explorer function receives a representation 630 of the application that describes the application's different states 631_1 to 631_28 and various input stimuli needed to transition the application from one state to another state. Each state typically has an associated “basic block” of the application's instructions that ends with a branch of some kind (e.g., conditional branches, unconditional jumps, etc.). Thus, the stimuli needed to transition from a first state to a second state corresponds to the specific data values processed by the first state's basic block that will cause a program branch to the second state. In reality, depending on implementation, the representation 630 as received from the application representation unit may not contain a description of all or even any stimuli needed to cause a particular state transition.


The explorer component therefore: i) identifies sections of the application's code that are “of interest”; ii) identifies paths through the application's code that can be used to reach a particular region of interest within the code; and, iii) identifies input stimuli that may be necessary to trigger one or more state transitions of the application along any such paths to the identified code regions of interest.


In performing these tasks, the explorer is provided with monitored information from one or more of the above described monitor functions within the run time environment. The reports from the monitoring functions permit the explorer to identify the application's current state. For example, based on the reported monitor information, the explorer may determine that the application is currently within state 631_3. Notably, in order to receive this monitored information the explorer may have previously requested (e.g., for a previous inference engine recursion) that certain monitors be enabled and/or that certain previously non-existent static monitors be embedded in the application. Further still, the explorer may have requested such a particular set of monitors because the explorer could not identify the application's state and needed to add the additional monitoring capability to determine it.


With the application's current state eventually recognized at state 631_3, the explorer is next able to identify a section of the application's code as being “of interest”. In the present example, assume the explorer identifies code region 632 as being “of interest.” Here, the ability to identify a section of code as being of interest may be derived from any of the aforementioned rules. For example, the aforementioned user provided rules 107 may identify an item of data as being particularly sensitive. In this case, the explorer might recognize that basic blocks of code region 632 are written to process or otherwise use this item of data. As another example, which may work in combination with the aforementioned example, the aforementioned machine learning rules 106 and/or platform specific rules 128 may identify a specific combination of states and associated basic blocks that correspond to the operation of improperly behaving code. Additionally or in the alternative, the behavior and logic engine 103_1 may determine, based on its observations of the application, that it may contain certain types of improperly performing code and notify the explorer component of these determinations. In response the explorer engine can look for corresponding region(s) of interest. That is, the explorer component can look for code structure(s)/profile(s) that correspond to the type(s) of improper code identified by the behavior and logic engine 103_1.


With a region of interest 632 having been identified, the explorer next begins the process of determining a path 633 through the code from the current state 631_3 to the region of interest 632. The exemplary path 633 of FIG. 6 essentially corresponds to the proper sequence of state transitions from state 631_3 to state 631_24. As such, the explorer attempts to understand and then cause the creation of the set of stimuli that will help cause this sequence of state transitions to transpire.


According to one approach, referred to as symbolic execution, the explorer reduces each basic block of each state to one or more logical expressions and their associated variables 640. Here, ultimately, each of the instructions of a basic block can be expressed as a logical axiom of some kind. The logical axioms of the basic block's instructions can be combined to form one or more logical expressions that express the processing of the basic block as a function of the data values (now expressed as variables) that are processed. The expression(s) are presented to a solver 641 which determines whether a “solution” exists to the expression(s) and, if so, what the constraints are. Here, typically, the constraints correspond to limited ranges of the variables/data values that are processed by the basic block's instructions.


Thus, at this point, the explorer has reduced the application's data values to specific limited combinations thereof that have the potential to cause the application to transition to a desired state. In an embodiment, the explorer causes these solutions to be crafted as appropriate inputs for the input stimuli functions embedded in the run time environment 642. Conceivably, certain input stimuli functions will need to be enabled or instrumented into the application. Eventually, e.g., through a limited trial-and-error approach, the specific set of variables that leads to the correct state transition is realized. Repeating the process for each state eventually leads program execution to the region of interest 632.
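
As a toy illustration of the symbolic execution idea, the sketch below reduces a single hypothetical branch condition to a predicate over one variable and “solves” it by bounded enumeration; a production system would hand the extracted expression to a genuine constraint solver rather than brute force:

```java
import java.util.function.IntPredicate;

// Toy illustration: the basic block "branch to the region of interest if
// (x * 3 + 7 > 100)" is reduced to a predicate over the variable x, and a
// brute-force search stands in for a real solver.
public class SymbolicBranchSketch {
    public static void main(String[] args) {
        // Logical expression extracted from the basic block's instructions (invented).
        IntPredicate takesBranchToRegionOfInterest = x -> x * 3 + 7 > 100;

        // "Solve" for a concrete stimulus value within a bounded range.
        for (int x = 0; x <= 255; x++) {
            if (takesBranchToRegionOfInterest.test(x)) {
                System.out.println("apply stimulus x = " + x + " to reach the next state");
                break;   // first satisfying value is enough for this sketch
            }
        }
    }
}
```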


Through these kinds of processes the explorer is able to efficiently direct program execution to regions of interest.


Upon reaching a region of interest, the monitors within the runtime environment are set to observe the kinds of behaviors that will help determine whether the region of interest corresponds to improperly behaving code or not. Here, the behavior and logic engine 103_1 receives the reported information from the monitors and can begin the process of determining whether the region of interest corresponds to improper behavior.


Thus, in this fashion, the explorer 103_2 is able to efficiently bring the application to various regions of interest and the behavior and logic engine 103_1 can determine whether the regions of interest correspond to improperly behaving code. Here, thorough examination of the application can be achieved by repeatedly using the explorer 103_2 to bring the application to a “next” region of interest and the behavior and logic engine 103_1 to characterize the next region of interest. That is, the overall behavior of the central intelligence 103 can be somewhat recursive in nature where the explorer engine 103_2 repeatedly identifies respective regions of interest and what is needed to bring the code's execution to the region of interest. The explorer engine 103_2 and/or behavior and logic engine 103_1 then instrument and/or enable appropriate monitors and bring the application's execution state to the region of interest. The behavior and logic engine then receives the monitoring data and executes a series of inference engine recursions to reach a conclusion about the region of interest and/or application. The overall process then repeats with the explorer engine 103_2 identifying a next region of interest. Throughout the process the explorer engine may also receive reported information from various monitors so it can determine/confirm the present state of the application.


Notably, in an embodiment, comprehending the application's state includes the explorer engine 103_2 maintaining the state of the application's GUI so it can determine how the GUI triggers certain application acts to be performed (e.g., the application representation utilized by the explorer engine 103_2 provides information that links specific GUI feature activations to specific processes performed by the application). With this information the explorer engine 103_2 can set input conditions that effectively “use” the GUI to bring the application's state to (or at least closer to) a desired region of interest within the application. Additionally, the explorer engine, e.g., with reference to applicable rules and behavior and logic engine notifications, detects the presence of possible improperly behaving code. Here, certain types of improperly behaving code will attempt to trigger processes of an application by “pretending” to be a user that is using the application through the GUI. That is, improperly behaving code within the application (or external code that is in communication with it) will attempt to cause certain application actions by accessing various GUI triggers.


Apart from just the GUI, more generally, the explorer engine, e.g., by reference to particular rules, may also identify improper “low-level” application behavior (such as any improper state transition). This detected behavior can likewise be reported to the behavior and logic engine 103_1 which incorporates this information into a following inference engine recursion.



FIG. 7 shows an embodiment of a methodology for the overall framework of FIG. 1. As observed in FIG. 7, an application's code is received and instantiated in a run time environment for execution 701. The application's code is also translated to a higher level code 702. The higher level code is analyzed for purposes of generating a representation of the application that describes states of the application and state transitions of the application 703. The representation of the application is provided to the explorer engine 704. Monitoring functions within the run time environment (which may include monitoring functions instrumented within the application itself), report information to the behavior and logic engine and/or explorer engine as the application is executing 705. The behavior and logic engine uses the reported information to characterize the application's behavior and the explorer engine uses the reported information to track the state of the application 706.


The explorer function also analyzes the application representation and, based on characterization information from the behavior and logic engine and/or one or more hard coded rules, machine learned rules and/or user provided rules, identifies a region of interest within the application 707. The explorer engine determines stimuli that can be applied to the application to drive its execution to the region of interest 708. Based on the identified region of interest and/or the determined stimuli, the explorer and/or behavior and logic engine determine what monitoring and stimuli functions (and associated stimuli) should be enabled 709. This may optionally include instrumenting the application itself with additional monitoring and/or stimuli functions 710.


The determined stimuli are applied and the enabled monitoring functions report respective monitoring information 711. The behavior and logic engine uses the reported information to characterize the application's behavior and the explorer engine uses the reported information to track the state of the application 712. New stimuli and/or monitoring functions may be determined (which may require additional instrumentation of the application itself) that are enabled and/or otherwise applied 713. The process repeats until the region of interest is reached and characterized as safe or unsafe 714. Upon the region of interest having been characterized as safe or unsafe, the explorer function re-analyzes the representation 707 to determine a next region of interest. When all identified regions of interest have been characterized, the coverage analysis of the application is complete.
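
A pseudocode-style sketch of this loop is given below; every method body is a hypothetical placeholder for the components described above, and the region names are invented:

```java
// High-level sketch of the FIG. 7 loop; all methods are illustrative placeholders.
public class CoverageLoopSketch {

    static String[] regionsOfInterest = {"crypto_blob_handler", "contacts_exfil_path"};

    public static void main(String[] args) {
        for (String region : regionsOfInterest) {               // 707: identify region of interest
            String[] stimuli = determineStimuli(region);        // 708: stimuli to reach it
            enableMonitors(region);                             // 709/710: enable or instrument monitors
            boolean characterized = false;
            while (!characterized) {                            // 711-714: apply, observe, repeat
                applyStimuli(stimuli);
                String observations = collectMonitorReports();
                characterized = characterize(region, observations);
                stimuli = nextStimuli(observations);
            }
        }
        System.out.println("coverage analysis complete");
    }

    static String[] determineStimuli(String region)  { return new String[]{"seed_input"}; }
    static void enableMonitors(String region)         { /* enable 313_x, 314_x or instrument the app */ }
    static void applyStimuli(String[] stimuli)        { /* drive the stimuli functions */ }
    static String collectMonitorReports()             { return "no_suspicious_activity"; }
    static boolean characterize(String r, String obs) { return true; }  // placeholder conclusion
    static String[] nextStimuli(String obs)           { return new String[]{"next_input"}; }
}
```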


Although not shown in FIG. 7 (for simplicity), in an embodiment, the framework is also able to “back out” of its current analysis of a specific region of interest to redefine region(s) of interest and/or jump to another region of interest (e.g., because a newly suspected region of interest takes priority over the current region of interest).


Referring back to FIG. 1, along with or apart from the customer provided rules 107, the behavior and logic engine 103_1 and/or explorer engine 103_2 may refer to hardcoded rules 128 and/or machine learned rules 106.


Hardcoded rules 128 typically provide generic or well known/public rules and/or rules that have been written manually. For example, certain viruses and other forms of mis-behaving code have signatures or other behaviors/features that are widely known, and rules to address them can be scripted by humans. For example, rules that encapsulate the signature or behavior of a well known “trojan horse” virus may be hand written and added to database 104. Here, for example, database 104 is a store that keeps rules for all known forms of mis-behaving code and/or handwritten rules. Upon bring-up of the framework 100, these rules 128 are made accessible to the framework. Typically, the hard coded rules 128 are not provided by the user but are instead largely created or otherwise accessed by a software security entity (e.g., a corporation that provides software security products) that provides the framework of FIG. 1.


In a further embodiment, database 104 also provides platform specific information to the monitoring functions and/or stimuli functions which are themselves generically written. For example, in an embodiment, the OS monitoring function is originally written around a set of generic OS calls (e.g., save file, read file, etc.). These generic calls, however, have specific forms in a particular environment/platform (e.g., an iOS “save file” call has a certain syntax that is different than the syntax of an ANDROID “save file” call). Database 104 therefore additionally provides platform specific information for the generic monitoring functions so they can detect events within a particular environment/platform. Similarly, database 104 additionally provides platform specific information for generic stimuli functions that are used to generate stimuli that are particular to a specific environment/platform (e.g., a generic event generated according to its specific iOS form or ANDROID form).
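
One possible way to picture this specialization is a simple lookup from generic call names to platform specific forms, as sketched below; the platform call strings shown are invented placeholders, not real iOS or ANDROID APIs:

```java
import java.util.Map;

// Sketch of specializing a generically written monitor with platform-specific
// call syntax supplied by a knowledge base; the concrete call names are invented.
public class PlatformSpecificMapping {
    public static void main(String[] args) {
        Map<String, Map<String, String>> knowledgeBase = Map.of(
            "ANDROID", Map.of("save_file", "android_save_file_syntax",
                              "read_file", "android_read_file_syntax"),
            "iOS",     Map.of("save_file", "ios_save_file_syntax",
                              "read_file", "ios_read_file_syntax")
        );

        String platform = "iOS";
        // The generic monitor asks the knowledge base what "save_file" looks like here.
        String concreteCall = knowledgeBase.get(platform).get("save_file");
        System.out.println("watch for: " + concreteCall + " on " + platform);
    }
}
```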


In the case of the machine learning function 105 and rules generated therefrom 106, as is known in the art, multiple (e.g., millions of) software instances and/or environments, some of which may be similar to the application 108 being observed and many others of which may be nothing like the application 108 being observed, have been previously “studied” (e.g., over the course of years) by a machine learning system 105 that has deduced from its observations of these software instances/environments that certain behaviors can be characterized as improper and reduced to a set of rules which are then provided in rule set 106. For example, a machine learning system 105 could be trained on email messages to learn to distinguish between malware code and non-malware code. After this learning, it can then establish a set of rules that classify code as either malware or non-malware.


In general, a machine learning system 105 will typically be given a task (e.g., identify malware) and gain experience attempting to satisfy that task with commensurate feedback as to its successes and failures. Over time, with automated modification to the manner in which it attempts to accomplish the task, the machine learning system 105 may recognize improvement in its ability to accomplish the task. In this sense, the machine can be said to have learned. Eventually, e.g., if its success in accomplishing the task crosses some threshold of success, the machine learning system 105 may identify rules for rule set 106 that essentially “educate” the framework of FIG. 1 with whatever the machine learning system 105 has so far learned.


Because of the automated nature of machine learning, the machine learning system 105 can have a massive history of experience in terms of the number of software instances and environments it has observed and the amount of time over which it has been able to observe them. Here, the machine learned rules 106 provide details as to the specific behaviors of various improperly behaving forms of code that are used by the behavior and logic engine 103_1 to determine whether the application's behavior corresponds to such mis-behavior. Additionally, the machine learned rules 106 may provide details as to specific low level code structures of improperly behaving code that are used by the explorer engine 103_2 to identify “regions of interest” within the application.



FIG. 8 provides more details concerning the user experience of the system described above. As observed in FIG. 8, a user interface 850 permits a user to provide one or more custom rules 107, 807 for an application to be analyzed. That is, the rules can be custom crafted by the user for the particular application to be analyzed. Here, a user is any entity that seeks to analyze an application. For example, the user may be the IS department of a large corporation that seeks to ensure that the mobile applications that are installed on their corporate smartphones do not contain improperly behaving code. A user interface is essentially the set of functions that are presented to a user that enables a user to use the system of FIG. 1.


The custom rules 107, 807 are entered through the user interface 850 and incorporated into the set of rules that are referred to by the behavior and logic engine 103_1, 803_1 and explorer engine 103_2, 803_2 of the central intelligence function 103, 803 discussed at length in the preceding discussion(s).


In a typical scenario, the user rules will identify sensitive items of data that are operated on or otherwise processed by the application being analyzed. For example, if the mobile application is designed to operate on information from a corporate database, the custom rules 807 will identify sensitive items of information from the database (e.g., confidential and/or highly confidential information). In response to these rules, possibly in combination with other rules or input by the behavior and logic engine 803_1, the explorer engine 803_2 will identify as a “region of interest” any application code that operates on this information and cause execution of the application to be brought to any such region of interest.
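A minimal sketch of what such user-provided rules might look like, and how the explorer could use them to flag regions of interest, is shown below. The field names (sensitive_items, classification) and the set-intersection test are illustrative assumptions, not the described rule format.

```python
# Hypothetical sketch of custom user rules naming sensitive data items;
# any code unit that touches a named item would be marked as a region
# of interest. Field names are assumptions for illustration.

CUSTOM_RULES = {
    "sensitive_items": ["customer_ssn", "salary_table", "board_minutes"],
    "classification": {
        "customer_ssn": "highly_confidential",
        "salary_table": "confidential",
        "board_minutes": "highly_confidential",
    },
}

def is_region_of_interest(code_unit_reads: set) -> bool:
    """A code unit is of interest if it reads any user-identified sensitive item."""
    return bool(code_unit_reads & set(CUSTOM_RULES["sensitive_items"]))

print(is_region_of_interest({"salary_table", "ui_theme"}))   # True
```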


The behavior and logic engine 803_1 will understand acceptable versus unacceptable uses of this information by the application and monitor the application's use of the information accordingly. For example, the behavior and logic engine 803_1 may cause the application or its underlying platform in the runtime environment to perform data tracking on the information. Once data tracking is enabled for one or more of the sensitive data items and the application has moved its execution to regions of interest that use the information (through the influence of the explorer engine 803_2), the behavior and logic engine 803_1 will track the locations where the data is actually stored and/or sent. It will compare these locations against the acceptable register, system memory and non-volatile memory storage locations where the sensitive information can be stored, as well as the acceptable network destinations (e.g., network address locations) where the information can be sent. These acceptable storage and/or network address locations may be defined partially or entirely by the user through the user interface 850 (likewise, unacceptable storage locations and/or network destinations may also be identified).
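A minimal sketch of the comparison just described follows. The allow-lists and the location/destination string formats are illustrative assumptions rather than the described engine's actual representation.

```python
# Hypothetical sketch: compare where a tracked sensitive data item actually
# went against user-defined acceptable storage locations and network
# destinations. The allow-lists below are illustrative assumptions.

ACCEPTABLE_STORAGE_PREFIXES = ("/data/data/com.corp.app/", "keystore:")
ACCEPTABLE_NETWORK_DESTINATIONS = {"10.0.5.20:443", "crm.corp.example:443"}

def check_data_flow(stored_at: list, sent_to: list) -> list:
    """Return a list of violations observed for one tracked data item."""
    violations = []
    for loc in stored_at:
        if not loc.startswith(ACCEPTABLE_STORAGE_PREFIXES):
            violations.append(f"improper storage location: {loc}")
    for dest in sent_to:
        if dest not in ACCEPTABLE_NETWORK_DESTINATIONS:
            violations.append(f"improper network destination: {dest}")
    return violations

print(check_data_flow(["/sdcard/tmp/ssn.txt"], ["198.51.100.7:80"]))
```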


Alternatively or in combination, because data tracking may involve low level insight into the application, the explorer engine may likewise be configured to detect improper low level movements of the data via detected improper state transitions within the application. Definitions of such improper movements/transitions may additionally be provided to the explorer engine by the user through the custom user rules 807.


In another typical scenario, the user identifies improper behaviors (e.g., an attempt to engage in a communication session with a particular location, machine or database within a protected corporate intranet, or an attempt to access information within a protected or private region of system memory and/or register space of the application's run time environment). Again, the explorer engine 803_2 can attempt to identify regions of code that will perform the user identified improper action and bring the application's execution to such code. Either or both of the explorer engine 803_2 and behavior and logic engine 803_1 cause the application to be monitored appropriately. The explorer engine 803_2 causes the application's execution state to be brought to the region of interest, and the behavior and logic engine 803_1 receives the monitoring data, implements further points of analysis and ultimately reaches a conclusion as to whether the region of interest is malicious. If the improper behaviors are defined at the application state transition level, the explorer engine can detect such improper behavior as well and report it to the behavior and logic engine.


As observed in FIG. 8, an application 808 and/or its operating environment can also be retro-fitted with various plug-ins 810, 811. Here, plug-in 810 corresponds to one or more publicly available plug-ins, and plug-in 811 corresponds to one or more mobile device management (MDM) plug-ins. A plug-in is essentially program code that adds a set of functions to the main application and/or its underlying operating platform (such as its virtual machine or operating system) 808. For example, an application 808 or operating system may receive a plug-in to process a new type of file format or to perform a new function. In the latter case, a new GUI feature may appear after installation of the plug-in.


Publicly available plug-ins are often downloaded from the Internet. They may be procured for a fee or be free. MDM plug-ins may be publicly available or may be private code. They are typically used to manage the software environment of a smartphone or other mobile device. For example, with an MDM plug-in an IS department may be able to remotely configure, monitor, install/un-install and/or enable or disable various functions and/or software applications on any of its issued smartphones.


Here, through the user interface 850, a user is able to plug any such plug-in into the application before it is submitted to the static instrumentation engine 801 for translation and representation generation. The representation generation function then generates a representation of the application together with any of the plug-ins that the user has specified should be plugged into the application.


Applications may also be analyzed on their respective devices. In the case of a typical smartphone, which does not contain a large scale virtual machine monitor layer, the runtime environment discussed above with respect to FIG. 2 (showing multiple concurrent application instances executing on a virtual machine monitor layer) may not be applicable. Instead, the runtime environment will be as observed in FIG. 9. FIG. 9 is similar to FIG. 5, except that the lower (second) virtual machine layer is missing.


Here, feature 930 corresponds to an actual mobile device and run time environment 902 corresponds to the run time environment of the mobile device 930. The aforementioned possibilities for the locations and functions of various monitor functions 913 and stimuli functions 914 are as discussed in the applicable preceding sections above. In some implementations, a user may not have the ability to change, modify, re-install or replace the virtual machine layer 910 or operating system 909 in the device 930, in which case, all monitors and stimuli functions may be located within the application 908 by way of instrumentation.


In a typical usage case, a device 930 with an application 908 is communicatively coupled to the overall framework depicted in FIG. 1 through an interface 940 to the framework. The coupling may take various forms such as wireless and/or hardwired. The wireless communication may be over any wireless network (e.g., WWAN or WLAN) or wireless point-to-point link (e.g., Bluetooth).


In the case of typical network connectivity, applications may be screened for safety on mobile devices that are "in the field". For instance, as just one example, an application may be installed for actual use on a mobile device that is active in the field, but the application has been fully instrumented with all monitoring and stimuli functions. Should a need arise to confirm that the application is still safe while it is in the field (e.g., because of a suspicious event or as a matter of periodic check-ups), the rest of the framework can communicate with these instrumented functions as well as any monitoring and/or stimuli functions that were embedded into the software layers beneath the application (e.g., virtual machine, operating system) before the device was released into the field for actual use. With any standard communication session between the device and the framework (e.g., over the Internet) the application can be fully screened. That is, monitors can send their respective monitoring information over the network to the framework and the framework can send commands to the stimuli functions and monitors over the network. Thus the application itself can be screened while it is installed and in actual use in the field.
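A minimal sketch of the kind of message exchange such in-field screening implies appears below. The message fields and the use of JSON serialization are assumptions made for illustration; the document does not specify a wire format.

```python
# Hypothetical sketch of the exchange used to screen an application already
# installed in the field: the framework sends commands to embedded
# stimuli/monitor functions and receives monitoring reports back.

import json

def make_command(monitor_id: str, action: str, params: dict) -> bytes:
    """Framework-side: build a command for an on-device monitor or stimulus."""
    return json.dumps({"target": monitor_id, "action": action,
                       "params": params}).encode()

def make_report(monitor_id: str, events: list) -> bytes:
    """Device-side: package observed events for the framework."""
    return json.dumps({"source": monitor_id, "events": events}).encode()

cmd = make_command("os_call_monitor", "enable", {"calls": ["save_file"]})
rpt = make_report("os_call_monitor", [{"call": "save_file", "path": "/sdcard/x"}])
print(cmd, rpt, sep="\n")
```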


Hardwired communicative coupling may be achieved through the mobile device's standard hardware I/O interface (e.g., a USB interface). Here, commands to the monitoring functions 913 and stimuli functions 914 are submitted by the framework to the mobile device 930 through the interface 940. Likewise, information from the monitoring functions 913 is reported to the framework through the interface.


In the case of instrumentation of the application 908, an instrumented application is created according to the processes discussed above in preceding sections and installed on the mobile device 930. If an instrumented application 908 that is installed on the device 930 needs to be instrumented with additional monitoring and/or stimuli functions, in an embodiment, the state of the application 908 is externally saved (e.g., through the interface 940 into storage associated with the framework), and a new instance of the application having the new instrumentation set is created and installed on the device 930. The application state is then reloaded onto the mobile device through the interface 940 and analysis of the application continues. Alternatively, the application state could conceivably be stored on the mobile device 930 rather than being externally stored. Further still, in an embodiment, because of possible difficulties associated with saving the state information of an application that is installed on a mobile device, as a default, an application to be analyzed may be fully instrumented with a complete suite of monitors and stimuli functions before its initial installation on the mobile device. In this case, no new instance of the application 908 with additional instrumentation would ever need to be created.
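The re-instrumentation workflow just described can be sketched as follows. The Device class and its methods stand in for operations performed through the interface 940; none of these names correspond to a real device API, and the instrument helper is purely illustrative.

```python
# Hypothetical sketch of the re-instrumentation workflow: save the
# application state off-device, install a freshly instrumented instance,
# then restore the state.

class Device:
    def __init__(self):
        self.app = None
        self.state = {}

    def export_app_state(self):
        return dict(self.state)        # saved externally by the framework

    def install(self, app):
        self.app = app                 # replaces any prior instance

    def import_app_state(self, state):
        self.state = dict(state)

def instrument(app: dict, extra_monitors: list) -> dict:
    """Return a new application instance carrying additional monitors."""
    return {**app, "monitors": app["monitors"] + extra_monitors}

device = Device()
device.install({"name": "corp_app", "monitors": ["os_calls"]})
device.state = {"logged_in_user": "alice"}
saved = device.export_app_state()
device.install(instrument(device.app, ["data_tracking"]))
device.import_app_state(saved)
print(device.app, device.state)
```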


Apart from analyzing an application, the framework discussed above may also be used to modify application software so as to specifically prevent it from engaging in improper behavior. Here, the instrumentation unit 414 discussed above serves to insert additional code into an application so that it is specifically prevented from performing unwanted actions. As just one example, an application may be retrofitted with software that prevents certain, specific sensitive information from being transmitted from the smartphone on which the application will be installed.



FIG. 10 shows an embodiment of a method flow for instrumenting an application so as to prevent it from performing unwanted actions. As observed in FIG. 10, one or more unwanted actions are identified and presented to the explorer component 1001.


The application is then translated by the application translation unit 414 to, e.g., create a higher level object code instance of the application 1002. A representation of the application, such as a control flow graph or other structure that describes the application's states and state transitions is created by the application representation generation unit 415 from the abstracted application instance and submitted to the explorer component 1003.
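A minimal sketch of the kind of state/state-transition representation handed to the explorer is shown below. Representing it as a simple directed graph with string-labeled states is an assumption for illustration; the actual representation (e.g., a control flow graph) may differ.

```python
# Hypothetical sketch of an application representation: states and state
# transitions captured as a directed graph, with a reachability query the
# explorer could use when planning paths to a region of interest.

from collections import defaultdict

class AppRepresentation:
    def __init__(self):
        self.transitions = defaultdict(set)   # state -> set of next states

    def add_transition(self, src: str, dst: str):
        self.transitions[src].add(dst)

    def reachable_from(self, start: str) -> set:
        """All states reachable from `start`."""
        seen, stack = set(), [start]
        while stack:
            s = stack.pop()
            if s not in seen:
                seen.add(s)
                stack.extend(self.transitions[s])
        return seen

rep = AppRepresentation()
rep.add_transition("launch", "login_screen")
rep.add_transition("login_screen", "export_contacts")   # candidate exit point
print(rep.reachable_from("launch"))
```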


The explorer component studies the specified unwanted action(s) to be prevented and the code's representation, and defines changes to be made to the application's code to remove from the application any ability to perform the unwanted action(s) 1004. For example, if the unwanted action is the sending of certain sensitive information outside the application, the explorer may define all possible "exit points" of information from the application. The explorer may further determine that a data monitoring function is to be embedded in the application that is configured to track the information. The explorer may further determine that additional code needs to be added to the application that will prevent the execution of any exit point if it uses information from a tainted source (e.g., a tainted register location, system memory location and/or non-volatile storage location). Alternatively or in combination, the explorer may simply remove certain blocks of code from the application in order to remove the unwanted function from the application.
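A minimal sketch of the guard that could be embedded at each identified exit point follows. The taint-source labels, the BlockedExit exception and the wrapper shape are illustrative assumptions about how such inserted code might look, not the instrumentation unit's actual output.

```python
# Hypothetical sketch of an instrumented guard placed ahead of an "exit
# point": the exit is suppressed if the outgoing payload derives from a
# tainted source. The taint set below is an illustrative assumption.

TAINTED_SOURCES = {"reg:r4", "mem:0x7f00c010", "file:/data/secrets.db"}

class BlockedExit(Exception):
    pass

def guarded_exit_point(payload_sources: set, send):
    """Wrapper inserted before a send/write that leaves the application."""
    if payload_sources & TAINTED_SOURCES:
        raise BlockedExit("payload derives from a tainted source; exit suppressed")
    return send()

try:
    guarded_exit_point({"file:/data/secrets.db"}, lambda: print("sent"))
except BlockedExit as e:
    print(e)
```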


The explorer's determinations are then communicated to the instrumentation unit which instruments the application with code designed to effect the functions mandated by the explorer 1005. The application is then retranslated to its original code level by the re-translator 417 and installed on a mobile device 1006.


The types of unwanted behaviors that can be specified and prevented through the instrumentation process described above are too numerous to detail in full here. However, some basic applications of the above described sequence are discussed immediately below.


In a first embodiment, certain device functions are disabled. For example, the audio function of a mobile device (e.g., the ability of an application to "turn on" the microphone of the device so that it can internally process nearby audio information, such as a conversation) may be disabled. According to one approach, the explorer determines any states within the application that could cause a command to be sent to the hardware and/or OS to turn on the device's audio function and determines that such states should be modified to remove or otherwise squelch this ability.


In a further embodiment, the disablement of the function is made conditional. For example, the specific unwanted behavior may be that the audio device should be disabled whenever the device is within range of a certain wireless network, out of range of a certain wireless network, whenever the device is within one or more specific GPS location(s) or outside one or more specific GPS location(s). Here, the instrumentation code that disables the audio is written to only take effect if the stated condition is detected. To support this ability, the explorer identifies the parts of the application code that are sensitive to the conditions needed to determine whether to enable/disable the function. Apart from an audio device, a network interface, camera or video device may similarly be disabled as discussed above.
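A minimal sketch of such a conditional guard, here keyed to GPS location, appears below. The restricted-zone coordinates, radius and distance approximation are illustrative assumptions.

```python
# Hypothetical sketch of an instrumented guard inserted ahead of any state
# that would turn on the microphone: the request is suppressed only when
# the stated condition (device inside a restricted zone) is detected.

import math

RESTRICTED_ZONES = [(37.4220, -122.0841, 0.5)]   # (lat, lon, radius_km)

def in_restricted_zone(lat: float, lon: float) -> bool:
    for zlat, zlon, radius_km in RESTRICTED_ZONES:
        # Rough flat-earth distance; adequate for a small radius.
        dkm = math.hypot(lat - zlat,
                         (lon - zlon) * math.cos(math.radians(zlat))) * 111.0
        if dkm <= radius_km:
            return True
    return False

def guarded_enable_microphone(lat: float, lon: float, enable):
    """Inserted guard: forward the enable request only outside restricted zones."""
    if in_restricted_zone(lat, lon):
        return False    # squelch the request
    enable()
    return True

print(guarded_enable_microphone(37.4221, -122.0840, lambda: print("mic on")))
```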


In a second embodiment, an application's ability to engage in communication with an external system (e.g., a packet exchange with another computer over a network) is tightly controlled. Here, permissible and/or unwanted actions may be specified such that external communication is permitted only through specific networks, is not permitted over specific networks or types of networks (e.g., public networks), is permitted only with specific systems (e.g., servers or other computers), or is not permitted with specific systems or types of systems.


In a third embodiment, an application's ability to access data, either external to the mobile device or internal to the mobile device, is tightly controlled. For example, network communications/sessions with specific external computing systems may be prevented, and/or access to certain files or system memory regions within the mobile device may be prevented.
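A minimal sketch of an instrumented access-control check spanning the second and third embodiments is shown below. The policy entries and field names are illustrative assumptions.

```python
# Hypothetical sketch: external communication and data access are allowed
# only against explicit policy entries inserted by instrumentation.

POLICY = {
    "allowed_networks": {"corp-wifi"},
    "allowed_hosts": {"intranet.corp.example"},
    "blocked_paths": ("/data/misc/keystore",),
}

def may_connect(network: str, host: str) -> bool:
    """Permit a session only on an allowed network to an allowed host."""
    return network in POLICY["allowed_networks"] and host in POLICY["allowed_hosts"]

def may_read(path: str) -> bool:
    """Deny reads from explicitly blocked file/memory regions."""
    return not path.startswith(POLICY["blocked_paths"])

print(may_connect("public-hotspot", "intranet.corp.example"))  # False
print(may_read("/data/misc/keystore/key1"))                    # False
```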



FIGS. 11a and 11b show various uses of the framework 100 of FIG. 1. As observed in FIG. 11a, most of the framework 1100a is implemented at a user's location 1160, such as by the IS department of a large corporation. Here, for instance, the large corporation issues mobile devices to its employees and desires that the applications that run on these devices be safe. The IS department of the corporation therefore maintains the framework at its own location so it can screen the corporation's application software on its own premises 1180a. Notably, however, the machine learned rules 1106 and at least some of the hard coded rules 1128 are provided by a cloud service 1160a that is separated from the framework by a network 1170 (e.g., the Internet). Here, cloud service 1160a may be provided, for example, by a security software company that initially provided the framework to the corporation. Over time, the rules 1106, 1128 are updated as the service provider generates more rules. Note that the machine learning 1105 part of the framework resides in the cloud service 1160a in this instance.



FIG. 11b shows another use case where the entire framework 1100b is implemented as a cloud service. Here, the user or customer submits an application and any user rules 1107 through interface 1150 and over network 1170 to the cloud service 1160b. The cloud service 1160b then performs safety screening on the application.


Other usage models of the framework are also possible where various parts of the framework (other than just the machine learning and hard coded rules portions as in FIG. 11a or the entire framework as in FIG. 11b) are provided as a cloud service 1160.


Other usage models may direct applications for screening to the framework (however it is implemented) as part of their normal download and installation process. For example, a user may choose to download an application from the Internet; however, before the application is allowed to be downloaded and installed on the user's device, it is instead routed to the framework, which analyzes it. Here, the application is only permitted to be installed on the device if it is deemed safe by the framework.


Although the above discussion has been directed to the security analysis of application software on mobile devices, it is pertinent to point out that the above described framework could also be applied to application software for larger systems such as personal computers (e.g., laptop and desktop systems) and server systems.


The various components of the framework described above can be implemented on any number of computing systems. At one extreme, all of the components of the framework 100 could be implemented on a single computing system (e.g., a large server system). Alternatively, each of the components of the framework could be implemented on its own respective computer system apart from the other framework components and their respective computer systems. A single framework component could be implemented with multiple computer systems, and a single computer system could contain some but not all of the components of the framework. Different combinations of these possibilities may be used to create a single framework. To the extent different computing systems are used to implement the framework, they may be communicatively coupled by one or more networks.


Processes taught by the discussion above may be performed with program code such as machine-executable instructions which cause a machine (such as a “virtual machine”, a general-purpose CPU processor disposed on a semiconductor chip or special-purpose processor disposed on a semiconductor chip) to perform certain functions. Alternatively, these functions may be performed by specific hardware components that contain hardwired logic for performing the functions, or by any combination of programmed computer components and custom hardware components.


A storage medium may be used to store program code. A storage medium that stores program code may be embodied as, but is not limited to, one or more memories (e.g., one or more flash memories, random access memories (static, dynamic or other)), optical disks, CD-ROMs, DVD ROMs, EPROMs, EEPROMs, magnetic or optical cards or other type of machine-readable media suitable for storing electronic instructions. Program code may also be downloaded from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a propagation medium (e.g., via a communication link (e.g., a network connection)).



FIG. 12 is a block diagram of a computing system 1200 that can execute program code stored by a storage medium. It is important to recognize that the computing system block diagram of FIG. 12 is just one of various computing system architectures. Different types of computing systems include mobile and/or handheld computing devices (e.g., smartphones, cell-phones, personal digital assistants), laptop personal computers, desktop personal computers, servers, etc.


The applicable storage medium may include one or more fixed components (such as non-volatile storage component 1202 (e.g., a hard disk drive, FLASH drive or non-volatile memory) or system memory 1205) and/or various movable components such as a CD-ROM 1203, a compact disc, a magnetic tape, etc., operable with a removable media drive 1204. In order to execute the program code, typically instructions of the program code are loaded into the Random Access Memory (RAM) system memory 1205; and, the processing core 1206 then executes the instructions. The processing core 1206 may include one or more CPU processors or CPU processing cores.


It is believed that processes taught by the discussion above can be practiced within various software environments such as, for example, object-oriented and non-object-oriented programming environments, Java based environments (such as a Java 2 Enterprise Edition (J2EE) environment or environments defined by other releases of the Java standard), or other environments.


In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims
  • 1. A system for automatically analyzing an application instance for improperly behaving code, the system comprising: one or more hardware processors; and a memory coupled to the one or more hardware processors, the memory including a central intelligence engine that, when executed by the one or more hardware processors, (a) identifies a region of interest of the application instance by identifying a portion of code of the application instance that, when executed by the one or more hardware processors, triggers one or more processes of the application instance and causes the one or more processes to appear to be invoked by a user, (b) determines specific stimuli that causes transitions within the application instance to reach the region of interest, and (c) applies the stimuli to the application instance and performs subsequent monitoring of one or more behaviors resulting from execution of the portion of the code of the application instance at the region of interest.
  • 2. The system of claim 1, wherein the central intelligence engine, when executed by the one or more hardware processors, identifies the region of interest by at least analyzing the portion of code of the application instance to determine if the portion of the code would artificially generate Application Programming Interface (API) related actions to appear as if user invoked.
  • 3. The system of claim 1, wherein the central intelligence engine, when executed by the one or more hardware processors, further identifies the region of interest by at least analyzing the portion of code of the application instance to determine if the portion of the code will attempt to cause data to be read out of a predetermined storage location.
  • 4. The system of claim 1, wherein the central intelligence engine, when executed by the one or more hardware processors, determines the specific stimuli by at least determining a data value that, when processed by the application instance, drives execution of the application instance within a run-time environment including one or more virtual machines to the portion of code of the application instance representing the region of interest.
  • 5. The system of claim 1, wherein the central intelligence engine, when executed by the one or more hardware processors, identifies whether the portion of code of the application instance would attempt to cause certain application actions by accessing various graphic user interface (GUI) triggers.
  • 6. The system of claim 1, wherein the central intelligence engine, when executed by the one or more hardware processors, further enables or disables monitoring logic that is configured to monitor the one or more behaviors during processing of the portion of the code of the application instance.
  • 7. The system of claim 6, wherein the monitoring logic is deployed in at least one of (i) the one or more virtual machines and (ii) an operating system operating with the application instance.
  • 8. The system of claim 6, wherein the monitoring logic includes at least one monitor in any of: a virtual machine of the one or more virtual machines; and an operating system instance that operates in combination with the application instance.
  • 9. The system of claim 1, wherein the memory further comprises the one or more virtual machines that include monitoring logic that, when enabled by the central intelligence engine and during execution of the portion of the code of the application software at the region of interest, monitors one or more behaviors occurring during the execution of the portion of the code to determine if the portion of the code is improperly behaving code.
  • 10. The system of claim 8, wherein the enabling of the monitoring logic includes enabling a first monitor operating in cooperation with the virtual machine or a second monitor operating in cooperation with the operating system instance.
  • 11. The system of claim 8, wherein the enabling of the monitoring logic includes enabling a system call monitoring function implemented within the operating system instance.
  • 12. The system of claim 11, wherein the region of interest is determined from one or more machine learned rules produced from data by a machine learning system and the data provides details as to specific low level code structures of the improperly behaving code in order to identify the region of interest of the application.
  • 13. A method for automatically analyzing an application instance by one or more hardware processors executing software that perform operations comprising: identifying a region of interest of the application instance based on an analysis of code of the application instance in response to execution of the software by the one or more hardware processors, the region of interest corresponds to one or more parts of the code of the application instance that are considered to potentially include improperly behaving code that triggers one or more processes of the application instance and causes the one or more processes to appear to be invoked by a user; determining, during execution of the software by the one or more hardware processors, specific stimuli and applying the stimuli to the application instance so that the application instance commences processing of the one or more parts of code of the application instance that is associated with the region of interest; monitoring, during execution of the software by the one or more hardware processors, one or more behaviors of the application instance during processing of the one or more parts of code of the application instance that is associated with the region of interest within one or more virtual machines in response to the applied stimuli; and determining, during execution of the software by the one or more hardware processors, whether the one or more behaviors identify that the region of interest corresponds to improperly behaving code.
  • 14. The method of claim 13, wherein the identifying of the region of interest includes an analysis of a first part of the one or more parts of code of the application instance to determine if the first part of the code generates Application Programming Interface (API) related actions to appear as if user invoked.
  • 15. The method of claim 13, wherein the identifying of the region of interest further includes an analysis of a first part of the one or more parts of code of the application instance to determine if the first part of the code will attempt to cause data to be read out of a predetermined storage location.
  • 16. The method of claim 13, wherein the determining the specific stimuli comprises determining a data value that, when processed by the application instance, drives processing of the application instance by the one or more virtual machines to a first part of the one or more parts of code of the application instance representing the region of interest.
  • 17. The method of claim 13, wherein the determining the specific stimuli comprises determining an event that causes an operating system used in the processing of the application instance to report the event to the application instance or provide state information to hardware observable by the application instance.
  • 18. The method of claim 13, wherein prior to monitoring the one or more behaviors, the method further comprises enabling one or more monitors associated with the application instance; and generating monitoring information from the one or more monitors.
  • 19. The method of claim 18 wherein the enabling of said one or more monitors includes enabling at least one monitor embedded within or operating in association with an operating system instance.
  • 20. The method of claim 19, wherein the enabling of the one or more monitors includes enabling a system call monitoring function embedded within or operating in association with the operating system instance.
  • 21. A method for automatically analyzing an application instance for improperly behaving code, the method comprising: identifying a region of interest of an application instance by identifying a portion of code of the application instance that, when executed, triggers one or more processes of the application instance and causes the one or more processes to appear to be invoked by a user; determining specific stimuli that causes transitions within the application instance to reach the region of interest; and applying the stimuli to the application instance and performing subsequent monitoring of one or more behaviors resulting from execution of the portion of the code of the application instance at the region of interest.
  • 22. The method of claim 21, wherein the region of interest corresponds to one or more parts of the code of the application instance that are considered to potentially include improperly behaving code.
  • 23. The method of claim 21, wherein the identifying of the region of interest includes an analysis of a first part of one or more parts of code of the application instance to determine if the first part of the code generates Application Programming Interface (API) related actions to appear as if user invoked.
  • 24. The method of claim 21, wherein the identifying of the region of interest further includes an analysis of a first part of one or more parts of code of the application instance to determine if the first part of the code will attempt to cause data to be read out of a predetermined storage location.
  • 25. The method of claim 21, wherein the determining the specific stimuli comprises determining a data value that, when processed by the application instance, drives processing of the application instance by one or more virtual machines to a first part of one or more parts of code of the application instance representing the region of interest.
  • 26. The method of claim 21, wherein the determining the specific stimuli comprises determining an event that causes an operating system used in the processing of the application instance to report the event to the application instance.
  • 27. The method of claim 21, wherein the determining the specific stimuli comprises determining an event that causes an operating system used in the processing of the application instance to provide state information to hardware observable by the application instance.
  • 28. The method of claim 21, further comprising: determining, in response to applying the stimuli, whether the one or more behaviors identify that the region of interest corresponds to improperly behaving code based on the monitoring of the one or more behaviors of the application instance resulting from the execution of the portion of the code of the application instance at the region of interest.
Parent Case Info

This is a Continuation application claiming the benefit of priority on U.S. patent application Ser. No. 14/930,385 filed Nov. 2, 2015 and U.S. patent application Ser. No. 13/775,168 filed Feb. 23, 2013, which is incorporated by reference in its entirety.

US Referenced Citations (567)
Number Name Date Kind
4292580 Ott et al. Sep 1981 A
5175732 Hendel et al. Dec 1992 A
5440723 Arnold et al. Aug 1995 A
5490249 Miller Feb 1996 A
5657473 Killean et al. Aug 1997 A
5842002 Schnurer et al. Nov 1998 A
5978917 Chi Nov 1999 A
5983348 Ji Nov 1999 A
6088803 Tso et al. Jul 2000 A
6094677 Capek et al. Jul 2000 A
6108799 Boulay et al. Aug 2000 A
6118382 Hibbs et al. Sep 2000 A
6269330 Cidon et al. Jul 2001 B1
6272641 Ji Aug 2001 B1
6279113 Vaidya Aug 2001 B1
6298445 Shostack et al. Oct 2001 B1
6357008 Nachenberg Mar 2002 B1
6417774 Hibbs et al. Jul 2002 B1
6424627 Sørhaug et al. Jul 2002 B1
6442696 Wray et al. Aug 2002 B1
6484315 Ziese Nov 2002 B1
6487666 Shanklin et al. Nov 2002 B1
6493756 O'Brien et al. Dec 2002 B1
6550012 Villa et al. Apr 2003 B1
6700497 Hibbs et al. Mar 2004 B2
6775657 Baker Aug 2004 B1
6831893 Ben Nun et al. Dec 2004 B1
6832367 Choi et al. Dec 2004 B1
6895550 Kanchirayappa et al. May 2005 B2
6898632 Gordy et al. May 2005 B2
6907396 Muttik et al. Jun 2005 B1
6941348 Petry et al. Sep 2005 B2
6971097 Wallman Nov 2005 B1
6981279 Arnold et al. Dec 2005 B1
6995665 Appelt et al. Feb 2006 B2
7007107 Ivchenko et al. Feb 2006 B1
7028179 Anderson et al. Apr 2006 B2
7043757 Hoefelmeyer et al. May 2006 B2
7069316 Gryaznov Jun 2006 B1
7080407 Zhao Jul 2006 B1
7080408 Pak et al. Jul 2006 B1
7093002 Wolff et al. Aug 2006 B2
7093239 van der Made Aug 2006 B1
7096498 Judge Aug 2006 B2
7100201 Izatt Aug 2006 B2
7107617 Hursey et al. Sep 2006 B2
7159149 Spiegel et al. Jan 2007 B2
7213260 Judge May 2007 B2
7231667 Jordan Jun 2007 B2
7240364 Branscomb et al. Jul 2007 B1
7240368 Roesch et al. Jul 2007 B1
7243371 Kasper et al. Jul 2007 B1
7249175 Donaldson Jul 2007 B1
7287278 Liang Oct 2007 B2
7308716 Danford et al. Dec 2007 B2
7328453 Merkle, Jr. et al. Feb 2008 B2
7346486 Ivancic et al. Mar 2008 B2
7356736 Natvig Apr 2008 B2
7386888 Liang et al. Jun 2008 B2
7392542 Bucher Jun 2008 B2
7418729 Szor Aug 2008 B2
7428300 Drew et al. Sep 2008 B1
7441272 Durham et al. Oct 2008 B2
7448084 Apap et al. Nov 2008 B1
7458098 Judge et al. Nov 2008 B2
7464404 Carpenter et al. Dec 2008 B2
7464407 Nakae et al. Dec 2008 B2
7467408 O'Toole, Jr. Dec 2008 B1
7478428 Thomlinson Jan 2009 B1
7480773 Reed Jan 2009 B1
7487543 Arnold et al. Feb 2009 B2
7496960 Chen et al. Feb 2009 B1
7496961 Zimmer et al. Feb 2009 B2
7519990 Xie Apr 2009 B1
7523493 Liang et al. Apr 2009 B2
7530104 Thrower et al. May 2009 B1
7540025 Tzadikario May 2009 B2
7565550 Liang et al. Jul 2009 B2
7568233 Szor et al. Jul 2009 B1
7584455 Ball Sep 2009 B2
7603715 Costa et al. Oct 2009 B2
7607171 Marsden et al. Oct 2009 B1
7639714 Stolfo et al. Dec 2009 B2
7644441 Schmid et al. Jan 2010 B2
7657419 van der Made Feb 2010 B2
7676841 Sobchuk et al. Mar 2010 B2
7698548 Shelest et al. Apr 2010 B2
7707633 Danford et al. Apr 2010 B2
7712136 Sprosts et al. May 2010 B2
7730011 Deninger et al. Jun 2010 B1
7739740 Nachenberg et al. Jun 2010 B1
7779463 Stolfo et al. Aug 2010 B2
7784097 Stolfo et al. Aug 2010 B1
7832008 Kraemer Nov 2010 B1
7836502 Zhao et al. Nov 2010 B1
7849506 Dansey et al. Dec 2010 B1
7854007 Sprosts et al. Dec 2010 B2
7869073 Oshima Jan 2011 B2
7877803 Enstone et al. Jan 2011 B2
7904959 Sidiroglou et al. Mar 2011 B2
7908660 Bahl Mar 2011 B2
7930738 Petersen Apr 2011 B1
7937761 Bennett May 2011 B1
7949849 Lowe et al. May 2011 B2
7996556 Raghavan et al. Aug 2011 B2
7996836 McCorkendale et al. Aug 2011 B1
7996904 Chiueh et al. Aug 2011 B1
7996905 Arnold et al. Aug 2011 B2
8006305 Aziz Aug 2011 B2
8010667 Zhang et al. Aug 2011 B2
8020206 Hubbard et al. Sep 2011 B2
8028338 Schneider et al. Sep 2011 B1
8042184 Batenin Oct 2011 B1
8045094 Teragawa Oct 2011 B2
8045458 Alperovitch et al. Oct 2011 B2
8069484 McMillan et al. Nov 2011 B2
8087086 Lai et al. Dec 2011 B1
8171553 Aziz et al. May 2012 B2
8176049 Deninger et al. May 2012 B2
8176480 Spertus May 2012 B1
8201246 Wu et al. Jun 2012 B1
8204984 Aziz et al. Jun 2012 B1
8214905 Doukhvalov et al. Jul 2012 B1
8220055 Kennedy Jul 2012 B1
8225288 Miller et al. Jul 2012 B2
8225373 Kraemer Jul 2012 B2
8233882 Rogel Jul 2012 B2
8234640 Fitzgerald et al. Jul 2012 B1
8234709 Viljoen et al. Jul 2012 B2
8239944 Nachenberg et al. Aug 2012 B1
8260914 Ranjan Sep 2012 B1
8266091 Gubin et al. Sep 2012 B1
8286251 Eker et al. Oct 2012 B2
8291499 Aziz et al. Oct 2012 B2
8307435 Mann et al. Nov 2012 B1
8307443 Wang et al. Nov 2012 B2
8312545 Tuvell et al. Nov 2012 B2
8321936 Green et al. Nov 2012 B1
8321941 Tuvell et al. Nov 2012 B2
8332571 Edwards, Sr. Dec 2012 B1
8365286 Poston Jan 2013 B2
8365297 Parshin et al. Jan 2013 B1
8370938 Daswani et al. Feb 2013 B1
8370939 Zaitsev et al. Feb 2013 B2
8375444 Aziz et al. Feb 2013 B2
8381299 Stolfo et al. Feb 2013 B2
8402529 Green et al. Mar 2013 B1
8464340 Ahn et al. Jun 2013 B2
8479174 Chiriac Jul 2013 B2
8479276 Vaystikh et al. Jul 2013 B1
8479291 Bodke Jul 2013 B1
8510827 Leake et al. Aug 2013 B1
8510828 Guo et al. Aug 2013 B1
8510842 Amit et al. Aug 2013 B2
8516478 Edwards et al. Aug 2013 B1
8516590 Ranadive et al. Aug 2013 B1
8516593 Aziz Aug 2013 B2
8522348 Chen et al. Aug 2013 B2
8528086 Aziz Sep 2013 B1
8533824 Hutton et al. Sep 2013 B2
8539582 Aziz et al. Sep 2013 B1
8549638 Aziz Oct 2013 B2
8555391 Demir et al. Oct 2013 B1
8561177 Aziz et al. Oct 2013 B1
8566946 Aziz et al. Oct 2013 B1
8584094 Dadhia et al. Nov 2013 B2
8584234 Sobel et al. Nov 2013 B1
8584239 Aziz et al. Nov 2013 B2
8595834 Xie et al. Nov 2013 B2
8627476 Satish et al. Jan 2014 B1
8635696 Aziz Jan 2014 B1
8682054 Xue et al. Mar 2014 B2
8682812 Ranjan Mar 2014 B1
8689333 Aziz Apr 2014 B2
8695096 Zhang Apr 2014 B1
8713631 Pavlyushchik Apr 2014 B1
8713681 Silberman et al. Apr 2014 B2
8726392 McCorkendale et al. May 2014 B1
8739280 Chess et al. May 2014 B2
8776229 Aziz Jul 2014 B1
8782792 Bodke Jul 2014 B1
8789172 Stolfo et al. Jul 2014 B2
8789178 Kejriwal et al. Jul 2014 B2
8793787 Ismael et al. Jul 2014 B2
8805947 Kuzkin et al. Aug 2014 B1
8806647 Daswani et al. Aug 2014 B1
8832829 Manni et al. Sep 2014 B2
8839432 Patil Sep 2014 B1
8850570 Ramzan Sep 2014 B1
8850571 Staniford et al. Sep 2014 B2
8881234 Narasimhan et al. Nov 2014 B2
8881282 Aziz et al. Nov 2014 B1
8898788 Aziz et al. Nov 2014 B1
8935779 Manni et al. Jan 2015 B2
8984638 Aziz et al. Mar 2015 B1
8990939 Staniford et al. Mar 2015 B2
8990944 Singh et al. Mar 2015 B1
8997219 Staniford et al. Mar 2015 B2
9009822 Ismael et al. Apr 2015 B1
9009823 Ismael et al. Apr 2015 B1
9027135 Aziz May 2015 B1
9071638 Aziz et al. Jun 2015 B1
9104867 Thioux et al. Aug 2015 B1
9106694 Aziz et al. Aug 2015 B2
9118715 Staniford et al. Aug 2015 B2
9159035 Ismael et al. Oct 2015 B1
9171160 Vincent et al. Oct 2015 B2
9176843 Ismael et al. Nov 2015 B1
9189627 Islam Nov 2015 B1
9195829 Goradia et al. Nov 2015 B1
9197664 Aziz et al. Nov 2015 B1
9223972 Vincent et al. Dec 2015 B1
9225740 Ismael et al. Dec 2015 B1
9241010 Bennett et al. Jan 2016 B1
9251343 Vincent et al. Feb 2016 B1
9262635 Paithane et al. Feb 2016 B2
9282109 Aziz et al. Mar 2016 B1
9294501 Mesdaq et al. Mar 2016 B2
9300686 Pidathala et al. Mar 2016 B2
9306960 Aziz Apr 2016 B1
9306974 Aziz et al. Apr 2016 B1
9311479 Manni et al. Apr 2016 B1
9367681 Ismael et al. Jun 2016 B1
9424427 Chiriac et al. Aug 2016 B1
9594905 Ismael et al. Mar 2017 B1
9792196 Ismael et al. Oct 2017 B1
20010005889 Albrecht Jun 2001 A1
20010047326 Broadbent et al. Nov 2001 A1
20020018903 Kokubo et al. Feb 2002 A1
20020038430 Edwards et al. Mar 2002 A1
20020091819 Melchione et al. Jul 2002 A1
20020095607 Lin-Hendel Jul 2002 A1
20020116627 Tarbotton et al. Aug 2002 A1
20020144156 Copeland Oct 2002 A1
20020162015 Tang Oct 2002 A1
20020166063 Lachman et al. Nov 2002 A1
20020169952 DiSanto et al. Nov 2002 A1
20020184528 Shevenell et al. Dec 2002 A1
20020188887 Largman et al. Dec 2002 A1
20020194490 Halperin et al. Dec 2002 A1
20030074578 Ford et al. Apr 2003 A1
20030084318 Schertz May 2003 A1
20030101381 Mateev et al. May 2003 A1
20030115483 Liang Jun 2003 A1
20030188190 Aaron et al. Oct 2003 A1
20030191957 Hypponen et al. Oct 2003 A1
20030200460 Morota et al. Oct 2003 A1
20030212902 van der Made Nov 2003 A1
20030229801 Kouznetsov et al. Dec 2003 A1
20030237000 Denton et al. Dec 2003 A1
20040003323 Bennett et al. Jan 2004 A1
20040015712 Szor Jan 2004 A1
20040019832 Arnold et al. Jan 2004 A1
20040047356 Bauer Mar 2004 A1
20040083408 Spiegel et al. Apr 2004 A1
20040088581 Brawn May 2004 A1
20040093513 Cantrell et al. May 2004 A1
20040111531 Staniford et al. Jun 2004 A1
20040117478 Triulzi et al. Jun 2004 A1
20040117624 Brandt et al. Jun 2004 A1
20040128355 Chao et al. Jul 2004 A1
20040165588 Pandya Aug 2004 A1
20040236963 Danford et al. Nov 2004 A1
20040243349 Greifeneder et al. Dec 2004 A1
20040249911 Alkhatib et al. Dec 2004 A1
20040255161 Cavanaugh Dec 2004 A1
20040268147 Vviederin et al. Dec 2004 A1
20050005159 Oliphant Jan 2005 A1
20050021740 Bar et al. Jan 2005 A1
20050033960 Vialen et al. Feb 2005 A1
20050033989 Poletto et al. Feb 2005 A1
20050050148 Mohammadioun et al. Mar 2005 A1
20050086523 Zimmer et al. Apr 2005 A1
20050091513 Mitomo et al. Apr 2005 A1
20050091533 Omote et al. Apr 2005 A1
20050091652 Ross et al. Apr 2005 A1
20050108562 Khazan May 2005 A1
20050114663 Cornell et al. May 2005 A1
20050125195 Brendel Jun 2005 A1
20050149726 Joshi et al. Jul 2005 A1
20050157662 Bingham et al. Jul 2005 A1
20050183143 Anderholm et al. Aug 2005 A1
20050201297 Peikari Sep 2005 A1
20050210533 Copeland et al. Sep 2005 A1
20050238005 Chen et al. Oct 2005 A1
20050240781 Gassoway Oct 2005 A1
20050262562 Gassoway Nov 2005 A1
20050265331 Stolfo Dec 2005 A1
20050283839 Cowburn Dec 2005 A1
20060010495 Cohen et al. Jan 2006 A1
20060015416 Hoffman et al. Jan 2006 A1
20060015715 Anderson Jan 2006 A1
20060015747 Van de Ven Jan 2006 A1
20060021029 Brickell et al. Jan 2006 A1
20060021054 Costa et al. Jan 2006 A1
20060031476 Mathes et al. Feb 2006 A1
20060047665 Neil Mar 2006 A1
20060070130 Costea et al. Mar 2006 A1
20060075496 Carpenter et al. Apr 2006 A1
20060095968 Portolani et al. May 2006 A1
20060101516 Sudaharan et al. May 2006 A1
20060101517 Banzhof et al. May 2006 A1
20060117385 Mester et al. Jun 2006 A1
20060123477 Raghavan et al. Jun 2006 A1
20060143709 Brooks et al. Jun 2006 A1
20060150249 Gassen et al. Jul 2006 A1
20060161983 Cothrell et al. Jul 2006 A1
20060161987 Levy-Yurista Jul 2006 A1
20060161989 Reshef et al. Jul 2006 A1
20060164199 Gilde et al. Jul 2006 A1
20060173992 Weber et al. Aug 2006 A1
20060179147 Tran et al. Aug 2006 A1
20060184632 Marino et al. Aug 2006 A1
20060191010 Benjamin Aug 2006 A1
20060221956 Narayan et al. Oct 2006 A1
20060236393 Kramer et al. Oct 2006 A1
20060242709 Seinfeld et al. Oct 2006 A1
20060248519 Jaeger et al. Nov 2006 A1
20060248582 Panjwani et al. Nov 2006 A1
20060251104 Koga Nov 2006 A1
20060288417 Bookbinder et al. Dec 2006 A1
20070006288 Mayfield et al. Jan 2007 A1
20070006313 Porras et al. Jan 2007 A1
20070011174 Takaragi et al. Jan 2007 A1
20070016951 Piccard et al. Jan 2007 A1
20070033645 Jones Feb 2007 A1
20070038943 FitzGerald et al. Feb 2007 A1
20070064689 Shin et al. Mar 2007 A1
20070074169 Chess et al. Mar 2007 A1
20070094730 Bhikkaji et al. Apr 2007 A1
20070101435 Konanka et al. May 2007 A1
20070128855 Cho et al. Jun 2007 A1
20070142030 Sinha et al. Jun 2007 A1
20070143827 Nicodemus et al. Jun 2007 A1
20070156895 Vuong Jul 2007 A1
20070157180 Tillmann et al. Jul 2007 A1
20070157306 Elrod et al. Jul 2007 A1
20070168988 Eisner et al. Jul 2007 A1
20070171824 Ruello et al. Jul 2007 A1
20070174915 Gribble et al. Jul 2007 A1
20070192500 Lum Aug 2007 A1
20070192858 Lum Aug 2007 A1
20070198275 Malden et al. Aug 2007 A1
20070208822 Wang et al. Sep 2007 A1
20070220607 Sprosts et al. Sep 2007 A1
20070240215 Flores et al. Oct 2007 A1
20070240218 Tuvell et al. Oct 2007 A1
20070240219 Tuvell et al. Oct 2007 A1
20070240220 Tuvell et al. Oct 2007 A1
20070240222 Tuvell et al. Oct 2007 A1
20070250930 Aziz et al. Oct 2007 A1
20070256132 Oliphant Nov 2007 A2
20070271446 Nakamura Nov 2007 A1
20080005782 Aziz Jan 2008 A1
20080028463 Dagon et al. Jan 2008 A1
20080032556 Schreier Feb 2008 A1
20080040710 Chiriac Feb 2008 A1
20080046781 Childs et al. Feb 2008 A1
20080066179 Liu Mar 2008 A1
20080072326 Danford et al. Mar 2008 A1
20080077793 Tan et al. Mar 2008 A1
20080080518 Hoeflin et al. Apr 2008 A1
20080086720 Lekel Apr 2008 A1
20080098476 Syversen Apr 2008 A1
20080120722 Sima et al. May 2008 A1
20080134178 Fitzgerald et al. Jun 2008 A1
20080134334 Kim et al. Jun 2008 A1
20080141376 Clausen et al. Jun 2008 A1
20080184367 McMillan et al. Jul 2008 A1
20080184373 Traut et al. Jul 2008 A1
20080189787 Arnold et al. Aug 2008 A1
20080201778 Guo et al. Aug 2008 A1
20080209557 Herley et al. Aug 2008 A1
20080215742 Goldszmidt et al. Sep 2008 A1
20080222729 Chen et al. Sep 2008 A1
20080263665 Ma et al. Oct 2008 A1
20080295172 Bohacek Nov 2008 A1
20080301810 Lehane et al. Dec 2008 A1
20080307524 Singh et al. Dec 2008 A1
20080313738 Enderby Dec 2008 A1
20080320556 Lee et al. Dec 2008 A1
20080320594 Jiang Dec 2008 A1
20090003317 Kasralikar et al. Jan 2009 A1
20090007100 Field et al. Jan 2009 A1
20090013408 Schipka Jan 2009 A1
20090031423 Liu et al. Jan 2009 A1
20090036111 Danford et al. Feb 2009 A1
20090037835 Goldman Feb 2009 A1
20090044024 Oberheide et al. Feb 2009 A1
20090044274 Budko et al. Feb 2009 A1
20090064332 Porras et al. Mar 2009 A1
20090077666 Chen et al. Mar 2009 A1
20090083369 Marmor Mar 2009 A1
20090083855 Apap et al. Mar 2009 A1
20090089879 Wang et al. Apr 2009 A1
20090094697 Provos et al. Apr 2009 A1
20090113425 Ports et al. Apr 2009 A1
20090125976 Wassermann et al. May 2009 A1
20090126015 Monastyrsky et al. May 2009 A1
20090126016 Sobko et al. May 2009 A1
20090133125 Choi et al. May 2009 A1
20090144823 Lamastra et al. Jun 2009 A1
20090158430 Borders Jun 2009 A1
20090172815 Gu et al. Jul 2009 A1
20090187992 Poston Jul 2009 A1
20090193293 Stolfo et al. Jul 2009 A1
20090198651 Shiffer et al. Aug 2009 A1
20090198670 Shiffer et al. Aug 2009 A1
20090198689 Frazier et al. Aug 2009 A1
20090199274 Frazier et al. Aug 2009 A1
20090199296 Xie et al. Aug 2009 A1
20090228233 Anderson et al. Sep 2009 A1
20090241187 Troyansky Sep 2009 A1
20090241190 Todd et al. Sep 2009 A1
20090265692 Godefroid et al. Oct 2009 A1
20090271867 Zhang Oct 2009 A1
20090300415 Zhang et al. Dec 2009 A1
20090300761 Park et al. Dec 2009 A1
20090328185 Berg et al. Dec 2009 A1
20090328221 Blumfield et al. Dec 2009 A1
20100005146 Drako et al. Jan 2010 A1
20100011205 McKenna Jan 2010 A1
20100017546 Poo et al. Jan 2010 A1
20100030996 Butler, II Feb 2010 A1
20100031353 Thomas et al. Feb 2010 A1
20100037314 Perdisci et al. Feb 2010 A1
20100043073 Kuwamura Feb 2010 A1
20100054278 Stolfo et al. Mar 2010 A1
20100058474 Hicks Mar 2010 A1
20100064044 Nonoyama Mar 2010 A1
20100077481 Polyakov et al. Mar 2010 A1
20100083376 Pereira et al. Apr 2010 A1
20100115621 Staniford et al. May 2010 A1
20100132038 Zaitsev May 2010 A1
20100154056 Smith et al. Jun 2010 A1
20100180344 Malyshev et al. Jul 2010 A1
20100192223 Ismael et al. Jul 2010 A1
20100220863 Dupaquis et al. Sep 2010 A1
20100235831 Dittmer Sep 2010 A1
20100251000 Lyne et al. Sep 2010 A1
20100251104 Massand Sep 2010 A1
20100281102 Chinta et al. Nov 2010 A1
20100281541 Stolfo et al. Nov 2010 A1
20100281542 Stolfo et al. Nov 2010 A1
20100287260 Peterson et al. Nov 2010 A1
20100299754 Amit et al. Nov 2010 A1
20100306173 Frank Dec 2010 A1
20110004737 Greenebaum Jan 2011 A1
20110025504 Lyon et al. Feb 2011 A1
20110041179 St Hlberg Feb 2011 A1
20110047594 Mahaffey et al. Feb 2011 A1
20110047620 Mahaffey et al. Feb 2011 A1
20110055907 Narasimhan et al. Mar 2011 A1
20110078794 Manni et al. Mar 2011 A1
20110093951 Aziz Apr 2011 A1
20110099620 Stavrou et al. Apr 2011 A1
20110099633 Aziz Apr 2011 A1
20110099635 Silberman et al. Apr 2011 A1
20110113231 Kaminsky May 2011 A1
20110145918 Jung et al. Jun 2011 A1
20110145920 Mahaffey Jun 2011 A1
20110145934 Abramovici et al. Jun 2011 A1
20110167493 Song et al. Jul 2011 A1
20110167494 Bowen et al. Jul 2011 A1
20110173213 Frazier et al. Jul 2011 A1
20110173460 Ito et al. Jul 2011 A1
20110219449 St. Neitzel et al. Sep 2011 A1
20110219450 McDougal et al. Sep 2011 A1
20110225624 Sawhney et al. Sep 2011 A1
20110225655 Niemela et al. Sep 2011 A1
20110247072 Staniford et al. Oct 2011 A1
20110265182 Peinado et al. Oct 2011 A1
20110289582 Kejriwal et al. Nov 2011 A1
20110302587 Nishikawa et al. Dec 2011 A1
20110307954 Melnik et al. Dec 2011 A1
20110307955 Kaplan et al. Dec 2011 A1
20110307956 Yermakov et al. Dec 2011 A1
20110314546 Aziz et al. Dec 2011 A1
20120023593 Puder et al. Jan 2012 A1
20120054869 Yen et al. Mar 2012 A1
20120066698 Yanoo Mar 2012 A1
20120079596 Thomas et al. Mar 2012 A1
20120084859 Radinsky et al. Apr 2012 A1
20120110667 Zubrilin et al. May 2012 A1
20120117652 Manni et al. May 2012 A1
20120121154 Xue et al. May 2012 A1
20120124426 Maybee et al. May 2012 A1
20120174186 Aziz et al. Jul 2012 A1
20120174196 Bhogavilli et al. Jul 2012 A1
20120174218 McCoy et al. Jul 2012 A1
20120198279 Schroeder Aug 2012 A1
20120210423 Friedrichs et al. Aug 2012 A1
20120222121 Staniford et al. Aug 2012 A1
20120255015 Sahita et al. Oct 2012 A1
20120255017 Sallam Oct 2012 A1
20120260342 Dube et al. Oct 2012 A1
20120266244 Green et al. Oct 2012 A1
20120278886 Luna Nov 2012 A1
20120297489 Dequevy Nov 2012 A1
20120330801 McDougal et al. Dec 2012 A1
20120331553 Aziz et al. Dec 2012 A1
20130014259 Gribble et al. Jan 2013 A1
20130036472 Aziz Feb 2013 A1
20130047257 Aziz Feb 2013 A1
20130074185 McDougal et al. Mar 2013 A1
20130086684 Mohler Apr 2013 A1
20130097699 Balupari et al. Apr 2013 A1
20130097706 Titonis Apr 2013 A1
20130111587 Goel May 2013 A1
20130117852 Stute May 2013 A1
20130117855 Kim et al. May 2013 A1
20130139264 Brinkley et al. May 2013 A1
20130160125 Likhachev et al. Jun 2013 A1
20130160127 Jeong et al. Jun 2013 A1
20130160130 Mendelev et al. Jun 2013 A1
20130160131 Madou et al. Jun 2013 A1
20130167236 Sick Jun 2013 A1
20130174214 Duncan Jul 2013 A1
20130185789 Hagiwara et al. Jul 2013 A1
20130185795 Winn et al. Jul 2013 A1
20130185798 Saunders et al. Jul 2013 A1
20130191915 Antonakakis et al. Jul 2013 A1
20130196649 Paddon et al. Aug 2013 A1
20130227691 Aziz et al. Aug 2013 A1
20130246370 Bartram et al. Sep 2013 A1
20130247186 LeMasters Sep 2013 A1
20130263260 Mahaffey et al. Oct 2013 A1
20130291109 Staniford et al. Oct 2013 A1
20130298243 Kumar et al. Nov 2013 A1
20130318038 Shiffer et al. Nov 2013 A1
20130318073 Shiffer et al. Nov 2013 A1
20130325791 Shiffer et al. Dec 2013 A1
20130325792 Shiffer et al. Dec 2013 A1
20130325871 Shiffer et al. Dec 2013 A1
20130325872 Shiffer et al. Dec 2013 A1
20140032875 Butler Jan 2014 A1
20140053260 Gupta et al. Feb 2014 A1
20140053261 Gupta et al. Feb 2014 A1
20140130158 Wang et al. May 2014 A1
20140137180 Lukacs et al. May 2014 A1
20140169762 Ryu Jun 2014 A1
20140179360 Jackson et al. Jun 2014 A1
20140181131 Ross Jun 2014 A1
20140189687 Jung et al. Jul 2014 A1
20140189866 Shiffer et al. Jul 2014 A1
20140189882 Jung et al. Jul 2014 A1
20140237600 Silberman et al. Aug 2014 A1
20140280245 Wilson Sep 2014 A1
20140283037 Sikorski et al. Sep 2014 A1
20140283063 Thompson et al. Sep 2014 A1
20140328204 Klotsche et al. Nov 2014 A1
20140337836 Ismael Nov 2014 A1
20140344926 Cunningham et al. Nov 2014 A1
20140351935 Shao et al. Nov 2014 A1
20140380473 Bu et al. Dec 2014 A1
20140380474 Paithane et al. Dec 2014 A1
20150007312 Pidathala et al. Jan 2015 A1
20150096022 Vincent et al. Apr 2015 A1
20150096023 Mesdaq et al. Apr 2015 A1
20150096024 Haq et al. Apr 2015 A1
20150096025 Ismael Apr 2015 A1
20150180886 Staniford et al. Jun 2015 A1
20150186645 Aziz et al. Jul 2015 A1
20150220735 Paithane et al. Aug 2015 A1
20150372980 Eyada Dec 2015 A1
20160044000 Cunningham Feb 2016 A1
20160127393 Aziz et al. May 2016 A1
Foreign Referenced Citations (11)
Number Date Country
2439806 Jan 2008 GB
2490431 Oct 2012 GB
02006928 Jan 2002 WO
0223805 Mar 2002 WO
2007117636 Oct 2007 WO
2008041950 Apr 2008 WO
2011084431 Jul 2011 WO
2011112348 Sep 2011 WO
2012075336 Jun 2012 WO
2012145066 Oct 2012 WO
2013067505 May 2013 WO
Non-Patent Literature Citations (93)
Entry
Margolis, P.E. , “Random House Webster's ‘Computer & Internet Dictionary 3rd Edition’”, ISBN 0375703519, (Dec. 1998).
Moore, D. , et al., “Internet Quarantine: Requirements for Containing Self-Propagating Code”, INFOCOM, vol. 3, (Mar. 30-Apr. 3, 2003), pp. 1901-1910.
Morales, Jose A., et al., ““Analyzing and exploiting network behaviors of malware.””, Security and Privacy in Communication Networks. Springer Berlin Heidelberg, 2010. 20-34.
Mori, Detecting Unknown Computer Viruses, 2004, Springer-Verlag Berlin Heidelberg.
Natvig, Kurt , “SANDBOXII: Internet”, Virus Bulletin Conference, (“Natvig”), (Sep. 2002).
NetBIOS Working Group. Protocol Standard for a NetBIOS Service on a TCP/UDP transport: Concepts and Methods. STD 19, RFC 1001, Mar. 1987.
Newsome, J. , et al., “Dynamic Taint Analysis for Automatic Detection, Analysis, and Signature Generation of Exploits on Commodity Software”, In Proceedings of the 12th Annual Network and Distributed System Security, Symposium (NDSS '05), (Feb. 2005).
Newsome, J. , et al., “Polygraph: Automatically Generating Signatures for Polymorphic Worms”, In Proceedings of the IEEE Symposium on Security and Privacy, (May 2005).
Nojiri, D. , et al., “Cooperation Response Strategies for Large Scale Attack Mitigation”, DARPA Information Survivability Conference and Exposition, vol. 1, (Apr. 22-24, 2003), pp. 293-302.
Oberheide et al., CloudAV.sub.--N-Version Antivirus in the Network Cloud, 17th USENIX Security Symposium USENIX Security '08 Jul. 28-Aug. 1, 2008 San Jose, CA.
Reiner Sailer, Enriquillo Valdez, Trent Jaeger, Roonald Perez, Leendert van Doorn, John Linwood Griffin, Stefan Berger., sHype: Secure Hypervisor Appraoch to Trusted Virtualized Systems (Feb. 2, 2005) (“Sailer”).
Silicon Defense, “Worm Containment in the Internal Network”, (Mar. 2003), pp. 1-25.
Singh, S. , et al., “Automated Worm Fingerprinting”, Proceedings of the ACM/USENIX Symposium on Operating System Design and Implementation, San Francisco, California, (Dec. 2004).
Spitzner, Lance , “Honeypots: Tracking Hackers”, (“Spizner”), (Sep. 17, 2002).
The Sniffers's Guide to Raw Traffic available at: yuba.stanford.edu/.about.casado/pcap/section1.html, (Jan. 6, 2014).
Thomas H. Ptacek, and Timothy N. Newsham , “Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection”, Secure Networks, (“Ptacek”), (Jan. 1998).
U.S. Appl. No. 13/775,166, Final Office Action, dated Oct. 22, 2014.
U.S. Appl. No. 13/775,166, Non-Final Office Action, dated Mar. 21, 2014.
U.S. Appl. No. 13/775,168, Final Office Action, dated Nov. 14, 2014.
U.S. Appl. No. 13/775,168, Non-Final Office Action, dated Jun. 13, 2014.
U.S. Appl. No. 13/775,168, Non-Final Office Action, dated Mar. 16, 2015.
U.S. Appl. No. 13/775,168, Notice of Allowance, dated Aug. 24, 2015.
U.S. Appl. No. 13/775,171, Final Office Action, dated Dec. 12, 2014.
U.S. Appl. No. 13/775,171, Non-Final Office Action, dated Mar. 27, 2014.
U.S. Appl. No. 13/775,172, Final Office Action, dated Dec. 9, 2014.
U.S. Appl. No. 13/775,172, Non-Final Office Action, dated Apr. 22, 2014.
U.S. Appl. No. 13/775,172, Non-Final Office Action, dated Apr. 6, 2017.
U.S. Appl. No. 13/775,169, Non Final Office Action, dated Mar. 12, 2014.
U.S. Appl. No. 13/775,169, filed Feb. 23, 2013 Notice of Allowance dated Jan. 13, 2015.
U.S. Appl. No. 13/775,171, filed Feb. 23, 2013 Non-Final Office Action dated Aug. 5, 2015.
U.S. Appl. No. 14/495,762, filed Sep. 24, 2014 Non-Final Office Action dated Apr. 6, 2015.
U.S. Appl. No. 14/495,762, filed Sep. 24, 2014 Notice of Allowance dated Oct. 2, 2015.
U.S. Appl. No. 14/930,385, filed Nov. 2, 2015 Final Office Action dated Feb. 10, 2017.
U.S. Appl. No. 14/930,385, filed Nov. 2, 2015 Non-Final Office Action dated Jul. 22, 2016.
U.S. Pat. No. 8,171,553 filed Apr. 20, 2006, Inter Parties Review Decision dated Jul. 10, 2015.
U.S. Pat. No. 8,291,499 filed Mar. 16, 2012, Inter Parties Review Decision dated Jul. 10, 2015.
U.S. Appl. No. 13/775,170, filed Feb. 23 2013 Non-Final Office Action dated Jan. 23, 2015.
Venezia, Paul, “NetDetector Captures Intrusions”, InfoWorld Issue 27, (“Venezia”), (Jul. 14, 2003).
Wahid et al., Characterising the Evolution in Scanning Activity of Suspicious Hosts, Oct. 2009, Third International Conference on Network and System Security, pp. 344-350.
Whyte, et al., “DNS-Based Detection of Scanning Worms in an Enterprise Network”, Proceedings of the 12th Annual Network and Distributed System Security Symposium, (Feb. 2005), 15 pages.
Williamson, Matthew M., “Throttling Viruses: Restricting Propagation to Defeat Malicious Mobile Code”, ACSAC Conference, Las Vegas, NV, USA, (Dec. 2002), pp. 1-9.
Yuhei Kawakoya et al: “Memory behavior-based automatic malware unpacking in stealth debugging environment”, Malicious and Unwanted Software (Malware), 2010 5th International Conference on, IEEE, Piscataway, NJ, USA, Oct. 19, 2010, pp. 39-46, XP031833827, ISBN:978-1-4244-8-9353-1.
Zhang et al., The Effects of Threading, Infection Time, and Multiple-Attacker Collaboration on Malware Propagation, Sep. 2009, IEEE 28th International Symposium on Reliable Distributed Systems, pp. 73-82.
“Network Security: NetDetector—Network Intrusion Forensic System (NIFS) Whitepaper”, (“NetDetector Whitepaper”), (2003).
“Packet”, Microsoft Computer Dictionary, Microsoft Press, (Mar. 2002), 1 page.
“When Virtual is Better Than Real”, IEEE Xplore Digital Library, available at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=990073, (Dec. 7, 2013).
Abdullah, et al., Visualizing Network Data for Intrusion Detection, 2005 IEEE Workshop on Information Assurance and Security, pp. 100-108.
Adetoye, Adedayo, et al., “Network Intrusion Detection & Response System”, (“Adetoye”), (Sep. 2003).
AltaVista Advanced Search Results. “attack vector identifier”. http://www.altavista.com/web/results?Itag=ody&pg=aq&aqmode=aqa=Event+Orchestrator . . . , (Accessed on Sep. 15, 2009).
AltaVista Advanced Search Results. “Event Orchestrator”. http://www.altavista.com/web/results?Itag=ody&pg=aq&aqmode=aqa=Event+Orchestrator . . . , (Accessed on Sep. 3, 2009).
Apostolopoulos, George; Hassapis, Constantinos; “V-eM: A cluster of Virtual Machines for Robust, Detailed, and High-Performance Network Emulation”, 14th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, Sep. 11-14, 2006, pp. 117-126.
Aura, Tuomas, “Scanning electronic documents for personally identifiable information”, Proceedings of the 5th ACM workshop on Privacy in electronic society. ACM, 2006.
Baecher, “The Nepenthes Platform: An Efficient Approach to Collect Malware”, Springer-Verlag Berlin Heidelberg, (2006), pp. 165-184.
Baldi, Mario; Risso, Fulvio; “A Framework for Rapid Development and Portable Execution of Packet-Handling Applications”, 5th IEEE International Symposium on Signal Processing and Information Technology, Dec. 21, 2005, pp. 233-238.
Bayer, et al., “Dynamic Analysis of Malicious Code”, J Comput Virol, Springer-Verlag, France., (2006), pp. 67-77.
Boubalos, Chris, “extracting syslog data out of raw pcap dumps, seclists.org, Honeypots mailing list archives”, available at http://seclists.org/honeypots/2003/q2/319 (“Boubalos”), (Jun. 5, 2003).
Chaudet, C., et al., “Optimal Positioning of Active and Passive Monitoring Devices”, International Conference on Emerging Networking Experiments and Technologies, Proceedings of the 2005 ACM Conference on Emerging Network Experiment and Technology, CoNEXT '05, Toulouse, France, (Oct. 2005), pp. 71-82.
Chen, P. M. and Noble, B. D., “When Virtual is Better Than Real”, Department of Electrical Engineering and Computer Science, University of Michigan (“Chen”), (2001).
Cisco “Intrusion Prevention for the Cisco ASA 5500-x Series” Data Sheet (2012).
Cisco, Configuring the Catalyst Switched Port Analyzer (SPAN) (“Cisco”), (1992).
Clark, John, Sylvain Leblanc, and Scott Knight. “Risks associated with USB hardware trojan devices used by insiders.” Systems Conference (SysCon), 2011 IEEE International. IEEE, 2011.
Cohen, M.I., “PyFlag—An advanced network forensic framework”, Digital Investigation 5, Elsevier, (2008), pp. S112-S120.
Costa, M., et al., “Vigilante: End-to-End Containment of Internet Worms”, SOSP '05, Association for Computing Machinery, Inc., Brighton U.K., (Oct. 23-26, 2005).
Crandall, J.R., et al., “Minos: Control Data Attack Prevention Orthogonal to Memory Model”, 37th International Symposium on Microarchitecture, Portland, Oregon, (Dec. 2004).
Deutsch, P., “Zlib compressed data format specification version 3.3”, RFC 1950, (1996).
Distler, “Malware Analysis: An Introduction”, SANS Institute InfoSec Reading Room, SANS Institute, (2007).
Dunlap, George W., et al., “ReVirt: Enabling Intrusion Analysis through Virtual-Machine Logging and Replay”, Proceedings of the 5th Symposium on Operating Systems Design and Implementation, USENIX Association, (“Dunlap”), (Dec. 9, 2002).
Excerpt regarding First Printing Date for Merike Kaeo, Designing Network Security (“Kaeo”), (2005).
Filiol, Eric, et al., “Combinatorial Optimisation of Worm Propagation on an Unknown Network”, International Journal of Computer Science 2.2 (2007).
FireEye Malware Analysis & Exchange Network, Malware Protection System, FireEye Inc., 2010.
FireEye Malware Analysis, Modern Malware Forensics, FireEye Inc., 2010.
FireEye v.6.0 Security Target, pp. 1-35, Version 1.1, FireEye Inc., May 2011.
Gibler, Clint, et al. AndroidLeaks: automatically detecting potential privacy leaks in android applications on a large scale. Springer Berlin Heidelberg, 2012.
Goel, et al., Reconstructing System State for Intrusion Analysis, Apr. 2008 SIGOPS Operating Systems Review, vol. 42 Issue 3, pp. 21-28.
Gregg Keizer: “Microsoft's HoneyMonkeys Show Patching Windows Works”, Aug. 8, 2005, XP055143386, Retrieved from the Internet: URL:http://www.informationweek.com/microsofts-honeymonkeys-show-patching-windows-works/d/d-d/1035069? [retrieved on Jun. 1, 2016].
Heng Yin et al, Panorama: Capturing System-Wide Information Flow for Malware Detection and Analysis, Research Showcase © CMU, Carnegie Mellon University, 2007.
Hjelmvik, Erik, “Passive Network Security Analysis with NetworkMiner”, (IN)Secure, Issue 18, (Oct. 2008), pp. 1-100.
Idika et al., A Survey of Malware Detection Techniques, Feb. 2, 2007, Department of Computer Science, Purdue University.
IEEE Xplore Digital Library Search Results for “detection of unknown computer worms”. http://ieeexplore.ieee.org/searchresult.jsp?SortField=Score&SortOrder=desc&ResultC . . . , (Accessed on Aug. 28, 2009).
Isohara, Takamasa, Keisuke Takemori, and Ayumu Kubota. “Kernel-based behavior analysis for android malware detection.” Computational Intelligence and Security (CIS), 2011 Seventh International Conference on. IEEE, 2011.
Kaeo, Merike , “Designing Network Security”, (“Kaeo”), (Nov. 2003).
Kevin A. Roundy et al: “Hybrid Analysis and Control of Malware”, Sep. 15, 2010, Recent Advances in Intrusion Detection, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 317-338, XP019150454, ISBN: 978-3-642-15511-6.
Kim, H., et al., “Autograph: Toward Automated, Distributed Worm Signature Detection”, Proceedings of the 13th USENIX Security Symposium (Security 2004), San Diego, (Aug. 2004), pp. 271-286.
King, Samuel T., et al., “Operating System Support for Virtual Machines”, (“King”) (2003).
Krasnyansky, Max, et al., Universal TUN/TAP driver, available at https://www.kernel.org/doc/Documentation/networking/tuntap.txt (2002) (“Krasnyansky”).
Kreibich, C., et al., “Honeycomb-Creating Intrusion Detection Signatures Using Honeypots”, 2nd Workshop on Hot Topics in Networks (HotNets-II), Boston, USA, (2003).
Kristoff, J., “Botnets, Detection and Mitigation: DNS-Based Techniques”, NU Security Day, (2005), 23 pages.
Leading Colleges Select FireEye to Stop Malware-Related Data Breaches, FireEye Inc., 2009.
Li et al., A VMM-Based System Call Interposition Framework for Program Monitoring, Dec. 2010, IEEE 16th International Conference on Parallel and Distributed Systems, pp. 706-711.
Liljenstam, Michael, et al., “Simulating Realistic Network Traffic for Worm Warning System Design and Testing”, Institute for Security Technology Studies, Dartmouth College (“Liljenstam”), (Oct. 27, 2003).
Lindorfer, Martina, Clemens Kolbitsch, and Paolo Milani Comparetti. “Detecting environment-sensitive malware.” Recent Advances in Intrusion Detection. Springer Berlin Heidelberg, 2011.
Lok Kwong et al: “DroidScope: Seamlessly Reconstructing the OS and Dalvik Semantic Views for Dynamic Android Malware Analysis”, Aug. 10, 2012, XP055158513, Retrieved from the Internet: URL:https://www.usenix.org/system/files/conference/usenixsecurity12/sec12-final107.pdf [retrieved on Dec. 15, 2014].
Marchette, David J., “Computer Intrusion Detection and Network Monitoring: A Statistical Viewpoint”, (“Marchette”), (2001).
Related Publications (1)
Number Date Country
20180121316 A1 May 2018 US
Continuations (2)
Number Date Country
Parent 14930385 Nov 2015 US
Child 15785208 US
Parent 13775168 Feb 2013 US
Child 14930385 US