The present invention relates to the field of computer science and aims at enhancing the security of computer programs, which have access to computing resources of the computing device on which they are executed. In particular, the invention relates to sandboxing techniques, in which a computer program that is executed on a computing device has to comply with a plurality of execution rules.
NOTE: Throughout this specification, the bracketed numbers refer to the correspondingly listed references in Appendix A attached hereto.
The idea of restricting program operation to only the information and resources necessary to complete its operation goes back to the 1970s. As the principle of least privilege [27], it has influenced the design of computer systems, operating systems, and information systems to improve stability, safety, security, and privacy. On the Android™ platform, least privilege is realized through sandboxing: First, no application can access the data of other applications. Second, access to shared user resources (such as location, contacts, etc.) is available through dedicated APIs only, which are guarded by permissions. Each application declares the permissions it needs; access to other APIs and resources is blocked by the operating system. In a landmark paper, Felt et al. [10] systematically tested Android™ APIs to determine which permissions are required by which API. Besides producing a map between APIs and permissions, they also found that 33% of the Android™ apps investigated were overprivileged, that is, they requested more permissions than their APIs would actually require. PScout [2] uses a combination of static analysis and fuzz testing to extend this mapping to undocumented APIs, and found that 22% of permission requests were unnecessary if app developers confined themselves to documented APIs.
Android™ apps request more permissions than they need. On Android™, permissions have to be acknowledged by a user upon app installation; the Google™ play store also lists each app with the requested permissions. However, in a survey of 308 Android™ users, Felt et al. [11] found that only 17% paid attention to permissions during installation, and only 3% of all respondents could correctly answer three questions regarding permissions. This is all the more worrying as in an analysis of 75,000 Android™ apps [32], 46% of all apps asked for the phone state permission, allowing apps to access (and potentially leak) the user's SIM card information, including the unique IMEI number.
Android™ permission warnings do not help most users make correct security decisions.
In contrast to specifying rules and permissions, the alternative of extracting them from an existing system has always been compelling. In a short paper comparing the permission systems of mobile platforms [1], Au et al. call for “a tool that can automatically determine the permissions an application needs.” This question generalizes into “What does an application do?”, which is the general problem of program analysis.
Program analysis falls into two categories: static analysis of program code and dynamic analysis of executions. Static code analysis sets an upper bound on what a program can do: If static analysis determines some behavior is impossible, it can be safely excluded. The COPES tool [4] uses static analysis to eliminate unneeded permissions for a given Android™ app.
The challenge of static analysis is overapproximation: Because of the halting problem, the analysis must frequently assume that more behaviors are possible than actually would be. Furthermore, if code is deobfuscated, decrypted, interpreted, or downloaded at runtime only, all of which are common in the Android™ world, it will be inaccessible for static analysis.
Static analysis produces overapproximation. Dynamic analysis works on actual executions, and thus is not limited by code properties. In terms of program behavior, it sets a lower bound: Any (benign) behavior seen in past executions should be allowed in the future, too. Consequently, given a set of executions, one can learn program behavior from these and infer security policies. In their seminal 1996 paper [12], Forrest et al. learned “normal” behavior as short-range correlations in the system calls of a UNIX process, and were able to successfully detect common intrusions on the sendmail and lpr programs. Since then, a number of techniques have been used for automatic intrusion detection, including statistical profiling [12], neural networks [16], finite state automata [28], support vector machines [19], and rule-based systems [20]. Chandola et al. [8] provide a detailed survey of the techniques used.
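By way of illustration only, the following Java sketch captures the gist of such a prior-art system-call profile; all identifiers are assumptions made for this description and are not taken from the cited works.

```java
import java.util.*;

/** Illustrative sketch only of the prior-art technique of Forrest et al. [12]:
 *  "normal" behavior is the set of short windows (n-grams) of system calls seen
 *  in training traces; a new trace is suspicious if it contains unseen windows.
 *  Class and method names are assumptions made for this illustration. */
public class SystemCallProfile {
    private final int n;                                   // window length, e.g. 6
    private final Set<List<String>> normalWindows = new HashSet<>();

    public SystemCallProfile(int n) { this.n = n; }

    /** Training phase: record every window of n consecutive system calls. */
    public void learn(List<String> trace) {
        for (int i = 0; i + n <= trace.size(); i++) {
            normalWindows.add(new ArrayList<>(trace.subList(i, i + n)));
        }
    }

    /** Detection phase: count windows never observed during training. */
    public int mismatches(List<String> trace) {
        int misses = 0;
        for (int i = 0; i + n <= trace.size(); i++) {
            if (!normalWindows.contains(trace.subList(i, i + n))) misses++;
        }
        return misses;
    }
}
```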
Since Android™ programs come in interpretable bytecode, the platform offers several opportunities to monitor dynamic behavior, including system calls (AASandbox [6]), data flow (TAINTDROID [9]), traces (CROWDROID [7]), or CPU and network activity (ANDROMALY [29]); all these platforms can be used both to monitor application behavior (and report results to the user) and to detect malicious behavior (as a violation of explicit rules or as determined by a trained classifier).
Neuner et al. [25] provide a comprehensive survey of available techniques. The common problem of all these approaches is the fundamental limitation of dynamic analysis, namely incompleteness: If some behavior has not been observed so far, there is no guarantee that it will not occur in the future. Given the high cost of false alarms, this implies that a sufficiently large set of executions must be available that covers known behaviors. Such a set can either come from tests (which then typically would be written or conducted at substantial effort), or from production (which then requires a training phase, possibly involving classification by humans). In the domain of network intrusion detection, the large variation of “benign” traffic in operational “real world” settings is seen as a prime reason why machine learning is rarely employed in practice [31]. Dynamic analysis requires sufficiently many “normal” executions to be trained with.
It is an object of the present invention to alleviate at least some of the problems that exist in the prior art.
In accordance with the invention, a method of analyzing the behavior of a computer program is provided. The computer program is executable in an operating system by processing means of a computing device. The execution of predetermined parts of the computer program is triggered by events of at least one interface of the computer program, and leads the computer program to request access to computing resources, which are accessible by the computing device. The method comprises the steps of:
Steps a-d can be called “mining”, as the method explores and identifies the behavior of the computer program as it is executed on a computing device. The steps define a learning process of the method according to the invention.
The method further comprises the subsequent steps:
The processing means can for example comprise a central processing unit, CPU, which is programmable to execute the method steps in accordance with the invention.
The first behavior can for example be the wanted or expected behavior of the computer program, whereas the second behavior can for example be a malicious behavior of the computer program, caused for example by the infection of the computer program by a computer virus.
A partial match can for example be identified, in the case of a binary description, if a subset of the digits forming the description of the identified computing resources matches the corresponding digits of the description of the associated, or learned, computing resources that correspond to the observed event. Descriptions can be automatically generated using known techniques.
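As a purely illustrative sketch, and assuming that descriptions are encoded as 64-bit masks with one bit per resource property, a partial match could be determined as follows; the identifiers are assumptions made for this description and are not claim language.

```java
/** Illustrative sketch only: a resource description is assumed here to be a
 *  64-bit mask, one bit per resource property. A partial match is declared when
 *  at least a chosen number of bit positions agree; an exact match requires all
 *  bit positions to agree. */
public class DescriptionMatcher {

    /** Partial match: at least minMatchingBits of the 64 bit positions agree. */
    public static boolean partialMatch(long observed, long learned, int minMatchingBits) {
        long agreement = ~(observed ^ learned);   // 1-bits where both descriptions agree
        return Long.bitCount(agreement) >= minMatchingBits;
    }

    /** Exact match: every bit position agrees. */
    public static boolean exactMatch(long observed, long learned) {
        return observed == learned;
    }
}
```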
A description can be a textual description readable and understandable by a human being.
Steps e-i can be called “sandboxing”. The behavior of a computer program is observed and compared with the behavior that has been previously mined using steps a-d. A deviation of the observed behavior from the learned behavior is identified by the method.
In various instances, in step (h) the method can conclude that the computer program exhibits the first behavior only if the description of the identified computing resources matches one of the descriptions associated with the event in the memory element.
Step (i) can further comprise blocking the requested access of the computer program to the identified computing resources.
Further, step (i) can, in various instances, comprise updating the computing resources associated with the event in the memory element using the newly identified computing resources. The updating can be conditional on the approval of the user of the computer program, who is presented with a corresponding input query.
The input provided in step (e) can, in various instances, be a user input.
The computing resources can, in various instances, comprise any of a file system, file system descriptor, storage means, networking means, imaging means, processing means, display means or printing means.
In various instances, the interfaces can comprise a Graphical User Interface, GUI, and the events can advantageously comprise any of a mouse-click event, a text-entry event, a key-stroke event, a choice event, or any combination thereof.
The interfaces can further, in various instances, comprise a networking interface and/or a sensor interface.
The identification of computing resources can, in various instances, comprise identifying at least one call by the computer program to an Application Programming Interface, API, routine of the operating system, the routine providing access to a computing resource. The operating system can for example be the Android™ operating system or any other operating system. The method can alternatively use other known means for identifying an access request to a computing resource.
In step (b) the generated events can, in various instances, be randomly generated.
In various instances, the description of the computing resources, which is stored in the memory element in step (d), can comprise a binary or a textual representation.
According to a further aspect of the invention, there is provided a computer program comprising computer readable code means, which when run on a computer, causes the computer to carry out the method according to the invention.
According to another aspect of the invention, there is provided a computer program product comprising a computer-readable medium on which the computer program according to the invention is stored.
According to yet another aspect of the invention, a computer capable of carrying out the method according to the invention is provided.
According to a different aspect of the invention, there is provided a computing device comprising processing means and a memory element, the processing means being configured to execute the method according to the invention. The device can comprise first and second processing means for executing the computer program that is being monitored and for executing the method steps respectively. Alternatively, the computer program and the method according to the invention can be executed by the same processing means. The processing means can, in various instances, comprise a central processing unit, CPU, while the memory element may comprise any known memory, of volatile or persistent type, for example a hard disk drive, a solid state drive, a random access memory or a database to which the processing means have read/write access. The database may or may not be physically collocated with the processing means. The database can be accessible to the processing means using a data communication channel.
In various embodiments, the invention provides the first approach to leverage test generation to automatically extract sandbox rules from general-purpose applications. The approach has a number of advantages when compared to prior art solutions.
Preventing behavior changes. The mined sandbox detects behavior not seen during mining, reducing the attack surface for infections as well as for latent malicious behavior that otherwise would activate later.
Fully automatic. As soon as an interface for automatic test generation is available, such as a GUI, sandbox mining becomes fully automatic, too. Developers can easily mine and re-mine sandboxes at any time.
No training. In contrast to anomaly detection systems, no training is required in production, as the “normal” behavior is already explored during testing. If an app, such as SNAPCHAT™, accesses an external account, its login and password must be provided.
Detailed analysis. Mined sandboxes provide a much finer level of detail than what would normally be specified or documented in practice. As they refer to user resources and user actions, they are readable and understandable even by non-experts.
Adverse and obscure code. In contrast to static code analysis, test generation and monitoring are neither challenged by large programs nor thwarted by code that would be deobfuscated, decrypted, interpreted, or downloaded at runtime only.
Enforced completeness. The key issue with testing is that it is incomplete by construction. However, by disallowing behaviors not seen during testing, one can ensure that the behavior seen during testing is indeed all there is and ever will be.
Certification. Anyone can mine a sandbox for a given app and compare its rules against the sandboxes provided by vendors or app stores, or those of previous versions.
Further advantages of the invention are described in what follows based on exemplary embodiments of the invention, and with reference to the accompanying figures.
These steps describe the learning or mining of the behavior of the computer program 130. The program can be any computer program that is executable on the device 100 and associated operating system. As steps (c)-(d) are performed for each generated event, after completion of these steps, the memory element 120 holds for each generated event a set of observed resources Rlearn, to which the computer program has requested access as a consequence of the corresponding event. In the illustrated example, the set of observed resources Rlearn for event E1 comprises resources R1, R3, . . . .
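As a purely illustrative sketch, the learned association between generated events and the observed resources Rlearn could be held in a map structure such as the following; the class and its identifiers are assumptions introduced for this description.

```java
import java.util.*;

/** Illustrative sketch only: the association held in the memory element 120
 *  between each generated event and the set of observed resources Rlearn. */
public class LearnedBehavior {
    // event identifier -> set of resource descriptions observed for that event
    private final Map<String, Set<String>> rLearn = new HashMap<>();

    /** Mining (steps a-d): record that the event led to a request for the resource. */
    public void record(String event, String resource) {
        rLearn.computeIfAbsent(event, e -> new HashSet<>()).add(resource);
    }

    /** Sandboxing (steps e-i): is the requested resource covered by the learned set? */
    public boolean isLearned(String event, String resource) {
        return rLearn.getOrDefault(event, Collections.emptySet()).contains(resource);
    }
}
```

For the illustrated example, calling record("E1", "R1") and record("E1", "R3") would populate the set held in the memory element for event E1.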
Advantageously, the subsequent steps are defined as
Steps (e)-(i) are sandboxing steps, wherein the sandbox is provided by the behavior which has been learned by the method during earlier steps (a)-(d).
The following description provides an exemplary embodiment of the above method steps and illustrates several applications in which the method according to the invention finds its particular use. The described embodiment provides insight to the skilled person about how the method according to the invention can be implemented. However, the provided description of the embodiment does not limit the scope of the invention, which is defined by the appended claims.
In order to protect the execution of a computer program against malicious behavior, one way is to place the program in a sandbox, restraining its access to potentially sensitive resources and services. On the Android™ platform, for instance, developers have to declare that an app needs access to specific resources. The popular SNAPCHAT™ picture messaging application, for instance, requires access to the internet, the camera, and the user's contacts; these permissions would be reviewed and acknowledged by the user upon download and install. If an application fails to declare a permission, the operating system denies access to the respective resource; if the SNAPCHAT™ app attempted to access e-mail or text messages, the respective API call would be denied by the Android™ system.
While such permissions are transparent to users, they can be too coarse-grained to prevent misuse. One common attack vector in Android™ apps is to have an app stealthily send text messages to premium numbers. SNAPCHAT™ can send a text message to validate the user's phone number, and thus requires permission to send text messages. Consequently, an attacker could take the original SNAPCHAT™ app, add code to it to stealthily send out text messages, and replace the original with the malicious variant; the new malicious behavior would still be within the bounds mandated by the sandbox. Likewise, the sandbox would not prevent SNAPCHAT™ from continuously monitoring the audio and the current location, and sending all of this information over the internet—simply because the set of permissions allows it. The issue could be addressed by tightening the sandbox—for instance, by constraining the conditions under which the app can send the message. But then, someone has to specify and validate these rules—and repeat this with each change to the app, as a sandbox that is too tight could disable important functionality.
The present invention uses sandbox mining, a technique to automatically extract sandbox rules from a given program.
The core idea of the invention, illustrated in
To illustrate how the invention works in practice, a sandbox is mined from the SNAPCHAT™ example application. During systematic GUI testing, the mining phase determines that SNAPCHAT™ requires access to the camera, location, internet, and so on. These accesses are associated by the method according to the invention with the event that triggers them—that is, the individual GUI elements. Thus, only the “Send SMS” GUI button used to authenticate the phone number during setup would still actually be allowed to send a text message. The resulting sandbox then protects the user against unexpected behavior changes. One can, for example, assume a malicious SNAPCHAT™ variant which sends out text messages to premium numbers. Replacing the original with the malicious variant (say, as part of an attack) would trigger the sandbox, as sending text messages in the background did not occur during mining. Even if an app like SNAPCHAT™ were malicious in the first place, and placed in an app store, the attacker would face a dilemma. If the app sends out text messages in the background right after the start, this would be detected in the mining phase, and thus made explicit as a sandbox rule permitting the behavior; such a rule or description (“This app can send SMS messages to 1-900-PREMIUM in the background”) would raise suspicions with any user. If, however, the app stays silent during mining, it would be completely disallowed from sending text messages in production.
The advantages of the invention can be tuned by the tightness of the sandbox rules, which depends on the number of rules learned in the first phase.
To answer these questions, a prototype implementation of sandbox mining for the ANDROID™ platform has been developed. The so-called BOXMATE tool combines state-of-the-art tools for test generation and monitoring in a single, user-friendly package. In the remainder of this description, the BOXMATE embodiment is used to illustrate and evaluate the concept of sandbox mining in accordance with the present invention.
While the embodiment named BOXMATE is used to describe the concepts of the present invention, the invention is in no way limited to this exemplary embodiment. The skilled person will know that the described invention can be readily implemented on other operating system platforms based on the information provided by the present description.
Rather than writing tests or collecting executions during production, one can also generate them. In the security domain, the main purpose of such generated executions is to find bugs. Introduced by Miller et al. [23], fuzz testing automatically exercises sensitive tools and APIs with random inputs; no interaction or annotation is required. Today, fuzz testing is one of the prime methods to find vulnerabilities: The Microsoft™ SAGE fuzzing tool [13], for instance, “has saved millions of dollars to Microsoft™, as well as to the world in time and energy, by avoiding expensive security patches to more than 1 billion PCs.” [14].
For the Android™ platform, recent years have seen a rise of powerful test generators exercising Android™ apps. MONKEY [24] is a simple fuzz tester, generating random streams of user events such as clicks, touches, or gestures; although typically used as a robustness tester, it has been used to find GUI bugs [18] and security bugs [22]. While MONKEY generates pure random events, the DYNODROID tool [21] focuses on those events handled by an app, achieving higher coverage while needing only 1/20 of the events. Given an app, all these tools run fully automatically; no model, code, or annotation is required.
The most recent ANDROID™ test generators have achieved high levels of robustness: PUMA [17] has run dynamic analysis on 3,600 apps from the Google™ Play store, translating Dalvik to Java bytecode and back; the ANDLANTIS system [5] is reported to be able to process over 3,000 Android™ applications per hour. The aim of these systems is to apply dynamic analysis to several applications, summarizing data and possibly detecting outliers in terms of dynamic behavior.
All these testing tools still share the fundamental limitation of execution analysis: If a behavior has not been found during testing, there is no guarantee it will not occur in the future. Attackers can easily exploit this by making malicious behavior latent: For instance, the previously described hypothetical malicious SNAPCHAT™ variant could start sending malicious text messages only after some time, or in a specific network, or when no dynamic analysis tool is running, each of which would defeat observation during testing.
Testing alone cannot guarantee the absence of malicious behavior.
Program analysis, sandboxing, and test generation are all mature technologies that are sufficiently robust to be applied on a large scale. However, each of them has fundamental limitations—sandboxes need rules, dynamic analysis needs executions, and testing does not provide guarantees. Combining the three, as proposed in the present invention, however, not only mitigates these weaknesses—it even turns them into a strength. The argument is as follows: With modern test generators, one can generate as many executions as needed. These executions can feed dynamic analysis, providing and summarizing insights into what happens in these executions. By construction, these insights are incomplete, and other (in particular malicious) behavior is still possible. The key idea of this invention is to turn the incompleteness of dynamic analysis into a guarantee—namely by having a sandbox enforce that anything not seen yet will not happen. To the best of the inventor's knowledge, this is the first attempt to bring together test generation, dynamic analysis, and sandboxing; their combined strength is leveraged using the present invention.
As discussed in Section 2.1, a number of test generators are now available for the ANDROID™ platform. However, the setting in accordance with various embodiments of the invention differs in two points from traditional test generation, which caused the inventors to create their own tool.
Testing for normality. Traditional testing tools focus on uncovering bugs; and thus, they strive to cover as many possible behaviors as feasible, in the hope of detecting a defect. In the setting considered in the framework of the present invention, the purpose of a test generator is also to cover as many behaviors as possible; however, rather than trying to detect bugs, the aim is to explore normal behavior; the complement, abnormal behavior, would later be detected by the sandbox. Consequently, the implemented DROIDMATE test generator focuses on those user interactions that are easiest to reach, assuming that these lead to the most frequent (and thus “normal”) actions.
Third-party testing. Another assumption frequently made by testing tools is that it is the developer who tests; and thus, a certain amount of instrumentation (possibly requiring source code or byte code conversion [17]) or system modification (say, special kernels [21] or root access) could be required. In the setting of the invention, any user can generate tests for any third-party application binary on an unmodified device. DROIDMATE fulfills all these requirements.
In what follows, DROIDMATE's operation is described. Conceptually, DROIDMATE generates tests by interacting with graphical user interface elements (widgets) of the Application under Test (AuT). To this end, DROIDMATE makes use of UI AUTOMATOR [33], a recent framework introduced in ANDROID™ 4.1. At runtime, DROIDMATE extracts the set of currently visible GUI elements, and then interacts with them using UI AUTOMATOR. DROIDMATE starts the exploration by installing on an ANDROID™ device an .apk file containing the AuT and then launching its launchable activity through the ANDROID™ Debug Bridge (ADB), available in the Android™ SDK. From a user's perspective, all this is transparent; the user only has to turn on developer mode (a standard ANDROID™ setting) and can then launch DROIDMATE from a connected PC on a given app. During start, and then again after each generated interaction, DROIDMATE monitors the behavior of the AuT as far as sensitive APIs are concerned. Specifically, DROIDMATE monitors the sensitive APIs called, their security-relevant parameter values (e.g. ContentProvider URIs) and call stack traces, using the monitoring techniques discussed in Section 4. All interactions conducted up to this point, as well as the screens seen during exploration and the monitored values, can then be used by an exploration strategy to decide which GUI element to interact with next or whether to terminate the exploration. The data from the exploration is sufficient to replay the test, either manually or automatically.
In various embodiments, the exploration strategy is simple: one randomly clicks on GUI elements that are visible and set as “clickable” or “checkable” at a given point in time. This includes all buttons and all links. After a predefined number n of exploration actions, DROIDMATE resets the application (i.e., quits it, forcefully if needed), and starts anew. If n is set to 10 interactions (plus possible initial interactions for logging in and such), the application is quit and restarted after 10 clicks. This strategy avoids getting stuck in complicated dialog flows, and favors those GUI elements that are quickly reachable after start. DROIDMATE also resets the application if it can no longer interact with it—that is, when it quits, crashes, hangs, or starts another application.
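For illustration only, the following sketch summarizes this exploration loop; the Device interface is a hypothetical stand-in for the UI AUTOMATOR-based layer and is not part of any SDK or of the actual DROIDMATE implementation.

```java
import java.util.*;

/** Illustrative sketch only of the exploration strategy described above. */
public class RandomExploration {

    interface Device {
        List<String> visibleClickableWidgets();   // ids of clickable/checkable elements
        void click(String widgetId);              // interact; monitored API calls are recorded
        void resetApp();                          // quit (forcefully if needed) and restart
        boolean appResponsive();                  // false if the app quit, crashed, or hangs
    }

    private final Random random = new Random();

    public void explore(Device device, int actionsPerRun, int runs) {
        for (int run = 0; run < runs; run++) {
            device.resetApp();
            for (int action = 0; action < actionsPerRun; action++) {   // e.g. n = 10
                if (!device.appResponsive()) break;                    // reset early
                List<String> widgets = device.visibleClickableWidgets();
                if (widgets.isEmpty()) break;
                device.click(widgets.get(random.nextInt(widgets.size())));
            }
        }
    }
}
```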
As an example of how DROIDMATE explores application behavior, the SNAPCHAT™ application can be considered again.
Even after running DROIDMATE for several hours, no further sensitive APIs were used. So are these thirteen APIs really all the sensitive APIs accessed? This is precisely the problem of testing, which gives no guarantee that everything has been seen; and this is why the invention uses sandboxing to exclude other, unseen behavior.
List of the thirteen APIs used by SNAPCHAT™ discovered by DROIDMATE, and the buttons that first trigger them:
login button:
1 android.net.ConnectivityManager.getActiveNetworkInfo()
2 org.apache.http.impl.client.AbstractHttpClient.execute()
3 java.net.Socket()
4 android.hardware.Camera.open()
5 android.location.LocationManager.getLastKnownLocation()
login_username_email:
6 java.net.URL.openConnection()
7 java.net.URLConnection()
8 java.net.Socket()
camera_take_snap button:
9 android.location.LocationManager.isProviderEnabled()
picture_save_pic:
10 android.content.ContentResolver.insert()
    uri=“content://media/external/images/media”
11 android.content.ContentResolver.query()
    uri=“content://media/external/images/thumbnails”
12 android.content.ContentResolver.openFileDescriptor()
    uri=“content://media/external/images/thumbnails/<n>”
13 android.content.ContentResolver.insert()
    uri=“content://media/external/images/thumbnails”
Besides the test generator DROIDMATE, the second component used to evaluate the method according to the invention is the BOXMATE component. It implements the sandbox mechanism itself, monitoring (and possibly preventing) program behavior. Just as with test generation, the invention provides a technique that allows any user to handle any third-party application binary on an unmodified device. To this end, the APPGUARD [3] approach by Backes et al. has been followed.
APPGUARD is a fine-grained policy enforcement framework for untrusted ANDROID™ applications. It takes an untrusted app and user-defined security policies as input and embeds the security monitor into the untrusted app, thereby delivering a secured self-monitoring app. Technically, APPGUARD is built upon callee-site inline reference monitoring (IRM). The key idea of IRM is to redirect method calls to the embedded security monitor and to check whether executing the call is allowed by the security policy. Concretely, IRM diverts control flow towards the security monitor by modifying references to security-relevant methods in the Dalvik Virtual Machine's internal bytecode representation [34].
As the APPGUARD source code is not publicly available, BOXMATE implements APPGUARD-style IRM on Android™, monitoring all calls to sensitive APIs. While the more sophisticated APPGUARD features, such as its automata-based security policies, its protection against forceful extraction of stored secrets, or its interactive interface, were not implemented, these features could easily be added by integrating the full APPGUARD approach into BOXMATE.
The BOXMATE sandbox works in two modes. During mining, it records all calls to sensitive APIs including their description; as discussed in Section 3, this recording includes the current call stack as well as security-relevant parameter values. During enforcement, it checks whether the API call is allowed by the sandbox rules; if not, it can either have the call return a mock object (simulating the absence of contacts, locations, etc.), or ask the user for permission, naming the API and possible relevant arguments. If the user declines permission, the call again fails. In APPGUARD, executions only incur a very low overhead (1-21%) for calls to a sensitive method [3]. During test runs, the inventors were not able to measure any impact on the overall runtime either. Therefore, the BOXMATE sandbox can easily be used in production.
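As a purely illustrative sketch, the following shows how such a monitor, invoked from call sites redirected by inline reference monitoring, could record calls during mining and check them during enforcement; names and signatures are assumptions made for this description and do not reflect the actual BOXMATE code.

```java
import java.util.*;

/** Illustrative sketch only: a monitor invoked from call sites that inline
 *  reference monitoring has redirected to it. */
public class SandboxMonitor {
    public enum Mode { MINING, ENFORCEMENT }

    private Mode mode = Mode.MINING;
    private final Set<String> minedRules = new HashSet<>();  // API name plus relevant arguments

    public void setMode(Mode mode) { this.mode = mode; }

    /** Called instead of the sensitive API; returns true if the original call may proceed. */
    public boolean checkCall(String api, String relevantArgs) {
        String description = api + "(" + relevantArgs + ")";
        if (mode == Mode.MINING) {
            minedRules.add(description);          // mining: record the call as permitted
            return true;
        }
        if (minedRules.contains(description)) {
            return true;                          // enforcement: covered by the mined rules
        }
        return askUser(description);              // otherwise ask; declining fails the call
    }

    private boolean askUser(String description) {
        // Placeholder: a real monitor would prompt the user, naming the API and its
        // relevant arguments, or have the call return a mock object instead.
        return false;
    }
}
```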
As an example of how the BOXMATE sandbox operates, again consider the SNAPCHAT™ saturation curve in
The saturation curve in
However, if SNAPCHAT™ behavior is explored for more than six minutes, DROIDMATE eventually determines that SNAPCHAT™ can access the image library; consequently, the sandbox will allow this in the future. This avoids a false alarm; however, it also brings the risk that a potentially malicious SNAPCHAT™ variant could now easily store compromising images in the background—and the resulting sandbox would completely miss this behavior. This duality of false alarms (false positives) vs. missed malicious behavior (false negatives) is a common problem in automatic classification. In the described setting, a given behavior can fall into one of four categories, illustrated in
To address the issue of sandboxes that can be too coarse, means to provide an even finer-grained sandbox have been explored. Such a sandbox provides permissions not so much for the app as a whole, but rather to individual features of the app.
There are several ways to decompose a program into individual features. One could restrict resource access only to specific program functions, data or control flows, or conditions as they arise during execution. As it is a desirable goal for the resulting access rules/descriptions to be understandable by regular users, the principle of User-Driven Access Control [26, 30] has been adopted, namely tying access to user-owned resources to user actions in the context of an application. Specifically, during mining, the API usage of an application is associated with the user action that triggered it; the resulting sandbox then only grants access if the same user action was done. Applied to the previously described SNAPCHAT™ example, this means that its “Save image” button, which requires adding an image, is still allowed to do so, whereas all other GUI elements and background processes would not.
To implement user-driven access control, all pairs (e, m) of a GUI element e and a description m of a sensitive API triggered by activating e are recorded and stored. During mining, these pairs are all saved; during sandboxing, a call to m is only permitted if the associated GUI element e was indeed triggered.
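By way of illustration only, this pair-based check could be realized as in the following sketch, under the assumption that GUI elements and sensitive APIs are identified by strings; the class and its identifiers are not part of the claimed method.

```java
import java.util.*;

/** Illustrative sketch only of the pair-based check described above: a sensitive
 *  API m is permitted only if the GUI element e currently being handled was
 *  paired with m during mining. */
public class UserDrivenPolicy {
    private final Set<String> minedPairs = new HashSet<>();   // mined (e, m) pairs
    private String currentElement = "<background>";           // no GUI element active

    /** The GUI layer reports which element the user has just activated. */
    public void onGuiEvent(String elementId) { currentElement = elementId; }

    /** Mining: remember that activating the current element triggered this API. */
    public void record(String sensitiveApi) {
        minedPairs.add(currentElement + " -> " + sensitiveApi);
    }

    /** Sandboxing: permit the call only for the GUI element it was mined with. */
    public boolean isPermitted(String sensitiveApi) {
        return minedPairs.contains(currentElement + " -> " + sensitiveApi);
    }
}
```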
As discussed in Section 5, a tighter sandbox can reduce the risk of false negatives without producing too many false positives. Besides a sandbox that is too coarse, a second risk of false negatives is that the testing process can simply mine malicious behavior without recognizing it as such and thus treat it as permitted. As an example, consider an application that happily tracks the user's location and sends it to some server; BOXMATE will happily mine this as normal behavior and its sandbox will permit it in the future, too.
BOXMATE can detect and prevent behavior changes, but in the absence of an external specification, it cannot know whether behavior is benign or malicious—and an app that constantly tracks a location can be used for either purpose. However, the sandboxes as mined by BOXMATE can assist in several well-established techniques to assess behavior and establish trust. In particular:
Checking Behavior. Anyone can mine a sandbox from a given app, checking which APIs are being used by which functionality; this alone already gives a nice overview about what the app does and why. Since these rules come from concrete executions, one could easily assess concrete resource identifiers, such as file names, host names, or URLs accessed.
Comparing and Certifying Sandboxes. As users and experts alike can mine sandboxes, they can also publish and compare their findings. This allows for independent certification and revalidation schemes, as well as trust networks. In various embodiments of the invention, a shared sandbox repository can be provided, where users can reuse and exchange sandboxes, compare sandbox rules/descriptions, establish trust schemes, and work together in collaboratively mining and merging sandboxes, adopting the best practices from open source software development. Again, anything not detected will automatically be prohibited by the sandbox.
Mining Normal Behavior. Finally, the approach according to the invention has been designed to be easily applicable to arbitrary binaries. This allows for automatically assessing large sets of apps, extracting rules of normal behavior that can even be tied to the app description [15].
A false positive occurs if, during normal interaction, users need to confirm that some (benign) API call should take place. In the described setting, this translates into an API called during user interaction, but not during mining. For an assessment, one thus needs to know or define what “normal” user interaction looks like, and which APIs would be accessed with it. To define “normal” user interaction, manually written automated tests have been used for all applications considered, with the aim of having them cover as much behavior as possible. Tests reflect typical use cases; in SNAPCHAT™, for instance, these would be use cases such as starting the app, selecting a contact, sending a picture to the contact, or sending a video. These tests were written using the same UI AUTOMATOR [33] framework that DROIDMATE uses, to allow for independent assessment and comparison.
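As an illustration only, such a manually written use-case test could look as in the following sketch, which uses the UI AUTOMATOR API; the resource identifiers are hypothetical, and the exact package names depend on the UI AUTOMATOR version used.

```java
import androidx.test.platform.app.InstrumentationRegistry;
import androidx.test.uiautomator.UiDevice;
import androidx.test.uiautomator.UiObjectNotFoundException;
import androidx.test.uiautomator.UiSelector;
import org.junit.Test;

/** Illustrative sketch only of a manually written use-case test in the style
 *  described above; the resource ids are hypothetical. */
public class SendPictureUseCaseTest {
    @Test
    public void takeAndSendPicture() throws UiObjectNotFoundException {
        UiDevice device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation());
        // Take a picture via the (hypothetical) camera button of the app under test.
        device.findObject(new UiSelector().resourceIdMatches(".*camera_take_snap")).click();
        // Select a contact and send the picture.
        device.findObject(new UiSelector().resourceIdMatches(".*send_to")).click();
        device.findObject(new UiSelector().text("Send")).click();
    }
}
```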
The purpose of testing always has been to detect abnormal behavior. In accordance with the invention, testing is given a new purpose, namely to extract normal behavior—a task that testing arguably is much better suited to, and even more so in the security domain. By enforcing the exclusion of behavior not seen during testing, the incompleteness of testing is turned into a guarantee that bad things not seen so far cannot happen. This guarantee proved to work well in practice.
This work was funded by a European Research Council (ERC) Advanced Grant “SPECMATE—Specification Mining and Testing”.
The present invention is the US national stage under 35 U.S.C. §371 of International Application No. PCT/EP2016/053276, which was filed on Feb. 16, 2016, and which claims the priority of application LU 92657 filed on Feb. 16, 2015, the contents of which (text, drawings and claims) are incorporated herein by reference in their entirety.