1. Field of the Invention
The present invention relates to computers and computer network security. More specifically, it relates to ensuring that secure services on mobile devices are protected from hackers and malware when apps and other software execute in unprotected areas of the devices.
2. Description of the Related Art
As the number of mobile devices grows and their use becomes more widespread, security on such devices is becoming increasingly important. Smartphones and tablets are being used to perform more functions, such as making purchases, and the operating systems on these devices are becoming richer and more sophisticated, which also makes them more vulnerable to hackers. Rich operating systems, such as Android or iOS, have millions of lines of code and are not entirely secure or trusted. Hackers know where the weaknesses are and devise ways to root or attack the operating system which, in turn, can cause more secure or trusted modules in the device to perform unwanted activities, for example, making unauthorized purchases using an electronic wallet (“eWallet”) type service, among other activities.
There are protocols and systems in place for ensuring that secure modules in mobile devices, such as the secure operating system and secure services, are well protected. For example, the ARM Trust Zone model ensures that the near-field communications (NFC) chip in a phone or device cannot be cloned and that the private key in the NFC chip is entirely secure from hacking. However, the secure operating system may still take instructions from modules or code in the unsecured or untrusted operating system, such as the browser. So, while the secure modules, services, and chips are themselves generally safe from hacking, there are still ways to send unauthorized (i.e., hacked) instructions to these modules without their being aware of it; that is, it is still possible to hack or root the device by exploiting vulnerabilities in the untrusted and unsecured components and domains of the device.
One aspect of the present invention describes a method of disabling a secure service on a mobile device when abnormal behavior is detected in an operating system of the device, the operating system being the untrusted space or domain on the device. In one embodiment, an app executes in an operating system or in another untrusted domain in the mobile device. Functions in the operating system on the device are monitored, and abnormal or rooted behavior is detected in the operating system. An alert signal is transmitted to a secure attestation module. Secure services are then disabled on the device; the extent of the disabling depends on the device type and the degree of attack. In one embodiment, the disabling is performed by the attestation module acting on the device hardware.
In other embodiments, the monitoring is performed using a special code monitor that is in communication with the secure attestation module. An NFC chip and an electronic wallet service are disabled if it is detected that the electronic wallet service was used to make an unauthorized purchase. In one embodiment, the disabling is caused by an attestation module. The secure services on the mobile device, such as a smartphone, may include electronic wallet services, the display, enterprise access, the camera, and the speaker.
References are made to the accompanying drawings, which form a part of the description and in which are shown, by way of illustration, specific embodiments of the present invention:
Example embodiments of an application security process and system according to the present invention are described. These examples and embodiments are provided solely to add context and aid in the understanding of the invention. Thus, it will be apparent to one skilled in the art that the present invention may be practiced without some or all of the specific details described herein. In other instances, well-known concepts have not been described in detail in order to avoid unnecessarily obscuring the present invention. Other applications and examples are possible, such that the following examples, illustrations, and contexts should not be taken as definitive or limiting either in scope or setting. Although these embodiments are described in sufficient detail to enable one skilled in the art to practice the invention, other embodiments may be used and changes may be made without departing from the spirit and scope of the invention.
Methods and systems for preventing applications from performing in harmful or unpredictable ways, and thereby causing damage to a computing device, are described in the various figures. During execution, applications may be modified by external entities or hackers to execute in ways that are harmful to the computing device. Such applications, typically user applications, can be modified, for example, to download malware, obtain and transmit confidential information, install key loggers, and perform various other undesirable or malicious functions. In short, application programs are vulnerable to being modified to execute in ways for which they were not intended. Thus, a discrepancy may arise between the intended behavior of an application or function and its actual behavior. Although there are products to prevent tampering with applications and functions by unauthorized parties, these products may not always be effective. Moreover, such products cannot prevent authorized parties from maliciously tampering with applications and functions on a computing device. The figures below describe methods and systems for preventing applications and functions that have been modified from executing and potentially damaging the host computing device.
When an application executes, in most cases a given function within the application may call other functions within the same application. These calls are represented by arrows 107 in FIG. 1.
As noted earlier, applications in user space 102 may be modified to perform unintended or harmful operations. When an application is first loaded onto the computer (or at any later time at which the owner or administrator is confident that the application has not been tampered with), the application executes in its intended and beneficial manner on the computer; that is, the application does what it is supposed to do and does not harm the computer. When an application has been tampered with, the tampering typically involves changing the series of function calls or system calls made within the application, and a change in even a single function call or system call may cause serious harm to the computer or create vulnerabilities. In one embodiment of the present invention, the intended execution of an application or, in other words, the list of functions related to the application, is mapped or described in what is referred to as a profile.
Block 204 represents a code analyzer of the present invention. Code analyzer 204 accepts as input the application and library code contained in block 202. In one embodiment, code analyzer 204 examines the application and library code 202 and creates profiles, represented by block 206. Operations of code analyzer 204 are described further in the flow diagram of FIG. 6.
As is known in the field, object code is typically run through a linker to obtain executable code. Block 306 represents “modified” object code, which is the output of linker utility program 304. In a normal scenario, a conventional linker program would link the object code to create normal executable code implementing the applications. In the present invention, however, linker utility 304 replaces certain functions with stubs and thereby creates modified object code: every function that calls bar( ), for example, now calls xbar( ). In one embodiment, functions that called bar( ) but now call xbar( ) in the modified object code are not aware of the substitution. Likewise, the original bar( ) is not aware that it is no longer receiving calls from the functions that would normally call it; that is, it does not know that it has been replaced by xbar( ). In one embodiment, the object file (containing the modified object code) also contains a “symbol table” that indicates which part of the modified object code corresponds to each function (similar to an index or a directory). Linker utility 304 adds new code (new CPU instructions) for the stub (replacement function) and makes the “symbol table” entry for the function being called point to the stub instead. In this manner, functions that want to call bar( ) will call xbar( ) instead; xbar( ) has taken on the identity of bar( ) in the “eyes” of all callers of bar( ). In one embodiment, the stub xbar( ) is a call into a supervisor, which includes a supervisor stack and additional code to ensure that the execution environment does not look altered or changed in any way.
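To make the stub mechanism concrete, below is a minimal sketch in C. It uses GNU ld's --wrap option as a stand-in for the symbol-table rewriting performed by linker utility 304, and the supervisor entry points supervisor_check_call( ) and supervisor_notify_return( ) are hypothetical names, not taken from the original.

    /* Link with: gcc foo.o bar.o stub.o -Wl,--wrap=bar
     * The linker redirects every call to bar( ) to __wrap_bar( ), which plays
     * the role of the stub xbar( ); the original function remains reachable
     * under the name __real_bar( ). */

    extern int __real_bar(int x);                 /* the original bar( ), unaware it was replaced */

    /* Hypothetical supervisor entry points (see the supervisor discussion below). */
    void supervisor_check_call(void *caller_addr, const char *callee);
    void supervisor_notify_return(const char *callee);

    int __wrap_bar(int x)                         /* the stub, i.e., xbar( ) */
    {
        /* Identify the caller by its return address and let the supervisor
         * verify the call against the caller's profile. */
        supervisor_check_call(__builtin_return_address(0), "bar");
        int result = __real_bar(x);               /* bar( ) executes normally */
        supervisor_notify_return("bar");          /* the stub, not bar( ), reports the results */
        return result;                            /* the caller sees results as if from bar( ) */
    }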
At step 608 the code analyzer generates the set of functions that may call the primary function. In one embodiment this is done by the code analyzer examining code in all the other functions (a complete set of these other functions was determined in step 602). At step 610 the code analyzer generates a set of system calls made by the primary function. As with step 606, the code analyzer examines the code in the primary function to determine which system calls are made. As described, a system call is a call to a function or program in the kernel space. For example, most calls to the operating system are system calls since they must go through the kernel space.
At step 612 the function sets generated at steps 606, 608, and 610 are stored in a profile that corresponds to the primary function. The function sets may be arranged or configured in a number of ways; one example of a profile format is shown below. At step 614 the profile is stored by the profiler program in secure memory, such as ROM or any other read-only memory in the computing device that cannot be manipulated by external parties. This process is repeated for all or some of the functions in the user space on the computing device. Once all the profiles have been created, the process is complete.
At step 806 the stub xbar( ) notifies the supervisor that bar( ) is being called by foo( ). In one embodiment, the supervisor, including the supervisor stack and associated software, resides in the user space. In another embodiment, the supervisor resides in the kernel, in which case a system call is required by the stub. At step 808 the supervisor retrieves the profile for the calling function, foo( ), from secure memory, such as ROM. It then examines the profile and specifically checks for functions that may be called by foo( ). The profile may be stored in any suitable manner, such as a flat file, a database file, and the like. At step 810 the supervisor determines whether foo( ) is allowed to call bar( ) by examining the profile. If bar( ) is one of the functions that foo( ) calls at some point in its operation (as indicated in the profile for foo( )), control goes to step 812. If not, the supervisor may terminate the operation of foo( ), thereby terminating the application at step 811. Essentially, if bar( ) is not a function that foo( ) calls, as indicated in the profile for foo( ), the call is treated as evidence of tampering.
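A minimal sketch of the check performed at steps 808 through 811 follows. The profile layout, the compiled-in profile store, and the use of dladdr( ) to map a return address back to a function name are assumptions for illustration; as noted above, the profile store could equally be a flat file or a database file held in read-only memory.

    #define _GNU_SOURCE
    #include <dlfcn.h>      /* dladdr( ); compile with -rdynamic, link with -ldl */
    #include <stdlib.h>
    #include <string.h>

    struct profile {
        const char  *name;       /* the primary function this profile describes */
        const char **may_call;   /* NULL-terminated set of functions it may call (step 606) */
    };

    static const char *foo_may_call[] = { "bar", NULL };
    static const struct profile profiles[] = { { "foo", foo_may_call } };

    /* Look up a profile in the (here, compiled-in) store; a real system would
     * read it from ROM or another read-only memory. */
    static const struct profile *lookup_profile(const char *fn)
    {
        for (size_t i = 0; i < sizeof profiles / sizeof profiles[0]; i++)
            if (strcmp(profiles[i].name, fn) == 0)
                return &profiles[i];
        return NULL;
    }

    /* Steps 808-811: resolve the caller's name from its return address, then
     * allow the call only if the caller's profile lists the callee. */
    void supervisor_check_call(void *caller_addr, const char *callee)
    {
        Dl_info info;
        const char *caller = (dladdr(caller_addr, &info) && info.dli_sname)
                                 ? info.dli_sname : "?";
        const struct profile *p = lookup_profile(caller);      /* step 808 */
        if (p)
            for (const char **f = p->may_call; *f; f++)
                if (strcmp(*f, callee) == 0)
                    return;                                    /* step 810: call allowed */
        abort();                                               /* step 811: terminate the application */
    }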
At step 812 the supervisor pushes bar( ) onto the supervisor stack, which already contains foo( ); the stack now has bar( ) on top of foo( ). The stub is not placed on the supervisor stack; it is essentially not tracked by the system. At step 814 bar( ) executes in a normal manner and returns results, if any, originally intended for foo( ), to the stub xbar( ). Upon execution of bar( ), the supervisor retrieves bar( )'s profile. Calls made by bar( ) are checked against its profile by the supervisor to ensure that bar( ) is operating as expected. For example, if bar( ) makes a system call to write some data to the kernel, the supervisor will first check the profile to make sure that bar( ) is allowed to make such a system call. Functions called by bar( ) are placed on the supervisor stack.
Once the stub receives the results from bar( ) for foo( ), the stub notifies the supervisor at step 816 that it has received data from bar( ). At step 818 the supervisor does another check to ensure that foo( ) called bar( ) and that, essentially, foo( ) is expecting results from bar( ). It can do this by checking the stack, which will contain bar( ) above foo( ). If the supervisor determines that foo( ) never called bar( ), the fact that bar( ) has results for foo( ) raises concern and the process may be terminated at step 820. If it is determined that foo( ) did call bar( ), control goes to step 822, where the stub returns the results to foo( ) and the process is complete. The fact that xbar( ) is returning the results is not known to foo( ) and, generally, will not affect foo( )'s operation (as long as the results from bar( ) are legitimate). The function bar( ) is then popped from the supervisor stack. In one embodiment, bar( ) is popped from the stack and its results are sent to foo( ) by xbar( ); if foo( ) keeps executing, it remains on the stack, and the above process repeats for other functions called by foo( ).
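Below is a minimal sketch of the supervisor-stack bookkeeping described at steps 812 through 822. The stack holds real functions only, since stubs are never pushed; the names and fixed stack size here are assumptions for illustration.

    #include <string.h>

    #define SUP_STACK_MAX 64

    static const char *sup_stack[SUP_STACK_MAX];
    static int sup_top;                               /* number of entries on the stack */

    /* Step 812: push the callee; the stub itself is never tracked. */
    void sup_push(const char *fn)
    {
        if (sup_top < SUP_STACK_MAX)
            sup_stack[sup_top++] = fn;
    }

    /* Step 818: the caller is expecting results only if the callee sits
     * directly above it on the supervisor stack. */
    int sup_expects_return(const char *caller, const char *callee)
    {
        return sup_top >= 2
            && strcmp(sup_stack[sup_top - 1], callee) == 0   /* e.g., bar( ) on top */
            && strcmp(sup_stack[sup_top - 2], caller) == 0;  /* e.g., foo( ) below it */
    }

    /* After step 822: pop the callee once its results have reached the caller. */
    void sup_pop(void)
    {
        if (sup_top > 0)
            sup_top--;
    }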
Below is a sample format of a profile written in the C programming language.
__attribute__((section(".nfp_db"), used))
__attribute__((section(".nfp_db"), used))
__attribute__((section(".nfp_db"), used))
__attribute__((section(".nfp_db"), used))
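Each such attribute is attached to a profile record emitted into a dedicated read-only section (".nfp_db"), so that the whole profile database can be mapped into secure memory; "used" keeps an otherwise unreferenced record from being discarded by the compiler. A minimal sketch of how such a record might be laid out follows, with fields mirroring the function sets generated at steps 606, 608, and 610; the struct name, field names, and example values are assumptions for illustration, not the original code.

    struct nfp_profile {
        const char  *name;       /* the primary function */
        const char **calls;      /* step 606: functions it may call (NULL-terminated) */
        const char **callers;    /* step 608: functions that may call it */
        const char **syscalls;   /* step 610: system calls it may make */
    };

    static const char *foo_calls[]    = { "bar", NULL };
    static const char *foo_callers[]  = { "main", NULL };
    static const char *foo_syscalls[] = { "write", NULL };

    /* One record per primary function, placed in the ".nfp_db" section. */
    __attribute__((section(".nfp_db"), used))
    static const struct nfp_profile profile_foo = {
        .name     = "foo",
        .calls    = foo_calls,
        .callers  = foo_callers,
        .syscalls = foo_syscalls,
    };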
In other embodiments, methods and systems for disabling or shutting down secure services on a mobile device when an attack or unusual behavior is detected are described. These embodiments are described in FIGS. 9 and 10.
One way hackers can root an untrusted domain, specifically an operating system, is through apps. It is best to assume that all apps, whether pre-installed on the device or downloaded from an app store, are not trustworthy; although the majority are generally good or safe, it is the few bad apps that can cause significant damage to a mobile device. Apps can be developed by hackers and appear safe or innocuous until they are downloaded and perform malware-type activities. For example, one app by itself may not be harmful, but two apps by the same developer/hacker may operate together to root a mobile device. In another example, an app may be harmless when first downloaded but may have a timer that causes it to harm the device at a specific time in the future, thereby misleading the downloader/user as to the cause of any malfunctioning on the device.
Detecting whether a device has been rooted or jailbroken is becoming increasingly important as mobile devices become widespread and users become more accustomed to downloading software and treating the devices as general computing devices for work and personal use. This is one motivation for the ARM Trust Zone model described above. This model is effective in preventing secure services, specifically the NFC chip and its private key, from being cloned. However, it cannot protect a rich operating system from being modified or infiltrated. The rich operating system is part of the untrusted world in the mobile device ecosystem. It executes using the CPU. As described below, the NFC chip also talks or communicates directly with the CPU, for example, when making a purchase.
A special software code monitor 910 watches untrusted operating system module 902. This watching or monitoring is represented by unidirectional line 912. Special monitor 910 ensures that module 902 is running in a trusted manner and that generally the execution of the untrusted world is normal and not subverted. This can be done using the methods and systems described above with respect to the code analyzer, profiles, and stubs. When special software monitor 910 detects that something is not behaving correctly in module 902, it sends an alert to an attestation module 914.
Special monitor 910 may also receive an alert from monitor 908 if the monitor detects a bad input variable. In another embodiment, monitor 908 may send an alert directly to attestation module 914 if there are bad inputs. In other embodiments, monitor 908 may send alerts to both special monitor 910 and to attestation module 914. Attestation module 914 is also a secure service and has a direct connection with a secure operating system module 916.
As described below, attestation module 914 ensures that the device is running in a safe manner or mode and is able to disable, cut off, or shut down services or the entire device, as needed, in a way that makes it difficult for a user or hacker to turn back on. As noted above, secure operating system 916 is often a small amount of code (e.g., 30 KB) and has a higher CPU authority/priority (untrusted operating system or domain has a lower CPU priority). Secure operating system 916 is in communication with or contains secure services 918. Secure services 918 may contain a near-field communications (NFC) chip 920 and various other services, such as eWallet 922, display 924, camera 926, enterprise access 928, speaker 930, and so on. All these services have a higher CPU priority. In one embodiment, communication among these components (902, 908, . . . ) is through an inter-process communication (IPC) gateway.
When special monitor 910 detects that something in the untrusted world has been subverted or rooted, it informs attestation module 914 via, in one embodiment, the IPC gateway. For example, if the user of the device attempts to connect to an enterprise (e.g., for the user's work), the enterprise will perform a remote attestation with the device first. If attestation module 914 has been alerted of abnormal behavior from special monitor 910, the attestation by the enterprise will fail.
While watching the operating system at step 1004, the special monitor is inherently determining, at step 1006, whether the operating system is running in a trusted or normal way. It can do this using the code analyzer, profiles, and other processes and techniques described above; step 1006 may be described as taking place during step 1004. If the special monitor determines that the operating system is running in a normal manner, control essentially goes back to the beginning of the process and the device continues to function in a normal manner. If the special monitor determines that the operating system is not operating in a trusted way, either from its direct observation or from being alerted by the monitor (i.e., upon detecting that inputs are potentially bad), then an alert is sent from the special monitor to the attestation module at step 1008. In another embodiment, the monitor can send an alert directly to the attestation module. As noted, the attestation module is itself a secure service and generally cannot be hacked or compromised.
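A minimal sketch of steps 1004 through 1008 follows; the helper names are hypothetical, the placeholder bodies stand in for the code analyzer and profile checks described earlier, and the alert would travel over the IPC gateway in practice.

    #include <stdbool.h>
    #include <unistd.h>    /* sleep( ) */

    enum alert_reason { ALERT_ROOTED_BEHAVIOR, ALERT_BAD_INPUT };

    /* Placeholder: a real implementation would run the code analyzer and
     * profile checks over the untrusted operating system (step 1006). */
    static bool os_behavior_is_normal(void) { return true; }

    /* Placeholder: a real implementation would deliver the alert to
     * attestation module 914 through the IPC gateway (step 1008). */
    static void ipc_alert_attestation(enum alert_reason reason) { (void)reason; }

    void special_monitor_loop(void)
    {
        for (;;) {
            if (!os_behavior_is_normal())                      /* steps 1004/1006 */
                ipc_alert_attestation(ALERT_ROOTED_BEHAVIOR);  /* step 1008 */
            sleep(1);                                          /* re-check periodically */
        }
    }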
At step 1010 the attestation module causes the shutdown or disablement of services. Which services are cut off may depend on several factors, such as the type of device, the extent of the attack, and the like. Based on how the device is being used, different functionality on the device can be crippled or disabled when device misbehavior is detected. For example, a military phone may have its microphone and speaker disabled; a consumer device may have the eWallet functionality, i.e., the NFC service, turned off; an enterprise or company device may have its private keys struck out to prevent access to corporate networks; and so on. In another embodiment, the operation is more binary: either the device is generally shut down, with few of the services allowed to operate, or the phone remains fully functional. In one embodiment, the modifications made by the attestation module are to the device hardware, which makes it more difficult for the user to reset and begin using the phone or tablet. Once a device is rooted, there is little if any trust left in the device, especially if the device is used for work and to access enterprise systems. In some cases, the hardware is modified and locked, and thus cannot be reset by the user.
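As an illustration of step 1010, the sketch below maps device type and attack severity to the services that are cut off, following the examples just given; the enum values, severity threshold, and disable_service( ) hook are assumptions for illustration, not a prescribed policy.

    enum service {
        SVC_NFC, SVC_EWALLET, SVC_MICROPHONE, SVC_SPEAKER,
        SVC_ENTERPRISE_KEYS, SVC_ALL
    };

    enum device_type { DEV_CONSUMER, DEV_ENTERPRISE, DEV_MILITARY };

    /* Placeholder: a real implementation would act on the device hardware so
     * that the user cannot simply reset the device and re-enable the service. */
    static void disable_service(enum service s) { (void)s; }

    void attestation_handle_alert(enum device_type dev, int severity)
    {
        if (severity >= 9) {              /* the more "binary" embodiment: shut nearly everything down */
            disable_service(SVC_ALL);
            return;
        }
        switch (dev) {
        case DEV_MILITARY:                /* silence the capture and output paths */
            disable_service(SVC_MICROPHONE);
            disable_service(SVC_SPEAKER);
            break;
        case DEV_CONSUMER:                /* stop purchases: NFC and eWallet off */
            disable_service(SVC_NFC);
            disable_service(SVC_EWALLET);
            break;
        case DEV_ENTERPRISE:              /* strike out private keys for corporate access */
            disable_service(SVC_ENTERPRISE_KEYS);
            break;
        }
    }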
In other cases, only certain services may still be engaged, such as speaker, display, power, and the like. In this manner, if the unsecured operating system is somehow attacked, hacked, or modified in an unauthorized way, it cannot proceed to send instructions to the secure services, i.e., it cannot contaminate the secure world on the device with malware-sourced instructions. For example, if an eWallet app is used to make unauthorized purchases, the NFC chip and eWallet secure service on the device are immediately disabled (making it impossible to obtain the private key), possibly along with several other services and hardware on the phone, essentially making the phone unusable except for basic functions. In another example, if the phone attempts to connect to a network, such as a company or government enterprise, the enterprise will attest the security of the device by performing a remote attestation with the device. The attestation module will cause this remote attestation to fail because it has been alerted of abnormal behavior in the untrusted domain on the phone. After services (software) and hardware on the device are disabled or modified at step 1010, the process is complete.
CPU 1122 is also coupled to a variety of input/output devices such as display 1104, keyboard 1110, mouse 1112 and speakers 1130. In general, an input/output device may be any of: video displays, track balls, mice, keyboards, microphones, touch-sensitive displays, transducer card readers, magnetic or paper tape readers, tablets, styluses, voice or handwriting recognizers, biometrics readers, or other computers. CPU 1122 optionally may be coupled to another computer or telecommunications network using network interface 1140. With such a network interface, it is contemplated that the CPU might receive information from the network, or might output information to the network in the course of performing the above-described method steps. Furthermore, method embodiments of the present invention may execute solely upon CPU 1122 or may execute over a network such as the Internet in conjunction with a remote CPU that shares a portion of the processing.
In addition, embodiments of the present invention further relate to computer storage products with a computer-readable medium that have computer code thereon for performing various computer-implemented operations. The media and computer code may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well known and available to those having skill in the computer software arts. Examples of computer-readable media include, but are not limited to: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and holographic devices; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and execute program code, such as application-specific integrated circuits (ASICs), programmable logic devices (PLDs) and ROM and RAM devices. Examples of computer code include machine code, such as produced by a compiler, and files containing higher-level code that are executed by a computer using an interpreter.
Although illustrative embodiments and applications of this invention are shown and described herein, many variations and modifications are possible which remain within the concept, scope, and spirit of the invention, and these variations would become clear to those of ordinary skill in the art after perusal of this application. Accordingly, the embodiments described are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.
This application is a continuation-in-part of, and claims priority under 35 U.S.C. §120 to, U.S. patent application Ser. No. 12/246,609, filed Oct. 7, 2008, entitled “PREVENTING EXECUTION OF TAMPERED APPLICATION CODE IN A COMPUTER SYSTEM,” which is hereby incorporated by reference in its entirety.
Relation | Number | Date | Country
Parent | 12/246,609 | Oct. 7, 2008 | US
Child | 13/336,322 | | US