This invention relates to a mobile wireless device with a protected file system. The protected file system forms an element in a platform security architecture.
Platform security covers the philosophy, architecture and implementation of platform defence mechanisms against malicious or badly written code. These defence mechanisms prevent such code from causing harm. Malicious code generally has two components: a payload mechanism that does the damage and a propagation mechanism to help it spread. Malicious code is usually classified as follows:
Security threats encompass (a) a potential breach of confidentiality, integrity or availability of services or data in the value chain and (b) compromise of service function. Security threats are classified into the following categories:
Hence, mobile wireless devices present very considerable challenges to the designer of a security architecture. One critical aspect of this challenge is to protect the file system efficiently, both to prevent malicious applications from reading confidential data (e.g. PINs) and to preserve the integrity of the file system. To date, however, there have been no efficient proposals for protecting the file system of a mobile wireless device.
In a first aspect of the present invention, there is a single user mobile wireless device programmed with a file system which is partitioned into multiple root directories, in which the location of a file is enough to fully define its access policy to a process running on the device (i.e. to define which processes can access the file). The partitioning of the file system in this way ‘cages’ processes, as it prevents them from seeing any files they should not have access to. (Once a program containing native executable code is loaded into memory, it becomes a ‘process’; the process is therefore the running memory image of a program containing native executable code which is stored on the file system). Trusted components, such as a ‘Trusted Computing Base’ (which should be understood as covering architectural elements that cannot be subverted and that guarantee the integrity of the device; see Detailed Description section 1.1 for an implementation) verify whether or not a process has the required privileges or ‘capabilities’ (see Detailed Description at section 2 for an explanation of ‘capabilities’) to access root sub-trees (e.g. files in a sub-directory).
As noted above, a secure operating system must control access to the file system to ensure its own integrity, as well as the confidentiality of user data. With the present invention, the particular directory into which a file is placed automatically determines its accessibility to different processes—i.e. a process can only access files in certain root directories. This is a lightweight approach, since there is no need for a process to interrogate an access control list associated with a file to determine its access rights over the file—the location of the file, taken in conjunction with the access capabilities of the process, intrinsically defines the accessibility of the file to the process. Moving a file within the file system (e.g. between root directories) can therefore modify the access policy of that file.
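As an illustration of this location-driven policy, the following minimal sketch (in standard C++, with a hypothetical function name and the directory names of implementation A described later) derives the access rule for a file purely from its path; no access control list attached to the file is ever consulted:

```cpp
#include <cstddef>
#include <string>

// Hypothetical policy descriptor for this sketch only.
struct AccessPolicy {
    bool tcbOnly;           // requires Root or AllFiles (the /system sub-tree)
    std::string ownerSid;   // non-empty: only the process with this SID may access
};

// The policy is a pure function of the file's location.
AccessPolicy policyFor(const std::string& path) {
    if (path.rfind("/system/", 0) == 0)
        return {true, ""};
    if (path.rfind("/private/", 0) == 0) {
        const std::size_t start = std::string("/private/").size();
        const std::size_t end = path.find('/', start);
        return {false, path.substr(start, end - start)};  // SID taken from the path
    }
    return {false, ""};  // public: accessible to any process
}
```

Relocating a file from /private/<app_SID> to a public directory therefore changes which processes can access it, without any change to the file itself or to per-file metadata.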
Each process may also have its own private area of the file system guaranteeing confidentiality and integrity of its data.
With one of the possible implementations of the present invention (called implementation A in the rest of this document), a file is placed into a location within a file system that has 4 types of root directories (or their functional equivalents):
In another implementation, there is an ‘open’ mobile wireless device programmed with a software installer that is responsible for installing new applications and their files without compromising the integrity of the existing file system. It would create or update files according to their types, their location (public directories) and the secure identifier (SID) of the programs (see Detailed Description at 3.3).
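A minimal sketch of how such an installer might choose a destination is given below, assuming the directory layout of implementation A; the FileType categories and the function name are illustrative and not part of the described installer:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical file categories recognised by the installer in this sketch.
enum class FileType { Executable, PrivateData, SharedResource };

// Choose an install destination from the file type and the installing
// program's secure identifier (SID); only the installer, as part of the
// TCB, is allowed to write into these locations.
std::string installPath(FileType type, const std::string& sid,
                        const std::string& fileName) {
    switch (type) {
        case FileType::Executable:     return "/system/" + fileName;
        case FileType::PrivateData:    return "/private/" + sid + "/" + fileName;
        case FileType::SharedResource: return "/resources/" + fileName;
    }
    throw std::invalid_argument("unknown file type");
}
```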
In another aspect, there is an operating system comprising a file installation mechanism to allow programs to contribute to another program's private directory without compromising it (see Detailed Description at 3.3).
The present invention will be described with reference to the security architecture of the Symbian OS object oriented operating system, designed for single user wireless devices. The Symbian operating system has been developed for mobile wireless devices by Symbian Ltd, of London, United Kingdom.
The basic outline of the Symbian OS security architecture is analogous to a medieval castle's defences. In a similar fashion, it employs simple and staggered layers of security above and beyond the installation perimeter. The key threats that the model is trying to address are those that are linked with unauthorised access to user data and to system services, in particular the phone stack. The phone stack is especially important in the context of a smart phone because it will be controlling a permanent data connection to the phone network. There are two key design drivers lying behind the model:
The main concept of the capability model described below is to control what a process can do rather than what a user can do. This approach is quite different from that of well-known operating systems such as Windows NT and Unix. The main reasons are:
A trusted computing base (TCB) is a basic architectural requirement for robust platform security. The trusted computing base consists of a number of architectural elements that cannot be subverted and that guarantee the integrity of the device. It is important to keep this base as small as possible and to apply the principle of least privilege to ensure that system servers and applications are not given privileges they do not need to function. On closed devices, the TCB consists of the kernel, loader and file server; on open devices the software installer is also required. All these processes are trusted system-wide and therefore have full access to the device's file system. This trusted core would run with a “Root” capability not available to other platform code (see section 2.1).
There is one other important element to maintain the integrity of the trusted computing base that is out of the scope of this invention, namely the hardware. In particular, with devices that hold trusted computing base functionality in flash ROM, it is necessary to provide a secure boot loader to ensure that it is not possible to subvert the trusted computing base with a malicious ROM image.
1.2 Trusted Computing Environment
Beyond the core, other system components would be granted restricted orthogonal system capabilities and would constitute the Trusted Computing Environment (TCE); they would include system servers such as the socket, phone and window servers. For instance, the window server would not be granted the capability of phone stack access, and the phone server would not be granted the capability of direct access to keyboard events. It is strongly recommended to give a software component as few system capabilities as possible, to limit the potential damage from any misuse of these privileges. The TCB ensures the integrity of the full system, whereas each element of the TCE ensures the integrity of one service. The TCE cannot exist without a TCB, but the TCB can exist by itself to guarantee a safe “sand box” for each process.
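The orthogonality of these grants can be pictured as in the sketch below; the capability and server identifiers are purely illustrative, the point being that each TCE server holds only the privileges its own service requires:

```cpp
#include <map>
#include <set>
#include <string>

// Illustrative system capabilities and server names for this sketch only.
enum class SystemCap { PhoneStackAccess, KeyboardEvents };

// Each TCE server is granted only the capability needed for its own
// service: the window server cannot touch the phone stack, and the phone
// server cannot read raw keyboard events.
const std::map<std::string, std::set<SystemCap>> kTceCapabilityGrants = {
    {"window_server", {SystemCap::KeyboardEvents}},
    {"phone_server",  {SystemCap::PhoneStackAccess}},
};
```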
2 Process Capabilities
A capability can be thought of as an access token that corresponds to a permission to undertake a sensitive action. The purpose of the capability model is to control access to sensitive system resources. The most important resource that requires access control is the kernel executive itself and a system capability (see section 2.1) is required by a client to access certain functionality through the kernel API. All other resources reside in user-side servers accessed via IPC [Inter Process Communication]. A small set of basic capabilities would be defined to police specific client actions on the servers. For example, possession of a make calls capability would allow a client to use the phone server. It would be the responsibility of the corresponding server to police client access to the resources that the capability represents. Capabilities would also be associated with each library (DLL) and program (EXE) and combined by the loader at run time to produce net process capabilities that would be held by the kernel. For open devices, third party software would be assigned capabilities either during software installation based on the certificate used to sign their installation packages or post software installation by the user. The policing of capabilities would be managed between the loader, the kernel and affected servers but would be kernel-mediated through the IPC mechanism.
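A server policing a basic capability on an incoming IPC request might look like the following sketch; the capability names, message structure and function are hypothetical, and in the model above the caller's capability set would be obtained from the kernel rather than carried in the message itself:

```cpp
#include <set>

// Hypothetical capability set and request structure for this sketch.
enum class Capability { MakeCalls, ReadUserData };

struct ClientMessage {
    std::set<Capability> callerCapabilities;  // as held by the kernel for the client process
};

// The phone server polices the 'make calls' capability itself: possession
// of the token is what authorises the sensitive action, not user identity.
bool allowDial(const ClientMessage& msg) {
    return msg.callerCapabilities.count(Capability::MakeCalls) != 0;
}
```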
The key features of the process capability model are:
The process of generating capabilities can be difficult. One has first to identify those accesses that require policing and then to map those requirements into something that is meaningful for a user. In addition, more capabilities mean greater complexity, and complexity is widely recognised as the chief enemy of security. A solution based on capabilities should therefore seek to minimise the overall number deployed. The following capabilities map fairly broadly onto the main threats, which are unauthorised access to system services (e.g. the phone stack) and compromise of the confidentiality/integrity of user data.
PhoneNetwork. “Can Access Phone Network Services and Potentially Spend User Money”
Root and system capabilities are mandatory; if they are not granted to an executable, the user of the device cannot decide to grant them. Their strict control ensures the integrity of the Trusted Computing Platform. However, the way servers check or interpret user-exposed capabilities may be fully flexible and even user-discretionary.
2.3 Assigning Capabilities to a Process
The association of a run-time capability with a process involves the loader. In essence, it transforms the static capability settings associated with individual libraries and programs into a run-time capability that the kernel holds and that can be queried through a kernel user library API.
The loader applies the following rules (a sketch of their combined effect is given after rule 4):
Rule 1. When creating a process from a program, the loader assigns it the same set of capabilities as its program.
Rule 2. When loading a library within an executable, the library's capability set must be greater than or equal to the capability set of the loading executable. If this is not the case, the library is not loaded into the executable.
Rule 3. An executable can load a library with higher capabilities, but does not gain capabilities by doing so.
Rule 4. The loader refuses to load any executable that is not in the data-caged part of the file system reserved for the TCB.
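Taken together, the four rules can be sketched as follows; the Capability values, structure names and the /system path test are illustrative assumptions rather than the actual loader interface:

```cpp
#include <set>
#include <string>

// Illustrative capability values for this sketch only.
enum class Capability { Root, AllFiles, PhoneNetwork, ReadUserData };
using CapSet = std::set<Capability>;

struct Binary { std::string path; CapSet caps; };

// Rule 4: only code installed in the TCB-reserved, data-caged area may run.
bool loadableAsCode(const Binary& b) {
    return b.path.rfind("/system/", 0) == 0;
}

// Rule 1: a new process receives exactly its program's capability set.
CapSet processCapabilities(const Binary& program) { return program.caps; }

// Rules 2 and 3: a library is only loaded if its capabilities are a
// superset of the loading executable's, and the executable never gains
// capabilities from a library that has more.
bool canLoadLibrary(const CapSet& processCaps, const Binary& library) {
    for (Capability c : processCaps)
        if (library.caps.count(c) == 0) return false;  // rule 2 violated
    return true;  // rule 3: processCaps stays unchanged
}
```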
It has to be noted that:
The examples below show how these rules are applied in the cases of statically and dynamically loaded libraries respectively.
2.3.1 Examples for Linked DLLs
Data partitioning involves separating code from data in the file system such that a simple trusted path is created on the platform. The central idea of the present invention is that by “caging” non-TCB processes into their own part of the file system, sensitive directories become hidden to them. In implementation A, the /system directory becomes hidden to ordinary processes; the kernel and the file server would check that a client process had the Root or AllFiles capability before allowing any access to the /system sub-tree. All other clients would have their own restricted view of the file system, which would consist of the sub-tree below a special private directory, /private/<app_SID>, together with /resources and /<others>. In addition, the loader would disallow any attempt to execute code not residing in /system.
SIDs (Secure Identifiers) would be used to distinguish between the sub-tree directories. Applications, for example, would use their sub-tree to hold their own resource files. In essence, data partitioning involves caging processes into their own small part of the file system. This means that per-application and per-server data is protected from other processes by default. Without partitioning, it would become necessary to introduce access control lists into the file system to prevent rogue programs from bypassing the capability model and simply reading databases directly from files. In addition, the scheme affords further protection against the threat of malicious replacement of system plug-ins in the TCB-reserved directory (/system in implementation A). An analogy may be made with memory management: currently only the kernel has an unrestricted view of the real machine memory. Each process has its own view of the processor's address space that is independent of all other processes. This is arranged and policed by the kernel and the memory management unit (MMU); any access outside the range of a process's memory space is rejected.
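The restricted view itself can be sketched as a function of the process's SID and capabilities, mirroring the MMU analogy above; the function name and the string form of the SID are assumptions, while the directory names are those of implementation A:

```cpp
#include <string>
#include <vector>

// Return the sub-trees visible to a process under data caging.
std::vector<std::string> visibleSubTrees(const std::string& sid,
                                         bool hasRootOrAllFiles) {
    std::vector<std::string> view = {
        "/private/" + sid,   // the process's own private cage
        "/resources",        // shared resources
        "/<others>"};        // public areas
    if (hasRootOrAllFiles)   // only Root/AllFiles holders may reach the TCB area
        view.push_back("/system");
    return view;
}
```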
In that sense, partitioning is a simple and robust solution:
For simplicity, the file system is not aware of the meaning of “private user data” and “system data”; a file cannot be flagged as such. It is the responsibility of each server and application to manage the ReadUserData/WriteUserData and ReadSystemData/WriteSystemData capabilities when required. For example, if a user wants to identify a word processor document as private, she can password-protect it, making ReadUserData unnecessary to access the encrypted file. Furthermore, in the case of databases, a single file can contain system, user and public data, and therefore trying to assign a level of privacy to a file is inappropriate.
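The following sketch illustrates this per-request policing for a database file holding a mix of data classes; the record structure, data classes and function name are hypothetical:

```cpp
#include <set>
#include <string>
#include <vector>

// One database file may mix public, user and system data, so privacy is
// enforced per request by the owning server, not by flags on the file.
enum class DataClass { Public, User, System };
enum class Cap { ReadUserData, ReadSystemData };

struct Record { DataClass cls; std::string value; };

std::vector<Record> visibleRecords(const std::vector<Record>& db,
                                   const std::set<Cap>& callerCaps) {
    std::vector<Record> out;
    for (const Record& r : db) {
        if (r.cls == DataClass::Public ||
            (r.cls == DataClass::User   && callerCaps.count(Cap::ReadUserData) > 0) ||
            (r.cls == DataClass::System && callerCaps.count(Cap::ReadSystemData) > 0))
            out.push_back(r);
    }
    return out;
}
```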
An example of a preferred implementation is given by implementation A; each file system is partitioned into 4 types of root directories:
As described above, a process can get access to a private directory only if its SID matches the name of the private directory. In order to support strongly coupled processes, it is not excluded for a set of processes to be given the same SID. In this case, all processes with the same SID may share the same private directory. However, it has to be noted that this concept is very different from the concept of a group: one process can have only one SID, and it cannot be part of more than one security domain. Therefore the implementation of data caging must stay ignorant of the nature of SIDs.
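A sketch of the SID-to-directory matching implied here is given below; the SID is treated as an opaque token, consistent with data caging staying ignorant of its nature, and the function names are illustrative:

```cpp
#include <string>

// Map a SID to its private directory; processes sharing a SID share it.
std::string privateDirectory(const std::string& sid) {
    return "/private/" + sid;
}

// Access is granted only when the caller's SID matches the directory name.
bool mayOpenPrivateFile(const std::string& callerSid, const std::string& path) {
    const std::string prefix = privateDirectory(callerSid) + "/";
    return path.compare(0, prefix.size(), prefix) == 0;
}
```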
3.3 Software Installer
It is important to understand that rule 4) does not run counter to the principles of data caging: the installed application will not be able to access this file once installed. The presence of an import directory in the private area of a process signals that another application may make a contribution to this process at install time. A good example would be a font server: all fonts would be stored in the private directory of the font server. However, external application packages could contribute by adding new fonts without polluting the server's private directory, as they would all be placed under the import directory, clearly indicating their external origin.
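A sketch of how the installer (itself part of the TCB) might place such a contribution follows; the font-server SID and file name are hypothetical, and the exact path form is an assumption based on the import directory residing in the receiver's private area:

```cpp
#include <string>

// The contributed file lands under .../import, marking its external origin,
// rather than in the body of the receiver's private directory.
std::string importDestination(const std::string& receiverSid,
                              const std::string& contributedFile) {
    return "/private/" + receiverSid + "/import/" + contributedFile;
}

// Example: a font package contributing to the font server at install time.
// importDestination("fontserver_sid", "extra_font.ttf")
//   == "/private/fontserver_sid/import/extra_font.ttf"
```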
Number | Date | Country | Kind |
---|---|---|---|
GB0212315.6 | May 2002 | GB | national |
This application is a continuation of U.S. application Ser. No. 10/515,759, filed Nov. 24, 2004, which claims the priority of PCT/GB03/02313 filed May 28, 2003 and British Application GB 0212315.6 filed on May 28, 2002, the contents of which are hereby incorporated by reference.
 | Number | Date | Country
---|---|---|---
Parent | 10515759 | Nov 2004 | US
Child | 11935020 | Nov 2007 | US