1. Field of the Invention
This invention relates generally to virtualization of computing resources and security and trust in an environment of such virtualization.
2. Description of the Related Art
The inexorable trend towards workforce mobility and the requirement for web access while mobile is driving significant new technology development and businesses in devices and infrastructure associated with mobile web access. Of significant value is the reliable access to, and utilization of, computing services and data delivered over the web, thus making the wide-area network effectively both the computing medium as well as a heterogeneous collection of databases. All of these capabilities are delivered through a diverse group of “web services.” Technically, this poses a number of challenges related to communications, security, trust, negotiations and monitoring among diverse devices, agents, and business processes. All of this currently takes place in an environment where neither the device, the communications infrastructure, nor the web servers can be trusted, and where the communications link is highly variable in quality.
In order to improve trust in the mobile device, and to create an infrastructure upon which device capabilities can be augmented or place shifted in a trustworthy manner via virtualization, there is a need to establish a foundation of security. Consider first a typical software implementation of existing mobile devices, as shown conceptually in
Consequently, even though applications are generally executed in the “user” mode, in the current architecture that intention can be subverted and it is possible for applications to run code at a higher priority level in kernel mode, or for viruses that infect an application to access kernel mode privileges. Viruses have used techniques such as introducing kernel mode VxDs or using tricks such as the call gate mechanism to run code at higher privilege levels.
Modern anti-malware software is also engineered as an application program or installed as a post hoc modification to a running operating environment. This means that, to be successful, such software must win the race with a malicious application program in terms of when it is installed, its observability of important system events and actions, and its level of access to storage and state information. Thus, if a virus “rootkits” the system by executing beneath the OS or even the kernel, it can be difficult for anti-malware software to detect it, as the malware has control of the system resources generally employed by the anti-malware to detect it. A rootkitted system is shown conceptually in
Furthermore, web services provide a means to expose and use programming interfaces on wide area networks, potentially with many mobile devices. By design, these interfaces are lightweight to enable portability across platforms with diverse computational capabilities. For example, HTTP is a session-free, non-transactional protocol that was originally designed for transporting documents. Later, with the advent of styling innovations and its separation from the data content, it also provided a simple, usable UI for running applications over the web. HTTP works well when the client platform can provide the computing power and form-factor necessary to render the UI in a reliable and predictable way.
The ubiquity of web servers, server software, supporting programming languages and libraries, and supporting technology such as XML has made HTTP a good protocol for distributed applications. In essence, the use of web technologies has evolved from a user-to-computer technology, to one that supports (and is widely adopted for) computer-to-computer interactions, essentially using HTTP as a transport for Remote Procedure Calls (RPC) between distinct (and often geographically separate) components of an application.
From a functional standpoint, the evolutionary changes to web services have primarily been focused on the client. Clients have gone through the following transitions:
While these changes have had an impact on the format of data served up by Web Services, the architectural drivers for Web Services and Web Server Software have remained the same. These generally are
This architecture provides widely available, large-scale Web Services that can be accessed by any standard Web-based client. It can provide for information and service requests from a large number of clients anywhere in the world. This standard architecture does not, however, address the security and privacy requirements/challenges in current mobile devices, particularly given the current trends in mobile device usage. These requirements/challenges include:
Thus, there is a need for innovations in mobile devices and/or the supporting infrastructure to address some or all of these needs.
The following disclosure describes, in part, a platform architecture that shifts the networked computing paradigm from PC+Network to a system using trusted Mobile Internet End-Point (MIEP) devices and cooperative Agents hosted on a Trusted Server. The MIEP device can participate in data flows, arbitrate authentication, and/or participate in implementing security mechanisms, all within the context of assured end-to-end security. The MIEP architecture improves platform-level capabilities by suitably (and even dynamically) partitioning what is done at the MIEP nodes, the network, and the server based infrastructure for delivering services.
The MIEP component of the mobility platform presented here is not a classic thin client. A classic enterprise thin client typically sits behind a “walled garden”—a corporate firewall on a dedicated high bandwidth high availability ethernet network. This facilitates booting over the network and significant compute offloading to corporate servers. Security tasks can also be offloaded to corporate servers and the non-mobile nature of these devices and their location behind a corporate firewall increases the feasibility of deploying and enforcing policies which minimize security vulnerabilities, including physical I/O modalities on the thin client devices. Trust issues are also mitigated with respect to the communications network and the server, since there is implied trust in the corporate server and network integrity.
In contrast, the MIEP, because it is mobile, may not sit behind a corporate firewall, and does not enjoy a dedicated reliable high bandwidth connection to any network. The MIEP device typically also operates on a limited energy budget (e.g., batteries) and under stringent form factor and budgetary constraints. These factors significantly alter system design optimization criteria. Optimizing the design of the MIEP requires an integrated, systems-level perspective, treating it as a systems optimization problem encompassing the device itself, unreliable wireless and wireline communications links, and the supporting server(s) available over the web. To adequately address the requirements of an MIEP based computing model, it is highly beneficial that trust and security be a foundational element in the design of the overall system.
The example system described below provides a framework for distributed capabilities in a Service Framework that leverages existing OS (operating system) and application software on a new trust/security/virtualization model infrastructure. This is advantageous to carriers who, for example, want to be able to provide unique differentiated services instead of commoditized “dumb pipes.” In the following, we describe the approach using an example based on the context of current practice and emerging standards, although the invention is not limited to this context or this example.
To respond to the emerging need for security in our computing infrastructure, the industry has sponsored the Trusted Computing Group (TCG) that seeks to define hardware and software requirements for security, and to drive adoption of standards to achieve secure computing platforms. TCG is also instrumental in defining a vocabulary for describing important concepts related to security and trust in computing. We find this vocabulary useful in describing our innovations and their embodiments. Where possible, we use vocabulary that is compliant with TCG recommendations or standards. The following examples are based on the TCG model and the TCG vocabulary but the invention is not limited to these specific examples or to the TCG model or to the TCG vocabulary. The TCG model is chosen as an example for convenience and for didactic purposes.
In order to provide a far more secure system than what is currently available, including protection against rootkits, a set of additional capabilities are needed by the mobile device. In the secure hardware platform architecture proposed by the TCG, these capabilities include the following:
I.B.1 Trusted Platform Module (TPM)
One implementation of these protected capabilities and shielded-locations used to report integrity measurements is to locate them on the mobile device motherboard in a hardware based tamper-resistant module, a Hardware Root of Trust (HROT) called the Trusted Platform Module (TPM). In the TCG implementation, the TPM incorporates a number of tamper-resistant resources, including:
A block diagram of an example TPM is shown in
Note that the HROT need not be instantiated as a standalone hardware module, such as the TPM, but that the set of protected resources may also be realized in the core CPU chipset, or in the CPU itself.
I.B.2 Integrity Measurement and Reporting
Integrity measurement is the process of obtaining metrics of platform characteristics that affect the integrity (trustworthiness) of a platform; storing those metrics; and storing digests of those metrics in the TPM. Integrity reporting is the process of attesting to the contents of integrity storage.
In this example embodiment, the system state is stored as measurement digests in the TPM in a group of 20-byte registers called Platform Configuration Registers (PCRs). The value of a register is formed by “extending” the existing value with a new measurement value (concatenating the two) and hashing that extension using the NIST standard hash function SHA-1 to obtain a new digest, with the 20-byte result stored back in the PCR. This mechanism creates a “running history/log” of all load events or system modifications that cannot be recreated out of order—the so called “ratcheting” feature. This has great value in the platform's ability to attest to its state (and how it got there). The digest mechanism also allows a single PCR register to record essentially an unlimited number of measurement events.
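As a concrete illustration of the extend mechanism, the following minimal Python sketch folds an ordered sequence of load events into a single 20-byte register value; the helper function and the event names are illustrative only and do not reflect the actual TPM command interface.

```python
import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """Illustrative extend: new_PCR = SHA-1(old_PCR || SHA-1(measurement))."""
    digest = hashlib.sha1(measurement).digest()       # 20-byte measurement digest
    return hashlib.sha1(pcr_value + digest).digest()  # ratchet it into the PCR

# Example: an ordered sequence of load events folded into one PCR.
pcr = bytes(20)                                       # PCR starts at all zeros
for event in [b"BIOS image", b"boot loader", b"OS kernel"]:
    pcr = pcr_extend(pcr, event)
print(pcr.hex())
```

Because each new value depends on the previous register contents, the same events applied in a different order yield a different final digest, which is the basis of the “ratcheting” property described above.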
I.B.3 TCG Roots of Trust
In TCG systems, Roots of Trust are components that must be trusted as misbehavior may not be detected. There are three fundamental Roots of Trust in the TCG model:
In one embodiment, the RTM includes the initial BIOS boot code (located in protected non-volatile Flash Memory on the motherboard) executed on the main host processor—an ARM or x86 CPU in this particular example. The actual measurement code block resident in secure non-volatile memory is designated the Core Root of Trust for Measurement (CRTM), following the TCG nomenclature. The RTS and the RTR are both located in the TPM.
Transitive trust, or “inductive trust” as it is also known, is the process of securely “bootstrapping” a system, one software layer at a time, where each layer, before loading the next layer, measures the code to be loaded and, using the resources of the TPM, checks the measurement against a value held in secure storage (in the TPM in this example). An important requirement of the process is that the relationships between the components be acyclic, i.e., that the boot sequence can be described using a Directed Acyclic Graph (DAG).
Using this methodology, a trusted boot process starting at the BIOS, and proceeding up through OS or application code level can be achieved.
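The following minimal sketch, with made-up image contents and inline reference digests, illustrates the measure-before-load discipline of transitive trust; in a real system the known-good digests would be provisioned into protected storage and each measurement would also be extended into a PCR.

```python
import hashlib

def sha1(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

# Hypothetical boot images; reference digests are derived inline purely so
# the example runs, not as a statement of how they would be provisioned.
IMAGES = {"bootloader": b"bootloader code", "kernel": b"kernel code", "os": b"os code"}
KNOWN_GOOD = {name: sha1(img) for name, img in IMAGES.items()}

def trusted_boot(order, measurement_log):
    """Transitive trust: each stage measures the next layer before loading it."""
    for name in order:                              # a simple chain, i.e., an acyclic sequence
        digest = sha1(IMAGES[name])
        measurement_log.append((name, digest))      # would also be extended into a PCR
        if digest != KNOWN_GOOD[name]:
            raise RuntimeError(f"measurement mismatch at layer {name!r}; boot halted")
    return True                                     # reached only if every layer verified

log = []
print(trusted_boot(["bootloader", "kernel", "os"], log))
```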
In addition to the use of an HROT, such as the TPM, and the implementation of a trusted boot process, our approach to platform security also takes advantage of virtualization methods, for it is when virtualization is tied to an HROT and integrated into a trusted boot and measurement process that virtualization becomes truly powerful from an isolation, provisioning, and flexibility standpoint. We discuss the process of virtualization as it relates to security before describing examples that combine TPM and VM.
Conventionally, a Virtual Machine Monitor (or Hypervisor) is a virtualization technique to abstract CPU resources that enable multiple operating systems to run simultaneously on the same host processor. There are several types of VMMs:
The former (Type-1) approach is generally more secure and provides better performance; it is in fact very difficult to provide strong security guarantees using a Type-2 Hypervisor. The Type-1 approach is used in our example embodiment to:
Virtual Machine Monitors are a good place to instrument the system for behavioral monitoring purposes as all applications go through the VMM to access hardware resources. The embodiment of the VMM utilized in the following examples is a so-called “paravirtualized” VMM (but the invention is not limited to this type of VMM) in which most code runs natively on the CPU. While this VMM approach offers high performance with minimum size and minimal CPU overhead (as low as 2-3%), it typically requires that some of the low level kernel drivers of the hosted OS be “ported” to the VMM by replacing kernel calls to drivers that modify state the VMM monitors and protects with “hypercalls” to the VMM.
One weakness of a VMM from a security standpoint is that it can be still subverted by rootkit malware such as Virtual Machine Based Rootkits (VMBRs) which can be used, for example, to establish BOTnets for purposes of SPAM generation, Denial of Service (DOS) attacks, or online fraud schemes. To combat this, a VMM can leverage the protected capabilities rooted in a TPM, thus creating a Trusted VMM (TVMM—also known as a Trusted Hypervisor). The TVMM enjoys the security benefits of the TCG platform (including the Trusted Boot process) along with other improvements, including:
The ability to create “closed box” Virtual Machines (VMs) that can cryptographically identify the software they run and securely and reliably attest their state to remote parties—a capability we call “compartmented attestation”—that enables the creation of virtual trusted islands on the mobile device.
An important advantage of VMs is that it is far easier to treat them as static images (of binary representation), a static OS that can be hashed for the purposes of transitive trust and storage of VM state in a PCR digest—which ultimately allows attestation of that VM image. This is in contrast to typical OS implementations that incorporate dynamic components that can be linked/loaded/unloaded in real time.
This static, or “closed box” capability of a VM hosted OS is an important capability as it allows DRM and other transactions to occur on a VM to web based server or Peer-to-Peer (P2P) basis, and it fosters the ability of remote parties to securely and reliably provision the capabilities of VMs hosted on the mobile device.
The block diagram of
Each VM can host an Operating System (or other applications). Operating Systems in turn typically host Applications. The TPM virtualization is performed principally by the TVMM (Trusted Hypervisor). Note that the CRTM code is located directly above the CPU initialization code, and both are fetched out of protected BIOS non-volatile memory.
If the VMM itself does not contain I/O device driver code that is virtualized for the supported VMs, and the VMM is a “black box” that does not directly support TPM virtualization internally, then a modification to the system architecture can be advantageous. An embodiment for such a modification to the software architecture is shown in
Layered on top of software layer 4 are the applications hosted by the VM.
The proposed MIEP architecture preferably takes a broad view of the communication resources available to the device via multiple radios and networks. These communication links can be shared among applications or otherwise coordinated for improved secure and reliable delivery of web based services. One approach coalesces multiple wireless links (such as multiple cellular air interfaces, WiFi, and WiMAX) into a virtual communications channel. Virtualizing multiple links into a single virtual pipe improves diversity robustness as well as energy efficiency.
There are multiple ways energy efficiency can be improved: for instance, by having differentiated radios for the most energy efficient use at a given bit rate, radio range, and protocol abstraction. The radios can be coordinated either as a “paging hierarchy” or as an aggregation of multiple simultaneous links. As an example of the former, a distinction can be made between a Low Power Radio (LPR), such as Bluetooth, that provides low idle power consumption, and a High Power Radio (HPR), such as WiFi, that provides high throughput capacity as a tradeoff against high idle power consumption. In one approach, the (always-on) LPR acts as a pager to the (normally-asleep or powered-down) HPR. The LPR therefore acts as a carrier of control information for the multi-radio communication link, whereas data is transmitted via the LPR and/or the HPR depending upon the throughput needs.
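A minimal sketch of such a paging hierarchy is shown below; the Radio stub, the wake threshold, and the message framing are assumptions made purely for illustration, not a specification of the coordination protocol.

```python
class Radio:
    """Stub radio with the minimal interface assumed by this sketch."""
    def __init__(self, name): self.name = name
    def power_on(self): print(f"{self.name}: powered on")
    def send(self, payload): print(f"{self.name}: sent {len(payload)} bytes")

HPR_WAKE_THRESHOLD_BPS = 250_000   # assumed threshold, purely illustrative

class MultiRadioLink:
    """LPR carries control/low-rate data; HPR is paged awake for bulk transfers."""
    def __init__(self, lpr: Radio, hpr: Radio):
        self.lpr, self.hpr, self.hpr_awake = lpr, hpr, False

    def send(self, payload: bytes, required_bps: int):
        if required_bps <= HPR_WAKE_THRESHOLD_BPS:
            self.lpr.send(payload)             # low-rate traffic stays on the LPR
            return
        if not self.hpr_awake:
            self.lpr.send(b"WAKE_HPR")         # LPR used as a paging channel
            self.hpr.power_on()
            self.hpr_awake = True
        self.hpr.send(payload)                 # bulk data goes over the HPR

link = MultiRadioLink(Radio("Bluetooth-LPR"), Radio("WiFi-HPR"))
link.send(b"status ping", required_bps=9_600)
link.send(b"x" * 100_000, required_bps=2_000_000)
```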
This idea can be extended across different radio abstractions (e.g., across cellular and WiFi links). For example, energy efficiency of VOIP delivery on smartphones can be improved by using the cellular channel to wakeup the WiFi radio for the VOIP call. WiFi can be more energy efficient for making the active call, but the cellular channel can be more energy efficient in quiescent/idle mode where it can be used as a wakeup or paging channel.
These and other results point to the fact that energy efficiency of radios and protocols is dependent upon the nature of the traffic and the application needs for performance and reliability. Multiple communication links open up a new dimension of system-level optimization to maximize connection robustness, maximize throughput, minimize latency, and minimize energy consumption for the MIEP.
We approach this optimization problem in a systematic manner by adding contextual awareness to the communication virtualization strategy. This contextual awareness information is biased based on parameters established by the user. Such parameters can include weightings for cost, bandwidth, latency, and connection reliability. The types of contextual awareness factors can include location, energy status of the MIEP, individual wireless channel link strength, and costs associated with any link at that moment (such as whether a wireless link is in “roaming” mode and is therefore more expensive). Additionally, based on past location history, one's future wireless link situation can be predicted and this information factored into the link virtualization strategy.
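The following toy scoring function illustrates how user-established weightings might bias link selection among candidate channels; the parameter names, weights, and candidate links are invented for the example and are not part of the described system.

```python
# User-established weightings: negative weights penalize cost and latency,
# positive weights reward bandwidth and reliability.
USER_WEIGHTS = {"cost": -1.0, "bandwidth": 0.5, "latency": -0.2, "reliability": 2.0}

def score(link: dict) -> float:
    """Weighted sum over the contextual parameters of one candidate link."""
    return sum(USER_WEIGHTS[k] * link[k] for k in USER_WEIGHTS)

links = [
    {"name": "cellular", "cost": 3.0, "bandwidth": 2.0,  "latency": 80, "reliability": 0.95},
    {"name": "wifi",     "cost": 0.0, "bandwidth": 20.0, "latency": 20, "reliability": 0.80},
]
best = max(links, key=score)
print("preferred link:", best["name"])
```

In practice the link parameters would themselves be updated from contextual inputs such as location, energy status, measured link strength, roaming status, and predicted future connectivity.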
This type of virtual wireless link takes advantage of intelligent management at both ends of the virtual channel, and this can be facilitated through use of a Server based Agent acting on behalf of the MIEP. The situation is diagrammed in
In
On the MIEP side, a multi-channel link layer unification API allows apps to access the virtualized resource. Much finer grain inter-channel interactions can occur on the MIEP than at the server based Agent since it has close physical proximity to the actual communication channels.
The complete communications channel (“pipe”) virtualization subsystem is represented by the functionality contained within the dotted lined box. Note there is no reason one of the links could not be a wired link, and there is no reason that the Agent must be running in a trusted environment.
IV.A.1 TPM Resident on MIEP Motherboard
In one embodiment, the TPM and the VMM code are resident on the MIEP motherboard. This approach offers the greatest security. However, this approach has the drawback that many existing mobile devices do not have integral Hardware Roots of Trust, such as TPMs. Further, there are practical and market barriers to installing the necessary trusted boot and VMM code on these mobile device motherboards.
IV.A.2 MTM as USB Slave
There are other alternatives that are attractive from an implementation and market penetration standpoint, particularly for markets such as Enterprise. One alternative that is especially appropriate for larger form factor mobile devices such as laptops is shown diagrammatically in
There are efforts underway today to incorporate TPM type functionality onto USB memory sticks (which is yet another embodiment). However, the implementation in
IV.A.3 MTM as USB Master
In another embodiment, similar to that diagrammed above in
IV.B. Achieving Trusted Boot from the MTM
A significant percentage of mobile devices existing today, particularly portable computers, can have their BIOS configured (by an Enterprise IT department for example) to “BOOT FROM USB” in the BIOS Boot Order menu where the USB driver is BIOS ROM resident. This allows the system to boot from the MTM and a Trusted Boot process can be executed from the MTM using the previously described Transitive Trust model to install a TVMM onto the HMD as shown in the diagram of
One challenge for the Trusted Boot from the MTM is to ensure that the HMD actually booted from the MTM—and that the HMD is not rootkitted and the boot spoofed. There are also new attacks that the hosted MTM implementation is subject to, including interception of the USB bus (a “man in the middle attack”), malicious software running on the host that mimics a host HMD that is booting from the MTM, thus “fooling” the MTM into believing a secure boot process had occurred, and malicious software that exists on the host “in the background” or “in hibernation,” avoiding detection while otherwise seeming to allow a secure boot to occur. Such malicious software might, for example, snoop on user keyboard or display I/O.
However, this implementation of MTM has several powerful resources at its disposal to mitigate such attacks. One resource is the secure time tick counter in the TPM on the MTM. This time tick counter holds the number of ticks in the current session. It can have programmable accuracy as fine as 1 μs. Virus infections (including rootkits) have been shown to be vulnerable to discovery through execution time measurements, so the MTM can also execute random code challenges on the host MIEP and measure the execution times.
The MTM can also access a secure Server, and “cryptographically tunnel” through the potentially malicious host. By contacting a host and mutually authenticating based on a shared secret known only to the MTM and the Server, the MTM can leverage mutual resources with the server to verify the integrity of the host. This situation is shown in
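A minimal sketch of shared-secret mutual authentication between the MTM and the Server, tunnelled through a potentially hostile host, is shown below; key provisioning, transport, and session-key derivation are reduced to placeholders and are not the actual protocol.

```python
import hmac, hashlib, os

# Assumed to have been provisioned out-of-band to both the MTM and the Server.
SHARED_SECRET = os.urandom(32)

def respond(challenge: bytes, secret: bytes) -> bytes:
    """Challenge/response using an HMAC over the shared secret."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

# MTM side: challenge the Server; a hostile host relaying the messages learns
# nothing useful because it never sees the shared secret.
mtm_challenge = os.urandom(16)
server_response = respond(mtm_challenge, SHARED_SECRET)     # computed by the Server
assert hmac.compare_digest(server_response, respond(mtm_challenge, SHARED_SECRET))

# ...and the Server symmetrically challenges the MTM.
server_challenge = os.urandom(16)
mtm_response = respond(server_challenge, SHARED_SECRET)
assert hmac.compare_digest(mtm_response, respond(server_challenge, SHARED_SECRET))
```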
Once the MTM has determined that a secure boot has taken place onto the HMD, all further communications over the USB bus are encrypted, eliminating simple snooping attacks on the USB bus.
In one embodiment, the operating state of a “warm” HMD is both preserved and usable after the Trusted Boot process from the MTM. In other words, the MTM is inserted into a running HMD and the VMM is dynamically installed “under” the existing OS and environment running on the HMD. In this scenario, the previously running OS and software environment on the HMD would, after the Trusted Boot from the MTM, be running in a VM hosted by the VMM. This approach has the advantage of leveraging the OS and the applications already resident on the HMD.
An alternate embodiment, which also preserves the state of the “warm” HMD is to HIBERNATE the HMD, and just before the HIBERNATE sequence finishes, initiate the Trusted Boot process from the MTM into the TVMM environment. Once the MTM is removed, or the user desires to revert to the previously running OS and environment, the HMD can be resumed from the HIBERNATED state.
When the TVMM is installed on the HMD as a result of the secure boot, the OS stored in the MTM (preferably LINUX) is loaded and runs on the HMD in one of the VMs hosted by the TVMM.
Achieving a secure boot from the MTM to the HMD preferably is a prerequisite for achieving secure user authentication, because the I/O paths through which the user authenticates are supported by the HMD and so preferably are “Trusted Paths.” It may be possible to add a fingerprint sensor integral to the MTM, and/or a microphone for speech recognition/authentication, which would make these additional authentication factors more secure.
One of the most reliable techniques for detecting a rootkit on a PC is to force a hard reboot (by removing power) and booting from a known good external media (after insuring the correct BIOS boot order), such as CD, to then scan the system.
To provide increased assurance of user control of the MTM/HMD system, there preferably is at least one control button on the MTM to initiate a System Reboot (Trusted Boot) of the MTM/HMD pair, and/or to initiate a System Verification of the HMD if a Trusted Boot has already occurred. In one implementation, there are three lighted status indicators, or one lighted status indicator capable of three different colors. Green might indicate successful Trusted Boot or verified and trusted system status, Orange might indicate Trusted Boot or verification underway, and Red might indicate that Trusted Boot or system verification has failed.
As a secure, portable, standalone compute capable entity, the MTM is a natural place from which to execute anti-malware software for an HMD, particularly upon initial boot and before any suspect HMD resident code is loaded and run.
Because of its ability to establish a cryptographic link to a secure Server and perform a mutual attestation protocol, malware signature databases and other information can be downloaded directly to the MTM from a Server, potentially through a hostile HMD. With these capabilities, the MTM can act as a disinfecting agent for HMDs.
As will be discussed further below, it is desirable, in order to minimize energy expenditure and compute burden on the MTM/HMD combination, that malware scanning tasks be place shifted/virtualized to the Server where possible.
In one aspect of the system model, the MIEP/server role is extended beyond that of a classic thin client client/server model in that the server and its capabilities can be viewed as an extension of, and subordinate to, the MIEP.
One of the important roles of the Agent Server (“Server”) is to optimize the functionality of the MIEP, particularly in the areas of security, energy efficiency, and/or mitigation of the functional limitations imposed by the OCC (Occasionally Connected Computing) model and physical and energy limitations of the MIEP. We call this MIEP functional enhancement “trusted functional virtualization”. This differs from typical web servers that provide web services on a demand basis to any client with minimal formal trust or security guarantees.
To fully realize the advantages of Server supported functional virtualization, the Server preferably is capable of securely and reliably attesting its state to the MIEP—and to do this it supports the infrastructure necessary for remote attestation, including Protected Capabilities (such as those found in the TPM), Hardware Roots of Trust along the TCG model, and a Trusted Boot Process. The Server trust and security architecture in effect mirrors the trust capabilities of the MIEP except that the superior resources of the Server allow it to create many more VMs to support numerous MIEPs. Also, the Server's observability across MIEPs provides an MIEP with additional capability for network-wide authentication and validation.
In the situation where the Server does not possess the security capabilities outlined above by the TCG, then the trust level can gracefully degrade to an “implied trust” model in the Server, although the virtualization functionality can be equivalent. This is most appropriate for enterprise situations where the Server supports a specialized provisioned client (MIEP) base, sits behind the corporate firewall, and is carefully managed and provisioned (so that trust can be implied).
In one embodiment, applications running in MIEP VMs can “spawn” VMs on the Server to create trusted hosting environments in which MIEP Agents can run. This spawning process preferably includes mutual authentication and attestation.
The Server side VMs preferably conform to an API to support Agent execution and communication with MIEP VM hosted applications. This API allows the use of a variety of Server types and implementations. The types of configurations that can be supported include the following shown below in Table 1:
As can be seen in the table, overall MIEP/Server system security level increases going down the table. When the other, weaker, levels of security are utilized, the user preferably would be presented with the choice of whether to authorize Agent execution on the Server at that security level via some form of trust User Interface.
In one approach, VMs can attest to their state when challenged by an application running in an MIEP VM that has spawned a corresponding Server VM. This provides the mechanism for creating the trusted environment necessary for applications hosted in MIEP VMs to run Agents on the server to act on a proxy basis for the MIEP, and to provide dynamic validation of the trusted environment.
In order to customize the security environment of the Server VM, applications running on the MIEP VM preferably can control the Agent host environment by specifying capabilities of spawned Server VMs, including allowed I/O modalities. This specification of the Agent host environment can take the form of MIEP generated policies. As an example, the application running in the MIEP VM may specify that only the TCP/IP port to/from the server VM be enabled.
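One way such an MIEP generated policy might be expressed is sketched below; the field names, values, and enforcement check are hypothetical and are not a normative policy format.

```python
# Hypothetical MIEP-generated policy constraining a spawned Server VM.
agent_host_policy = {
    "vm_image": "agent-host-minimal",
    "allowed_io": {
        "network": [{"proto": "tcp", "port": 443, "peer": "miep"}],  # only the MIEP TCP/IP port
        "usb": False,
        "console": False,
    },
    "attestation_required": True,
    "max_lifetime_seconds": 3600,
}

def violates_policy(requested_io: dict, policy: dict) -> bool:
    """Reject any I/O modality the MIEP policy did not explicitly enable."""
    allowed = policy["allowed_io"]
    return any(not allowed.get(modality) for modality in requested_io)

print(violates_policy({"usb": True}, agent_host_policy))   # True: USB is not enabled
```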
Note that this is the inverse of digital rights management situations where a content provider desires to specify policies on the MIEP VM, such as “locking down” the MIEP VM to which it is releasing content. Note that this is also the inverse of situations where corporate policy is to be enforced on the MIEP VM (such as allowed I/O modalities) to create a sufficiently secure environment to enable functionality such as Single Sign On (SSO), or the secure hosting of virtual desktop, terminal client, or push data environments.
For implementation efficiency reasons, it is usually desirable that applications running in different MIEP VMs be able to share the same Server VM, provided that sufficient security criteria are met by each participating MIEP VM.
Existing proposals to deal with trusted path issues involve adding hardware/software complexity to the MIEP. Examples include encrypted keyboard I/O, encrypted screen I/O, adding TPM type functionality to motherboard based Flash Memory, and adding TPM type functionality to USB memory sticks. We term this a “distributed TPM” approach where, because the central mobile device implementation (software environment/OS) itself is not trustworthy, the mechanisms necessary to establish trust in these peripheral system resources have been pushed out to the peripheral system resources themselves.
An important UI requirement for any MIEP that simultaneously supports trusted and untrusted VMs and application software is to indicate to the user the trust level of the application and/or VM he is interacting with. We call the overall capability of securely displaying to the user the trust state of the MIEP “visual attestation”.
An important functional requirement to support visual attestation is the ability to place portions, and in some cases all, of the framebuffer under exclusive control of the VMM, or of the console/DOM0 VM under direct control of the VMM that is responsible for physical hardware I/O. This dedicated portion of the framebuffer under VMM control then provides trust status feedback according to configurable policies, and can be used for other user authentication purposes. There is then at all times a “trusted path” from the VMM to said dedicated framebuffer portion of the display.
There are two fundamental UI operating modes to consider:
In the Windowed mode case, the challenge is how to provide secure display based I/O to trusted software within a framebuffer shared by untrusted software, and to do so with minimal impact on either the performance or the pre-existing windowing models and behavior. It is desirable to implement this simultaneous support of trusted and untrusted “windows” as it provides a more seamless user experience.
Refer to
A prerequisite for correct operation is that there be a trusted path to the keyboard and mouse. That is, once the cursor is placed within the trusted window, that window has I/O focus, and that focus cannot be changed by another application until the user moves the cursor out of the trusted window; only user generated movements of the mouse can move the cursor. This prevents untrusted software from “stealing” keystrokes by momentarily switching focus to another window without the user's intent and action of moving the cursor out of the trusted window. Only while the mouse is within the perimeter of the trusted window is the trust indicator at the top of the screen set to the trusted state (perhaps displaying a green color).
Note also, that once a trusted window is to be released by a trusted application, where that area of the display is to be “returned” to the framebuffer for use by potentially untrusted applications, that section of the framebuffer should be first written with a random pattern. One skilled in the art can readily understand variations to the above approach, such as wishing to display the window representing an untrusted application within a framebuffer generally controlled by a trusted application—but all rely on the existence of trusted paths to the framebuffer, the keyboard, and the mouse—and the enforcement of transparency and predictability of I/O focus to the user.
The “trust bar” at the top of the display is controlled exclusively by the VMM or console/DOM0 VM, and, in this example, overlays the screen image controlled by the host VM and/or the application(s) hosted by that VM. The trust bar overlays the underlying window in a semi-transparent manner, indicating that this VM can be trusted. This is one visual method of indicating trust. Another might be to frame the entire display with a thin border of a certain color, such as a shade of green. If the current display/framebuffer owner cannot be trusted, we use the convention of indicating untrusted status by turning the trust bar a transparent red with a black border around it. One skilled in the art can readily understand there are many possible visual mechanisms of displaying trust level—but none are reliable unless the part of the display/framebuffer displaying the trust level is exclusively controlled by a fully trusted resource, such as the VMM, guaranteeing a trusted path to that physical I/O resource.
V.H.1 Extending Trust Level Indication to Server Based Agents
Note that the trust bar concept, coupled with the ability of the MIEP and the Server to mutually attest to each other, can be extended to also enable the display to the user of the trust level of the software running on the Server. An example would be a VM that the user has spawned on the Server to host an Agent or a service on the MIEP's behalf. If the VM and hosted Agent can successfully attest to the correctness of their state to the MIEP, that information can be displayed in the trust bar in a manner similar to that described above.
With the continuing rapid decline in the price per bit of non-volatile memory (particularly NAND FLASH), a memory technology that uses very little quiescent power, it is attractive to leverage this resource to maximize functionality under the OCC model while minimizing MIEP energy requirements.
One approach is to create a substantial cache on the MIEP, called the Global State Cache (GSC), that caches user internet state, including data and programs. The GSC is managed on a contextually appropriate basis. Relevant contextual variables include time, location, available internet bandwidth, energy availability, and task. Although it is tempting to use simple “fetch ahead” type strategies to manage the GSC, such strategies have been shown to be energy inefficient.
The GSC will help maintain operational coherence in support of the OCC model. By operational coherence we mean that, should the connection be lost, there is sufficient state in the MIEP to continue meaningful computation/work for the typically expected connectivity loss duration.
One strategy for maintaining cache contents that offers significant improvements is to use a running history time series of past contextual data, such as location and task, to predict future needs and thereby optimize the GSC maintenance policies.
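The following toy predictor illustrates the idea of using a context history to drive GSC prefetch decisions under an energy/bandwidth budget; the contexts, the item mapping, and the prediction rule are invented for the example and much simpler than the contextually aware policies described here.

```python
from collections import Counter

# Running history of (location, task) context samples observed on the MIEP.
history = [("office", "email"), ("train", "reading"), ("office", "email"),
           ("train", "reading"), ("train", "reading")]

# Hypothetical mapping from a context to the state worth caching for it.
ITEMS_BY_CONTEXT = {
    ("office", "email"): ["inbox_index", "contact_list"],
    ("train", "reading"): ["queued_articles", "offline_maps"],
}

def predict_next_context(history):
    # Simplistic rule: assume the most frequent recent context recurs.
    return Counter(history[-5:]).most_common(1)[0][0]

def plan_prefetch(history, cache, budget=2):
    context = predict_next_context(history)
    wanted = [item for item in ITEMS_BY_CONTEXT.get(context, []) if item not in cache]
    return wanted[:budget]        # respect an energy/bandwidth budget

print(plan_prefetch(history, cache={"offline_maps"}))   # ['queued_articles']
```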
Leveraging Trusted Computing technologies as outlined in the previous sections allows for the development of mobile applications and services using a distributed virtualization model that spans the network between them: Virtual Applications that provide some service to mobile users, combining the rich context and availability of mobile platforms with the reliability and ubiquity of web services in a seamless manner. This is facilitated by a system, enabled by a TVMM with a core root of trust that preferably:
In one approach, a platform or environment supports applications that take advantage of connectivity and mobility through the use of Virtual Services. In this platform, trusted application components on the MIEP are associated with trusted service components running on the TSEP. These components, which are running in trusted VMs at both Endpoints, attest to and communicate with each other through an encrypted link that is dedicated to their association. Because of this link, these mobile and service-based application components comprise a single Virtual Application that spans the network between them in a transparent way.
Note that a TSEP is generally resident on a server, but not necessarily so. The TSEP could just as easily be resident on another VM on the MIEP.
Note the following:
The characteristics of the Virtual Service architecture changes somewhat when one considers the implementation of multiple tiers that are common in Service Software Architectures.
Note the following:
In addition:
Although not an ideal embodiment, a reasonable alternative embodiment could utilize OS hosted VMs, perhaps using a Type-2 hypervisor, to provide some reasonable level of security and trust for the Agents hosted on the service architecture. While the VM is hosted on an untrusted platform, specific measures can be taken to ensure a level of trust.
Storage Encryption—Storage utilized at the data store can be encrypted utilizing some standard form of repository encryption that is keyed off of key material originating from the MIEP.
Memory—The OS-hosted VM can be augmented to provide encryption for at least parts of the memory space assigned to the VM designated as critical. In fact, given the availability of processing power and the scaling aspect of the service architecture, the entire VM memory space can be encrypted.
Attestation—It is not possible to attest for the host OS or the platform in this architecture, but the static aspects of the VM can support attestation. Encryption of the VM storage and memory space makes the spoofing of VM attestation information difficult and time-consuming.
Path Limiting—Generally the data utilized or stored for the implementation of the Agent originates with the MIEP, especially for Agents that are spawned by the user via interaction with the MIEP. In this general case, the access to devices and resources on the server can be limited to the processor, memory, storage and network ports. Network access can utilize standard encryption methods for securing information passed between the MIEP and the Agent as well as for information passed between the Agent and the Internet.
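As one illustration of the storage encryption measure described above, the following sketch derives a repository key from key material originating at the MIEP and encrypts records before they reach the data store; it assumes the third-party Python 'cryptography' package, and the key transport and storage layout are simplified placeholders.

```python
import base64, hashlib, os
from cryptography.fernet import Fernet

# Key material assumed to have been delivered to the Agent by the MIEP.
miep_key_material = os.urandom(32)
fernet_key = base64.urlsafe_b64encode(hashlib.sha256(miep_key_material).digest())
repo = Fernet(fernet_key)

record = b"agent working data"
stored_blob = repo.encrypt(record)           # what actually lands in the data store
assert repo.decrypt(stored_blob) == record   # only holders of the MIEP key can recover it
```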
In
The secured OS hosted virtualization system described above can be augmented through the introduction of some components that support the complete TVMM model. One possible example is the use of a TVMM Based Agent Master, which supports the trusted boot process and that can fully attest to the MIEP. As depicted in
This approach does not per se prevent the hacking of OS hosted VMs on the Web or Application servers, but it does make that hacking much more difficult, due to the ephemeral nature of these VMs. They are spawned to service one specific task or request and are removed as soon as they are done. Hacked Agents cannot survive the spawning process because their code is never committed to storage on the running server. Furthermore, if one of the VM instances is exposed, only the user data it is trusted with is at risk. The user keys do not leave the VM Master.
In short, using this approach all keys are secured by a fully attestable VM Master, the user data store is secured by the VM Master, and the VM Master will only honor fresh requests made by a VM that was spawned by it and is still attestable. Furthermore, the OS hosted VM can only access the limited subset of secure data registered to it.
In
We describe some possible Agents that are facilitated by aspects of the invention. These examples below represent just a few of many that are possible.
A web browsing agent acts as a proxy for the user for the purposes of improving privacy and anonymity and decreasing the code size and energy “footprint” of the browsing functionality on the MIEP. The web browsing Agent virtualizes the user, placeshifting him to the server from the perspective of the target web service.
The following benefits can accrue:
Much of the content of typical web pages consists of advertisements, and these advertisements are often image content in the form of .gif or .jpg files that dominate the web page in terms of total data payload. The purpose of the filtering Agent is to remove and/or filter this extraneous content to minimize downstream bandwidth requirements (and related transmission energy expenditure) to the MIEP and required rendering energy. This Agent would be preferentially a component of the Web Browsing Agent, but could be a standalone Agent if a Web Browsing Agent is not used. This type of Agent is also beneficial to the wireless network carrier as the wireless network capacity (the number of users that can be supported) can be increased if the average data bandwidth to each user can be decreased by filtering and compression.
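A toy version of such filtering is sketched below; a real Agent would parse the page structure and apply carrier or user policy rather than a regular expression, and the size threshold is an arbitrary assumption.

```python
import re

MAX_IMAGE_BYTES = 20_000   # illustrative threshold for "heavyweight" images

def filter_page(html: str, image_sizes: dict) -> str:
    """Drop image references whose payload exceeds the threshold."""
    def keep(match):
        src = match.group(1)
        return match.group(0) if image_sizes.get(src, 0) <= MAX_IMAGE_BYTES else ""
    return re.sub(r'<img[^>]*src="([^"]+)"[^>]*>', keep, html)

page = '<p>article</p><img src="banner.gif"><img src="icon.png">'
print(filter_page(page, {"banner.gif": 350_000, "icon.png": 4_000}))
```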
Security requires energy expenditure, and one aspect of the invention moves as much of the anti-malware related energy expenditure, software complexity, and code size footprint to the Server as possible. This implies a paradigm shift in the current monolithic application model of anti-malware software for the PC in that in the mobile world the functionality is preferably partitioned between the MIEP and the trusted server. Provisioning can also be simplified as much of the actual scanning process is centralized, minimizing the need to “push” malware signature databases to leaf nodes.
IP traffic that arrives in plaintext can be easily scanned by the Agent. Examples of such traffic might be email where the Agent is scanning for SPAM, etc.
An advantage of the trusted Agent approach is that the Agent may have access to keys used by the MIEP for decryption of IP traffic, can therefore decrypt that traffic, and thereby scan a larger percentage of the traffic bound for the MIEP.
From the enterprise perspective, malware scanning by a Server based Agent, combined with policies that “lock down” the corresponding VM on the MIEP to maximize security and that provision it uniformly, constitutes an important component of “extending the corporate firewall” around the MIEP.
Another potential use for a Malware Agent is to scan data that is “passed thru” the MIEP to the Server. If the MIEP is browsing the web directly and wishes to download potentially harmful content, it may choose to upload the data to the scanning Agent on the Server to be scanned, or perhaps redirect the data stream directly to the web based scanning Agent, rather than perform the scan locally, depending on energy and cost tradeoffs of local vs. remote scanning.
Polymorphic/metamorphic viruses and zero-day attacks can escape static signature detection, and for these threats behavioral monitoring during runtime is often employed to flag suspicious behavior. Typical techniques include instrumenting kernel level routines and hooking the system API calls and passing data in real time to analysis software that utilizes heuristic rule systems or employs learning/neural net techniques. The drawback is that these systems run continuously, and therefore can consume considerable energy.
An alternative system is to instrument the MIEP VM, and then pass compressed “signatures” of real-time execution behavior to the Trusted Server based Behavioral Monitoring Agent for analysis. If the analysis energy expenditure is larger than the data transmission energy expenditure, then the approach is advantageous, although the response latency is likely increased. So for situations where rapid response is critical, it may be necessary to run that specific behavioral monitoring on the MIEP.
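The following sketch illustrates, with invented energy figures and a made-up event trace, how a compressed behavior signature might be formed on the MIEP and how the transmit-versus-analyze-locally tradeoff could be evaluated.

```python
import json, zlib
from collections import Counter

def behavior_signature(events):
    """Toy "signature": a compressed histogram of recent system-call names."""
    histogram = dict(Counter(events))
    return zlib.compress(json.dumps(histogram, sort_keys=True).encode())

def offload_to_server(signature, tx_nj_per_byte=50, local_analysis_nj=5_000_000):
    """Ship the signature only if transmission costs less energy than local analysis."""
    return len(signature) * tx_nj_per_byte < local_analysis_nj

sig = behavior_signature(["open", "read", "read", "connect", "write"] * 40)
print(len(sig), "bytes; offload:", offload_to_server(sig))
```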
Most P2P networks, including examples such as Napster, BitTorrent, KaZaA, and eDonkey, require that the network client (peer) support an upstream data channel that is independent of actual user generated upstream data, in order to maintain the network. However, this upstream data support requirement usually is not desirable for the following reasons:
Like the Web Browsing agent, the P2P Agent can service the P2P network on behalf of the MIEP without exposing the MIEP identity.
A classic “thin client” implementation is one where the client simply presents a viewport into an application running on a server. Providers of such “Virtual PC” thin clients include NEC, Sun, CLI and others running software from providers such as Citrix. This model is facilitated by a dedicated reliable high bandwidth link between the client and the server. Data passing between the thin client and the server are often compressed to minimize enterprise network bandwidth requirements.
However the variable quality of the communications link between the MIEP and the Server, resulting in an Occasionally Connected Computing (OCC) model, makes the classic Thin Client model more difficult, so the MIEP should be capable of standalone operation. One goal of a data compression and transcoding Agent then is to support a mobile OCC model by reducing energy expenditure at the MIEP and reducing data transfer latency.
One of the prevailing current commercial examples of a data compression and transcoding system is the Opera Mini Browser. Opera Mini fetches all content through an Opera proxy server that runs the layout engine of the browser. The engine on the proxy server reformats web pages into a size that is suitable for small screens. The content is compressed and delivered to the phone in a markup language called Opera Binary Markup Language (OBML). Content is typically compressed by 70-90%. However, there are some difficulties with the centralized proxy server approach to this functionality:
We address these issues with a trusted Agent based approach that is personalized for each MIEP, and that can be deployed on a decentralized basis.
This functionality was discussed previously in the Communications Channel Virtualization section. A trusted Agent running on the Server acts as the “sink” to aggregate the multiple communications links on the “Server side” of the Internet Cloud. Requests to web based services, for example, are then relayed back out over the Internet by the Agent to the service provider.
The data storage Agent acts as a broker to store/retrieve data to/from the various storage locations (such as Amazon's Simple Storage Service—S3) via the web. The Agent makes intelligent decisions about where to store the MIEP data based on user weighted parameters such as cost, access latency, and storage location. The Agent handles encryption/decryption of data before it is forwarded to the appropriate storage location, thereby relieving the MIEP of that compute and energy burden.
This agent mediates classic thin client functionality in that it interfaces a viewport on the MIEP to an application running on behalf of the MIEP on a VM on the Server. This agent acts as a virtual screen and UI I/O channel for the application, passing the screen image down to the MIEP for rendering on a viewport. With this capability, software can be run on the Agent that is not “installed” on the MIEP or where the energy cost is too high to run locally or where the local compute resources are inadequate. An example might be an engineer that wishes to run a large Matlab simulation.
One purpose of the Global State Cache (GSC) is to improve MIEP functionality under the OCC computing model while minimizing MIEP resource requirements. This Agent uses contextual clues, past behavior (including location and internet connection quality), current MIEP status and task set, along with user specified parameters, to prefetch into the cache that state (data, programs, etc) which will maximize MIEP functionality at present and near future. Since prefetching into the cache that state which is not necessary is wasteful of energy and communications bandwidth, a highly intelligent contextually aware GSC Management Agent can be advantageous.
These types of Agents broker MIEP transactions when the MIEP or the user is unavailable. An example might be bidding on an eBay item where the user does not want to bid until a few seconds before the auction ends, but is not confident in the communications availability or latency of the MIEP. Another example might be a situation where the user wants a transaction Agent to monitor airline prices to shop for the best deal to a destination within a certain set of parameters. It is important that the Agents be trusted and operate in a trusted environment so that the user can leave with the Agents those passwords or other authentication and purchase information necessary (such as credit card information) for these Agents to act as a full proxy on behalf of the user.
This Agent maintains the various identities (authentication data, etc.) used to interact with a variety of web sites and services to create a virtual Single Sign On (SSO) function to the web. The Agent based approach has an advantage over a centralized approach in that the Agent can be owned and controlled by the user, allowing Agent code and security measures to be personalized to individual user requirements. Another advantage over centralized systems that propose leveraging SIM cards at the Endpoint for authentication purposes is that wireless carriers often do not expose SIM data outside their network, typically supplying only session based IP addresses to the web. In other words, that authentication is not end-to-end. Use of an HROT such as the TPM ensures secure end-to-end authentication regardless of which network the MIEP is utilizing to communicate with the web.
The relationship between the MIEP VM instance and the Server VM instance is shown schematically in
VIII.A.1 Authentication Prior to Attestation—Use of AIKs
As was highlighted in the example above, the ability for independent parties to mutually attest to each other's state is highly desirable. However, before attestation can take place the parties must authenticate each other's identity. This is done indirectly by digitally signing the PCR (Platform Configuration Register) values—residing in the TPM—to be delivered to the challenging entity using an asymmetric key pair.
Since Endorsement Keys (EK) are never made public, the TCG protocol calls for the use of a pseudonym, or alias, of the EK in the form of the Attestation Identity Key (AIK). The AIK is also an asymmetric key pair, and a TPM can create a virtually unlimited number of AIKs. AIKs are signature keys that are used to sign PCR values for delivery to a challenging third party.
However, for privacy reasons, it is preferable that the AIK not be linkable to the platform/TPM that created it, and so the TCG has designed a trusted service provider, or Trusted Third Party (TTP), the Privacy Certification Authority (PCA), to provide AIK Certificates.
VIII.A.2 Attestation Protocol Using AIK Certificates
We describe below a representative attestation protocol for a challenger wishing to run a secure application in a secure environment on the MIEP. A similar protocol occurs when a challenger (an application on the MIEP) wishes to run an application in a secure environment on the Server. This is not meant to be a definitive description. There are many possible variations.
Note that since the TPM is virtualized in our preferred embodiment, the protocol appears to the challenger as if it is dealing with a platform running a single OS and possessing a single TPM. This embodiment then supports our “compartmented attestation” model.
When a new TPM starts to function for the first time, a TPM Activation Protocol is run in which either the manufacturer or a Trusted Third Party (TTP) Certification Authority (CA) generates an Endorsement Key pair (EK_PUB, EK_PRIV), consisting of the public (_PUB) and private (_PRIV) keys, which are installed into protected locations in the TPM, and also generates an Endorsement Certificate (EK_PUB_CERT), signed by the manufacturer or CA, containing EK_PUB, the TPM version number, and manufacturer or CA identification information. The EK_PUB_CERT is stored on the platform, but not on the TPM.
The owner of the platform “takes ownership” of the TPM by inserting a shared secret into the TPM that is encrypted by EK_PUB.
The EK may not be used to create signatures; it may only be used to establish the TPM owner and to create AIKs, which act as pseudonyms for the EK. AIK key pair generation is completely controlled by the platform owner. AIKs, in turn, may not be used to encrypt, but only for purposes of digital signature by the TPM on information such as PCR values.
AIK Certificate Generation: In order to avoid linking the AIK to the platform identity, and thereby protect the user's anonymity, a TTP CA is used—the so called Privacy CA (PCA) to provide a certificate for the AIK_PUB part of the AIK key pair.
An example of an AIK certificate generation protocol is diagrammed in
After generating an AIK pair, the platform requests that an AIK certificate (AIK_PUB_CERT) be generated by sending to the PCA, via secure channel or encrypted with PCA_PUB, a bundle consisting of the AIK_PUB, the EK_PUB_CERT, and some other information. The PCA verifies the credentials by first decrypting the bundle using PCA_PRIV, verifies that the EK_PUB for that TPM is on its list, and returns to the platform an AIK_PUB_CERT that has been encrypted with EK_PUB (the AIK_PUB_CERT is signed with PCA_PRIV).
Remote Attestation: At the start of the remote attestation protocol, the MIEP platform holds the EK pair, the EK_PUB_CERT, the AIK pair, the AIK_PUB_CERT, and the PCA_PUB. Although the PCA holds the PCA pair, the EK_PUB, and the EK_PUB_CERT, it is not involved after the AIK certificate is generated. The challenger holds the PCA_PUB and the EK_PUB.
An example of an attestation protocol is diagrammed in
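A simplified sketch of such an exchange is shown below, assuming the third-party Python 'cryptography' package; certificate validation by the challenger and the PCR reference policy are reduced to placeholders, and the AIK is generated inline rather than held in a TPM.

```python
import os, hashlib
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Stand-in AIK pair; in the described system AIK_PRIV never leaves the TPM
# and AIK_PUB is delivered inside AIK_PUB_CERT.
aik_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
aik_pub = aik_priv.public_key()

# Challenger side: a fresh nonce prevents replay of an old quote.
nonce = os.urandom(20)

# Platform side: quote the selected PCR values and sign (nonce || PCRs) with AIK_PRIV.
pcr_values = [hashlib.sha1(b"measured layer %d" % i).digest() for i in range(3)]
quoted = nonce + b"".join(pcr_values)
signature = aik_priv.sign(quoted, padding.PKCS1v15(), hashes.SHA256())

# Challenger side: verify the signature with AIK_PUB, then compare the PCRs
# against a reference policy (here, trivially, the same values).
aik_pub.verify(signature, quoted, padding.PKCS1v15(), hashes.SHA256())
trusted = pcr_values == [hashlib.sha1(b"measured layer %d" % i).digest() for i in range(3)]
print("platform attested:", trusted)
```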
VIII.A.3 Direct Anonymous Attestation
A weakness with the use of a Privacy Certification Authority (PCA) to certify an AIK is that the third party may not in fact be trusted and that it is also possible to associate AIKs with a given device. To address this shortcoming the TCG has adopted a protocol known as Direct Anonymous Attestation (DAA) that is a group signature where the signature cannot be opened—and anonymity is not revocable.
Detractors of this type of group signature approach point out that if it is broken, it will be broken everywhere—a weakness of this type of approach that was made painfully public when the Content Scrambling System (CSS) was cracked. This weakness is known as BORE (Break Once, Run Everywhere).
Mobility is more than just about the ability to work and access resources and information when mobile. It is also about the ability to migrate work environments. The ability to migrate a complete environment (virtualized environment) between platforms is very powerful, particularly where at least one of the platforms is mobile and where the communications channel is wireless. Such a capability is facilitated by using a VMM model.
The MTM reduces mobility to its core essence of a mobile Root of Trust: a minimal portable repository of personal identity and Trust that is capable of leveraging a variety of hosts to access the internet, using security-based mechanisms to extend a Trusted Environment to the host.
Meta-data, that is, information about the nature of given data, has been used in software engineering to provide capabilities for delayed declarations (a common example being the use of reflection in Java). Meta-data can also be used for conveying contextual or environmental knowledge to a system. For instance, an operating system can be made aware of memory performance issues caused by the cache/paging subsystem, or of processor slowdown/shutdown. Meta-data has also been used in adaptively controlling transcoding of video data for energy-efficient mobile devices. In another aspect of the invention, meta-data is used for contextual awareness such as the following elements:
VIII.D.1. Remote Provisioning
The ability to reliably, securely, and remotely provision the MIEPs they manage is crucial for both the Enterprise and cellular Carriers. For Carriers, the driving needs include:
For Enterprise, the driving needs include:
Aspects of the invention significantly improve the ability of Enterprise IT departments and Carriers to meet these needs: by virtue of the HROT, trusted boot, and integrity measurement and attestation capabilities, they can be assured that the MIEP is in a known good state and that secure trusted paths exist for user input to support reliable authentication and user I/O. Furthermore, the remote provisioning entity can create separate, strongly isolated environments on the MIEP, by using VMs on the MIEP that are individually provisionable and attestable, thus providing the provisioning entity with a great deal of flexibility in Endpoint management and configuration.
VIII.D.2 Applications
Existing mobile internet Endpoints that claim to offer high security typically achieve that security via a closed platform. However, as the market moves towards open platforms, spurred by open networks, more complex operating systems, the ability to download and install arbitrary applications, and end users using their personal Endpoints for corporate purposes, aspects of the invention offer a method of achieving, on an open platform, security that is typically better than that of a closed platform.
Significant effort is being expended in the Enterprise to support centralized client/server computing, most recently in a form known as server-based desktop virtualization. However, this approach has a number of drawbacks:
Two important reasons typically cited as to why the Enterprise does not place greater emphasis on Endpoint based desktop virtualization as an alternative are provisioning and security. Both of these Endpoint issues are addressed by aspects of the invention, enabling Endpoint based desktop virtualization to become a predominant Enterprise mobile computing paradigm.
Some example applications, and how they would be enabled by various aspects of the invention, are highlighted below:
Secure Terminal Client Hosting. A VM that is provisioned to be “locked down” on the MIEP, such as the locked down VM in
Secure MIEP Based Desktop Virtualization. Similar to the Terminal Client hosting example above, a strongly provisioned “locked down” VM on the MIEP can be used to host an Endpoint based desktop virtualization system.
Secure Push Data Hosting. Secure push email, calendar, and contact lists are the staple of Enterprise mobile Endpoint functionality, and typically the security of those push applications is via closed platforms. Aspects of the invention offer the opportunity of obtaining the “security of a closed platform on an open platform” through the HROT, trusted boot process, and integrity measurement capabilities to host push data applications on the MIEP.
Secure Autonomous Lost Data Destruction. With a HROT and trusted boot process, the MIEP is capable of reliable erasure of lost data on an autonomous basis, i.e. the data wipe does not require connection to the internet for the wipe to be initiated and logged by the IT department. IT can be confident that the data has been wiped, or safely sequestered via encryption, based on policies set on the MIEP.
The data wipe can be initiated on the MIEP based on policies, such as requiring that the MIEP "phone home" on a periodic basis and, if that is not achieved, initiating the wipe of sensitive data.
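A minimal sketch of such a policy, assuming a hypothetical weekly "phone home" interval and a locally held data-encryption key whose destruction sequesters the data, is given below; the class and field names are illustrative only.

    import time

    PHONE_HOME_INTERVAL_S = 7 * 24 * 3600        # hypothetical policy: check in weekly

    class SensitiveStore:
        def __init__(self):
            self.data_key = b"\x01" * 32         # key protecting sensitive data at rest
            self.last_checkin = time.time()

        def record_successful_checkin(self):
            self.last_checkin = time.time()

        def enforce_policy(self, now=None):
            now = time.time() if now is None else now
            if now - self.last_checkin > PHONE_HOME_INTERVAL_S:
                # Autonomous wipe/sequestration: destroying the key leaves the data
                # encrypted and unrecoverable; no network connection is required.
                self.data_key = None
                return "wiped"
            return "ok"

    store = SensitiveStore()
    print(store.enforce_policy())                                  # "ok"
    print(store.enforce_policy(now=time.time() + 8 * 24 * 3600))   # "wiped"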
Attestation, as defined by the TCG, is "the process of vouching for the accuracy of information". Attestation can take various forms, also defined by the TCG, including:
In the discussion below we use the terms verify and verification to mean an operation that is used to measure the validity or trustworthiness of a particular component of the system, which in turn can generally be viewed as a step in an attestation process.
Current trusted boot models, represented by the trusted boot procedure outlined by the TCG (http://www.trustedcomputinggroup.org), take a fairly static view of the attestable state. That is, only the state of the system immediately after boot can typically be attested. But the system state may change during execution with the loading of dynamically linked libraries, modifications to the Windows registry, etc. Thus the system can drift from the initial attested state, verification becomes less reliable, and attestation becomes more difficult. Existing "one time" trusted boot and the resultant attestation models therefore limit the use of attestation in real-world situations. A method is needed to extend attestation techniques to deal with dynamic changes in the system state.
One approach is to run verifications in the background as the system state evolves, and “cache” results either by extending the PCR registers directly in the HROT or by storing verification results in sealed storage (“blobs”). While this may work, it leads to very high resource utilization, thus limiting its use on a sufficiently continuous basis. Furthermore, attestations become more time consuming as the number of extensions to the PCRs and the resulting attestation chains grow. A method is needed to extend attestation to cope with execution state mutation that does not require a significant attestation compute burden.
We introduce a concept called "dynamic attestation" that extends the attestation model through the software hierarchy from the BIOS to the application level while adhering to the general "trust ratcheting" principle inherent in the TCG-based use of the PCRs. Encrypted or sealed storage can be utilized to extend the PCR model to each level in the hierarchy, so that any and all levels, including applications, can be verified independently from one another. They can be sealed against the entire ratchet chain beneath a particular level, or just against the invariant component of that level. We call these typically encrypted or sealed extensions of the PCR ECRs ("Extended Configuration Registers").
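The following minimal Python sketch illustrates one way an ECR could be realized: a software register that ratchets in measurements using the same extend rule as a PCR, and whose value is bound (here symbolically, via an HMAC) to the ratchet chain of the layers beneath it. The class name ECR and the sealing mechanism are illustrative assumptions, not a normative definition.

    import hmac, hashlib

    def extend(register, measurement):
        # Same ratcheting rule as a TPM PCR: new = H(old || H(measurement)).
        return hashlib.sha256(register + hashlib.sha256(measurement).digest()).digest()

    class ECR:
        def __init__(self, lower_chain):
            # lower_chain stands for the attested state of the layers beneath this
            # level (e.g., BIOS and VMM measurements) that the ECR is sealed against.
            self.lower_chain = lower_chain
            self.value = b"\x00" * 32

        def measure(self, component):
            self.value = extend(self.value, component)

        def sealed_blob(self):
            # Symbolic "seal": binds the ECR value to the ratchet chain beneath it.
            return hmac.new(self.lower_chain, self.value, hashlib.sha256).digest()

    vmm_pcr = extend(b"\x00" * 32, b"measured VMM image")
    app_ecr = ECR(lower_chain=vmm_pcr)
    app_ecr.measure(b"application binary")
    app_ecr.measure(b"loaded plug-in")
    print(app_ecr.sealed_blob().hex())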
Dynamic attestation operates at a finer granularity than standard models and deals with mutating state using a layered approach. This makes the verification process incremental and computationally less burdensome.
To achieve dynamic attestation, we make a distinction between invariant and modifiable state. Invariant state information is useful in reducing the size of the verification task. We also architect the system to leverage “packaged and verified” software entities where possible to maximize system robustness. This is a hard problem in practice, particularly for the Windows environment, since it is difficult to create cleanly packaged and verified software modules. We note that VMs themselves, when first instantiated, are good examples of such “packaged and verified” entities.
Important modifiable state areas to consider include memory allocation/deallocation, the execution stacks, and the registry.
The system designer can make distinctions among modifiable state, including:
The keys for encrypted state can be stored in the TPM, encrypted and stored elsewhere in the system, or, preferably, kept as sealed blobs that are sealed against aspects of the system such as the invariant state of the current software level or the attestation state of the software stack up to that level.
To reduce computation, allocated memory can be brought into and out of RAM in large chunks, amortizing encryption/decryption overhead. To reduce tampering and memory/TLB attacks, the VMM should ensure that those chunks are isolated in RAM.
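As a rough illustration of the chunked approach, the sketch below splits an allocated region into fixed-size chunks and protects each chunk independently as it is moved out of protected RAM. The hash-derived keystream is only a stand-in for real VMM-managed (e.g., authenticated) encryption, and the chunk size and function names are illustrative.

    import os, hashlib

    CHUNK = 1 << 16                  # illustrative chunk size (64 KiB)

    def keystream(key, nonce, length):
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def protect_chunk(key, index, chunk):
        # XOR with a per-chunk keystream; a real design would use an authenticated
        # cipher managed by the VMM rather than this stand-in.
        ks = keystream(key, index.to_bytes(8, "big"), len(chunk))
        return bytes(a ^ b for a, b in zip(chunk, ks))

    def evict_region(key, region):
        # Split the allocated region into large chunks and protect each chunk
        # independently as it leaves protected RAM.
        return [protect_chunk(key, i, region[off:off + CHUNK])
                for i, off in enumerate(range(0, len(region), CHUNK))]

    key = os.urandom(32)
    region = os.urandom(3 * CHUNK + 1234)
    blobs = evict_region(key, region)
    restored = b"".join(protect_chunk(key, i, b) for i, b in enumerate(blobs))
    assert restored == region        # XOR stream makes protect/unprotect symmetric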
Stack state is more challenging to protect. It is unreasonable to expect that an application stack can be effectively verified as a block of memory, because specific aspects of the stack are nondeterministic and contain information, such as specific hardware and memory addresses, that will change from system to system and even from execution to execution within the same system. However, portions of the stack, though volatile, remain predictable enough that "scrubbed" stack trace data, that is, abstracted or simplified representations of the stack, can be conditionally verified at principal functional checkpoints. This provides protection from certain types of semantic attacks, such as library substitutions and malicious plug-ins and components, since only certain program execution flows are allowed through known signed libraries, plug-ins, and components. Furthermore, the ability of a program to support stack state validation need not require explicit coding by the application. Since the nature of the execution stack is to store the function or method call history, a validation tool could link in bindings to validation routines so that a PCR measurement may be extended according to some scheme. This allows the program to take measurements and validate stack state at specified stack locations with no additional programming.
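One possible realization of checkpoint-based stack validation is sketched below: at each checkpoint the call stack is scrubbed down to the sequence of function names, hashed, ratcheted into a software measurement register, and compared against a set of known-good flow digests. The use of Python's inspect module and the names checkpoint, scrubbed_stack, etc. are illustrative assumptions, not part of the described system.

    import hashlib, inspect

    ALLOWED_FLOW_DIGESTS = set()     # digests of known-good ("signed") execution flows
    measurement = b"\x00" * 32       # software stand-in for a PCR/ECR

    def scrubbed_stack():
        # Keep only function names, which are stable across machines and executions,
        # dropping addresses and other nondeterministic detail.
        names = [frame.function for frame in inspect.stack()[1:]]
        return "/".join(reversed(names)).encode()

    def checkpoint():
        global measurement
        digest = hashlib.sha256(scrubbed_stack()).digest()
        measurement = hashlib.sha256(measurement + digest).digest()   # ratchet
        return digest

    def library_entry_point():
        return checkpoint()          # a checkpoint at a principal function boundary

    def application():
        return library_entry_point()

    golden = application()           # record the known-good flow digest once
    ALLOWED_FLOW_DIGESTS.add(golden)
    assert application() in ALLOWED_FLOW_DIGESTS   # later executions must match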
We briefly discuss each level in the software hierarchy:
BIOS: The BIOS is considered invariant. It usually serves as the RTM (Root of Trust for Measurement). Access to the BIOS is protected/controlled.
VMM: The VMM itself is readily attestable at any time, as it is largely invariant except for some state information associated with the VMs it is hosting, and this state information can easily be protected as sealed storage (blobs).
VM: VMs can be “packaged” as verifiable and attestable state for instantiation, and in general all VM instantiations can be realized as such.
Operating System: OS images can be "packaged" as verifiable and attestable state for instantiation, and for certain applications a "clean" OS image is appropriate. But in general the OS image state will mutate, and one or more of the dynamic attestation techniques described above will be applied.
Application: Like OS images, application images can be "packaged" as verifiable and attestable state for instantiation, and in most instances a "clean" application image is appropriate (with user preferences being the only state that typically mutates). In cases where the application image state does mutate, one or more of the dynamic attestation techniques described above will be applied.
The foregoing discussion discloses and describes merely exemplary methods and embodiments of the present invention. As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application Ser. No. 60/979,728, “Distributed Trusted Virtualization Platform,” filed Oct. 12, 2007 by Peter F. Foley et al. and to U.S. Provisional Patent Application Ser. No. 60/999,056, “Distributed Trusted Virtualization Platform,” filed Oct. 15, 2007 by Peter F. Foley et al. The subject matter of all of the foregoing is incorporated herein by reference in their entirety.
Number | Date | Country
60979728 | Oct 2007 | US
60999056 | Oct 2007 | US