Demonstrating integrity of a compartment of a compartmented operating system

Information

  • Patent Grant
    9633206
  • Patent Number
    9,633,206
  • Date Filed
    Friday, June 7, 2002
  • Date Issued
    Tuesday, April 25, 2017
  • Examiners
    • Hayes; John
    • Winter; John M
  • Agents
    • HP Patent Department
Abstract
A computing platform 20 runs a compartmented operating system 22 and includes a trusted device 23 for forming an integrity metric which a user can interrogate to confirm integrity of the operating system. Also, the integrity of an individual compartment 24 is verified by examining status information for that compartment including, for example, the identity of any open network connections, the identity of any running processes, and the status of a section of file space allocated to that compartment 24. Hence, the integrity of an individual compartment 24 of the compartmented operating system 22 can be demonstrated.
Description
CROSS REFERENCE TO RELATED APPLICATIONS

The subject matter of the present application may also be related to the following U.S. Patent Applications: “Operation of Trusted State in Computing Platform,” Ser. No. 09/728,827, filed Nov. 28, 2000; “Performance of a Service on a Computing Platform,” Ser. No. 09/920,554, filed Aug. 1, 2001; “Secure E-Mail Handling Using a Compartmented Operating System,” Ser. No. 10/075,444, filed Feb. 15, 2002; “Electronic Communication,” Ser. No. 10/080,466, filed Feb. 22, 2002; “Multiple Trusted Computing Environments with Verifiable Environment Entities,” Ser. No. 10/175,183, filed Jun. 18, 2002; “Renting a Computing Environment on a Trusted Computing Platform,” Ser. No. 10/175,185, filed Jun. 18, 2002; “Interaction with Electronic Services and Markets,” Ser. No. 10/175,395, filed Jun. 18, 2002; “Multiple Trusted Computing Environments,” Ser. No. 10/175,542, filed Jun. 18, 2002; “Performing Secure and Insecure Computing Operations in a Compartmented Operating System,” Ser. No. 10/175,553, filed Jun. 18, 2002; “Privacy of Data on a Computer Platform,” Ser. No. 10/206,812, filed Jul. 26, 2002; “Trusted Operating System,” Ser. No. 10/240,137, filed Sep. 26, 2002; “Trusted Gateway System,” Ser. No. 10/240,139, filed Sep. 26, 2002; and “Apparatus and Method for Creating a Trusted Environment,” Ser. No. 10/303,690, filed Nov. 21, 2002.


FIELD OF THE INVENTION

The present invention relates in general to a method for demonstrating the integrity of a compartment of a compartmented operating system, and to a trusted device and computing platform for performing the same.


BACKGROUND OF THE INVENTION

Compartmented operating systems have been available for several years in a form designed for handling and processing classified (military) information, using a containment mechanism enforced by a kernel of the operating system with mandatory access controls to resources of the computing platform such as files, processes and network connections. The operating system attaches labels to the resources and enforces a policy which governs the allowed interaction between these resources based on their label values. Most compartmented operating systems apply a policy based on the Bell-LaPadula model discussed in the paper “Applying Military Grade Security to the Internet” by C I Dalton and J F Griffin published in Computer Networks and ISDN Systems 29 (1997) 1799-1808.


Whilst a compartmented operating system is secure, offering a relatively high degree of containment, it is desired to provide a method for demonstrating the integrity of a compartment. In particular, it is desired to demonstrate that a compartment is in a trusted state and will operate in a predicted manner. As one example, it is desired to confirm that the compartment is free from subversion, either arising inadvertently or through an unauthorised attack.


SUMMARY OF THE INVENTION

An aim of the present invention is to provide a method for demonstrating the integrity of an operating system compartment. Another aim is to provide a computing platform allowing demonstration of the integrity of an operating system compartment.


According to a first aspect of the present invention there is provided a method for demonstrating integrity of an operating system compartment in a computing platform having a trusted device, comprising the steps of: (a) providing a host operating system; (b) confirming a status of the host operating system using the trusted device; (c) providing a compartment of the host operating system; and (d) confirming a status of the compartment.


The step (b) preferably comprises providing an integrity metric of the host operating system, which may be compared against an integrity metric in a previously formed certificate, to verify integrity of the host operating system. Preferably, the integrity metric is formed by the trusted device.


The step (d) preferably comprises providing a status metric of the compartment, which may be compared against a status metric in a previously formed certificate, to verify integrity of the compartment. Here, the step (d) comprises comparing the current state of the compartment against an expected state. The status metric is formed by the host operating system, or preferably is formed by the trusted device.


Preferably, the step (d) comprises providing information about the current state of the compartment, including information about any one or more of (i) a section of file space allocated to the compartment; (ii) any processes allocated to the compartment; or (iii) any communication interfaces allocated to the compartment. Preferably, the step (d) comprises confirming that the compartment only has access to an expected section of file space. Preferably, the step (d) comprises confirming that the allocated section of file space is in an expected condition. Preferably, the step (d) comprises confirming that only an expected process or processes are allocated to the compartment. Preferably, the step (d) comprises confirming that only expected IPC channels are open. Preferably, the step (d) comprises confirming that only expected communication interfaces are allocated to the compartment.


Also according to this first aspect of the present invention there is provided a method for use in a computing platform having a trusted device, the method comprising the steps of: (a) providing a host operating system; (b) verifying a status of the host operating system by comparing an integrity metric formed by the trusted device against an integrity metric in a previously formed certificate; (c) providing a compartment of the host operating system; and (d) verifying a status of the compartment by comparing a status metric formed by the trusted device against a status metric in a previously formed certificate.


According to a second aspect of the present invention there is provided a computing platform, comprising: a host operating system; at least one compartment provided by the host operating system; a trusted device arranged to confirm a status of the host operating system; and a status unit arranged to confirm a status of the compartment.


Preferably, the trusted device forms an integrity metric of the host operating system. Preferably, the trusted device forms the integrity metric during boot of the host operating system. Optionally, the integrity metric is updated periodically while the host operating system is running.


Preferably, the status unit comprises at least one of the host operating system or the trusted device.


Preferably, the status unit provides a current status of the compartment to be compared against an expected status. The status unit ideally provides a status metric based on the current status. Preferably, the current status identifies any one or more of (i) a section of file space allocated to the compartment, (ii) any processes allocated to the compartment, (iii) any IPC channels open for any process allocated to the compartment, or (iv) any communication interfaces allocated to the compartment.


Preferably, the status unit confirms a condition of the section of file space allocated to the compartment. Preferably, the condition of the section of file space allocated to the compartment is used to determine whether the section of file space has been corrupted.


Also according to this second aspect of the present invention there is provided a computing platform, comprising: a host operating system; a compartment provided by the host operating system; and a trusted device arranged to obtain an integrity metric of the host operating system for comparison against an integrity metric in a previously formed certificate, and arranged to obtain a status metric of the compartment for comparison against a status metric in a previously formed certificate.


Further, according to a third aspect of the present invention there is provided a trusted device for use in a computing platform providing a host operating system having at least one compartment, the trusted device comprising: means arranged in use to obtain an integrity metric of the host operating system for comparison against an integrity metric in a previously formed certificate; and means arranged in use to obtain a status metric of the compartment for comparison against a status metric in a previously formed certificate.





BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of the invention, and to show how embodiments of the same may be carried into effect, reference will now be made, by way of example, to the accompanying diagrammatic drawings in which:



FIG. 1 shows an example computing platform; and



FIG. 2 shows a preferred method for demonstrating integrity of an operating system compartment.





DESCRIPTION OF PREFERRED EMBODIMENTS


FIG. 1 shows an example computing platform 20 employed in preferred embodiments of the present invention. The computing platform 20 comprises hardware 21 operating under the control of a host operating system 22. The hardware 21 may include standard features such as a keyboard, a mouse and a visual display unit which provide a physical user interface 211 to a local user of the computing platform. The hardware 21 also suitably comprises a computing unit 212 including a main processor, a main memory, an input/output device and a file storage device which together allow the performance of computing operations. Other parts of the computing platform are not shown, such as connections to a local or global network. This is merely one example form of computing platform and many other specific forms of hardware are applicable to the present invention.


In the preferred embodiment the hardware 21 includes a trusted device 213. The trusted device 213 is suitably a physical component such as an application specific integrated circuit (ASIC). Preferably the trusted device is mounted within a tamper-resistant housing. The trusted device 213 is coupled to the computing unit 212, and ideally to the local user interface unit 211. The trusted device 213 is preferably mounted on a motherboard of the computing unit 212. The trusted device 213 functions to bind the identity of the computing platform 20 to reliably measured data that provides an integrity metric of the platform.


Preferably, the trusted device 213 performs a secure boot process when the computing platform 20 is reset to ensure that the operating system 22 of the platform 20 is running properly and in a secure manner. During the secure boot process, the trusted device 213 acquires the integrity metric of the operating system 22 by examining operation of the computing unit 212 and the local user interface unit 211. The integrity metric is then available for a user to determine whether to trust the computing platform to operate in a predicted manner. In particular, a trusted computing platform is expected not to be subject to subversion such as by a virus or by unauthorised access.


WO 00/48063 (Hewlett-Packard) discloses an example computing platform suitable for use in preferred embodiments of the present invention. In this example the trusted device 213 acquires a hash of a BIOS memory of the computing unit 212 after reset. The trusted device 213 receives memory read signals from the main processor and returns instructions for the main processor to form the hash. The hash is stored in the trusted device 213, which then returns an instruction that calls the BIOS program and a boot procedure continues as normal.
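By way of illustration only, the measured-boot idea described above can be modelled with a short Python sketch in which a register inside a simulated trusted device is extended with a hash of the code being measured. The class, method names and hash-extend scheme below are assumptions made for illustration, not the specific mechanism disclosed here.

```python
# Illustrative model of forming an integrity metric during boot: hash the
# measured data (e.g. an image of BIOS memory) and fold it into a register.
# All names are hypothetical.
import hashlib

class TrustedDeviceModel:
    def __init__(self):
        self.register = b"\x00" * 32          # measurement register, initially zero

    def measure(self, data: bytes) -> None:
        digest = hashlib.sha256(data).digest()
        # Extend: the new register value depends on the old value and the new measurement.
        self.register = hashlib.sha256(self.register + digest).digest()

    def integrity_metric(self) -> str:
        return self.register.hex()

device = TrustedDeviceModel()
device.measure(b"...BIOS image bytes...")     # stand-in for the real BIOS memory
print("integrity metric:", device.integrity_metric())
```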


Preferably, the trusted device 213 controls the local user interface 211 such that a local user can trust the display of data provided on a visual display unit. WO 00/73913 (Hewlett-Packard) discloses an example system for providing a trustworthy user interface by locating a driver for the visual display unit within the trusted device 213.


The hardware 21 may also comprise a trusted user interface for performing secure communication with a user device such as a smart card held by the user. The trusted user interface allows the user to perform trusted communications with the trusted device 213 in order to verify the integrity of the computing platform 20. The use of a smart card or other token for trusted local user interaction is described in more detail in WO 00/54125 (Hewlett-Packard) and WO 00/54126 (Hewlett-Packard).


The computing platform 20 provides a computing environment 24 which gives access to resources of the computing platform, such as processor time, memory area, and filespace. Preferably, a plurality of discrete computing environments 24 are provided. Each computing environment is logically distinct, but shares access to at least some of the resources of the computing platform with other computing environments.


Suitably, the computing environment 24 runs as a compartment. The actions or privileges within a compartment are constrained, particularly to restrict the ability of a process to execute methods and operations which have effect outside the compartment 24, such as methods that request network access or access to files outside of the compartment. Also, operation of the process within the compartment is performed with a high level of isolation from interference and prying by outside influences.


Preferably, the compartment is an operating system compartment controlled by a kernel of the host operating system 22. This is also referred to as a compartmented operating system or a trusted operating system.


The preferred embodiment of the present invention adopts a simple and convenient form of operating system compartment. Each resource of the computing platform which it is desired to protect is given a label indicating the compartment to which that resource belongs. Mandatory access controls are performed by the kernel of the host operating system to ensure that resources from one compartment cannot interfere with resources from another compartment. Access controls can follow relatively simple rules, such as requiring an exact match of the label. Examples of resources include data structures describing individual processes, shared memory segments, semaphores, message queues, sockets, network packets, network interfaces and routing table entries.
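As an informal illustration of the exact-match rule mentioned above, the sketch below models the label check in Python; in practice this check is enforced inside the kernel of the host operating system, and the labels shown are hypothetical.

```python
# Minimal sketch of a mandatory access control check based on compartment labels.
def access_allowed(subject_label: str, resource_label: str) -> bool:
    # Simplest policy: access requires an exact match of labels.
    return subject_label == resource_label

# A process labelled "web" may only touch resources also labelled "web".
assert access_allowed("web", "web")
assert not access_allowed("web", "mail")
```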


Communication between processes is controlled by IPC (Inter-Process Communication) channels. Communication between compartments is provided using narrow kernel level controlled interfaces to a transport mechanism such as TCP/UDP. Access to these communication interfaces is governed by rules specified on a compartment by compartment basis. At appropriate points in the kernel, access control checks are performed such as through the use of hooks to a dynamically loadable security module that consults a table of rules indicating which compartments are allowed to access the resources of another compartment. In the absence of a rule explicitly allowing a cross compartment access to take place, an access attempt is denied by the kernel. The rules enforce mandatory segmentation across individual compartments, except for those compartments that have been explicitly allowed to access another compartment's resources. Communication between a compartment and a network resource is provided in a similar manner. In the absence of an explicit rule, access between a compartment and a network resource is denied.
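The default-deny rule table consulted at these kernel hooks can be pictured with the following sketch; the rule format and compartment names are hypothetical, and a real implementation performs the lookup in kernel space rather than in application code.

```python
# Illustrative rule table: cross-compartment or network access succeeds only
# if an explicit rule allows it; anything not listed is denied.
ALLOWED_ROUTES = {
    ("web", "mail"),          # compartment "web" may contact compartment "mail"
    ("web", "net:tcp:80"),    # compartment "web" may use outbound TCP port 80
}

def cross_access_allowed(source: str, target: str) -> bool:
    return (source, target) in ALLOWED_ROUTES   # default deny

assert cross_access_allowed("web", "mail")
assert not cross_access_allowed("mail", "web")  # no reverse rule, so denied
```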


Suitably, each compartment is allocated an individual section of a file system of the computing platform. For example, the section is a chroot of the main file system. Processes running within a particular compartment only have access to that section of the file system. Advantageously, through kernel controls, a process is restricted to the predetermined section of file system and cannot escape. In particular, access to the root of the file system is denied.
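A minimal sketch of this confinement is given below, assuming a per-compartment directory and root privileges; the path is hypothetical, and the kernel controls described above go beyond what chroot alone provides.

```python
# Confine the calling process to a compartment's section of the file system.
import os

def enter_compartment_filespace(path: str) -> None:
    os.chroot(path)   # the compartment's directory becomes the root
    os.chdir("/")     # "/" now refers only to that section of file space

# Example (requires root privileges and an existing directory):
# enter_compartment_filespace("/compartments/web")
```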


Advantageously, a compartment provides a high level of containment, whilst reducing implementation costs and changes required in order to implement an existing application within the compartment.


Referring to FIG. 1, it is desired to run a process 23 in one of the computing environments 24. In practical embodiments, many processes run on the computing platform simultaneously. Some processes are grouped together to form an application or service.


Each computing environment is suitably an operating system compartment 24 that contains a section of file space, a group of one or more processes, and a set of allowed communication interfaces to other compartments and to network resources. It is desired to demonstrate the integrity of a compartment by confirming that the compartment is in an expected state. In one example, the expected state requires that any one or more of the following are in a particular condition, namely: (a) the compartment only has access to an expected section of file space; (b) the predetermined section of file space is in an expected condition, e.g. has not been corrupted by a virus; (c) only the expected process or processes are running; (d) only expected IPC channels are open; and (e) only expected communication interfaces are available. Preferably, information about any one or more of these conditions, or other suitable criteria, is combined to form a status metric. Suitably, the status metric is individual to the compartment 24 and describes the current status of that compartment.
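Purely as an illustration of combining such information into a single status metric, the sketch below canonically encodes a few of the listed items and hashes the result; the inputs are hypothetical stand-ins for data that the kernel or trusted device would supply.

```python
# Fold compartment state (file space digest, processes, interfaces) into one metric.
import hashlib, json

def status_metric(file_space_digest: str,
                  process_names: list[str],
                  open_interfaces: list[str]) -> str:
    state = {
        "file_space": file_space_digest,
        "processes": sorted(process_names),
        "interfaces": sorted(open_interfaces),
    }
    # Canonical encoding so that identical state always yields an identical metric.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

current = status_metric("ab12...", ["httpd"], ["tcp:80"])
expected = status_metric("ab12...", ["httpd"], ["tcp:80"])
print("compartment in expected state:", current == expected)
```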


The status metric can include many elements. For example, data event logging is performed as described in WO 00/73880 (Hewlett-Packard) and applied specifically to the compartment 24. Also, a file digest is produced by applying a hash function to one or more data files stored in the section of file space allocated to the compartment, as described in WO 00/73904 (Hewlett-Packard).
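A minimal sketch of such a file digest follows, assuming it is formed by hashing every file in the compartment's section of file space in a deterministic order; the directory path is hypothetical.

```python
# Digest of the files in a compartment's section of file space.
import hashlib, os

def file_space_digest(root: str) -> str:
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in sorted(os.walk(root)):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h.update(path.encode())             # include the file name
            with open(path, "rb") as f:
                h.update(f.read())              # and its contents
    return h.hexdigest()

# print(file_space_digest("/compartments/web"))
```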


Preferably, the status metric is formed by the trusted device 213. To achieve the status metric, the trusted device 213 communicates with other components of the computing platform 20 such as the computing unit 212.


Preferably, information about the compartment used to form the status metric is gathered in response to hooks (e.g. IOCTLs, syscalls) into a kernel of the host operating system 22.
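The hooks themselves are kernel-level; purely as a rough user-space approximation, the sketch below enumerates running processes from /proc on Linux. Filtering the list down to a single compartment is omitted, since that relies on the kernel interfaces described above.

```python
# Enumerate running process identifiers from /proc (Linux only).
import os

def running_pids() -> list[int]:
    proc = "/proc"
    if not os.path.isdir(proc):     # other platforms: nothing to report
        return []
    return sorted(int(entry) for entry in os.listdir(proc) if entry.isdigit())

print("running processes:", len(running_pids()))
```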



FIG. 2 shows a preferred method for demonstrating the integrity of a compartment 24.


The method can be initiated by a local user of the computing platform 20, or a remote user coupled directly or indirectly to the computing platform 20.


Optionally, in step 201 authentication and authorisation checks are made to confirm that the party requesting demonstration of the integrity of a compartment is allowed access to that information.


In step 202, the integrity of the computing platform is verified. In particular, the integrity of the host operating system 22 is verified. Preferably, the trusted device 213 provides an integrity metric of the host operating system 22.


In step 203, the status of a compartment 24 of the host operating system 22 is verified. Compartment status verification suitably includes providing access to information about the compartment or, preferably, providing a status metric containing the information in a specified form.


Preferably, the integrity metric of the host operating system and/or the status metric of the compartment are each compared against a certificate issued by a trusted party that is prepared to vouch for the integrity of the computing platform. A challenge and response may occur, such as the user sending a random number sequence to the computing platform and receiving the random number in return in an encoded format. If the verification is successful, the computing platform is considered a trusted computing platform. The user trusts the computing platform because the user trusts the trusted party. The trusted party trusts the computing platform because the trusted party has previously validated the identity and determined the proper integrity metric of the platform. More detailed background information concerning an example method for verifying the computing platform and the host operating system using an integrity metric is given in WO 00/48063 (Hewlett-Packard). A similar approach is adopted to verify the status metric of the compartment.
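A minimal sketch of such a challenge and response is given below. A shared-key HMAC stands in for the trusted device's signature, and the key, metric values and message format are hypothetical; a real platform would respond with an asymmetric signature that the user checks against the certificate issued by the trusted party.

```python
# Challenge-response: the user sends a nonce, the platform returns its metrics
# bound to that nonce, and the user compares them with a certificate.
import hashlib, hmac, json, os

DEVICE_KEY = b"hypothetical-trusted-device-key"

def platform_respond(nonce: bytes, os_metric: str, compartment_metric: str) -> dict:
    payload = json.dumps({"nonce": nonce.hex(),
                          "os": os_metric,
                          "compartment": compartment_metric}).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def user_verify(response: dict, nonce: bytes, certificate: dict) -> bool:
    expected_tag = hmac.new(DEVICE_KEY, response["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_tag, response["tag"]):
        return False                                  # not from the trusted device
    reported = json.loads(response["payload"])
    return (reported["nonce"] == nonce.hex()          # fresh, not a replay
            and reported["os"] == certificate["os"]
            and reported["compartment"] == certificate["compartment"])

nonce = os.urandom(16)
certificate = {"os": "aa11...", "compartment": "bb22..."}
response = platform_respond(nonce, "aa11...", "bb22...")
print("platform trusted:", user_verify(response, nonce, certificate))
```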


A method and computing platform have been described which allow demonstration of the integrity of a compartment of a compartmented operating system. Advantageously, a chain of trust is established firstly by verifying the host operating system, and then by verifying a particular compartment of the host operating system.

Claims
  • 1. A method for demonstrating integrity of an operating system compartment in a computing platform having a trusted device, comprising the steps of: (a) providing a host operating system of the computing platform; (b) determining a host operating system status of the host operating system using the trusted device; (c) providing a compartment of the host operating system; and (d) determining, by a processor, whether resources assigned to the compartment have been interfered with by resources from outside the compartment, the resources comprising at least a computer process assigned to the compartment; and (e) defining a compartment status based on the determining in step (d), wherein the step (d) comprises comparing a current state of the compartment against an expected state, providing information about the current state of the compartment, including information about at least one of (i) a section of file space allocated to the compartment, (ii) any processes allocated to the compartment, and (iii) any communication interfaces allocated to the compartment, and at least one of: confirming that the compartment has access only to an expected section of file space; confirming that the allocated section of file space is in an expected condition; confirming that only an expected process or processes are allocated to the compartment; and confirming that only an expected communication interface or communication interfaces are allocated to the compartment.
  • 2. The method of claim 1, comprising providing a status metric representing the current state of the compartment.
  • 3. The method of claim 2, comprising providing the status metric from the trusted device of the computing platform.
  • 4. The method of claim 1, comprising confirming for each process allocated to the compartment that only an expected Inter-Process Communication channel or channels are open.
  • 5. The method of claim 1, wherein the trusted device is arranged to obtain an integrity metric of the host operating system for comparison against a previously formed certificate issued by a trusted party.
  • 6. The method of claim 5 further including reporting to a user of the computing platform the results of the comparison made against the previously formed certificate issued by the trusted party.
  • 7. The method of claim 1, wherein the step of providing the host operating system includes providing a motherboard, wherein said trusted device is an Application Specific Integrated Circuit (ASIC) and wherein the step of providing the host operating system further includes the step of mounting said ASIC on said motherboard.
  • 8. The method of claim 1, wherein an analysis of the compartment is based upon an analysis of the host operating system.
  • 9. A computing platform, comprising: a host operating system; at least one compartment provided by the host operating system; a trusted device to determine a host operating system status of the host operating system; and a status unit to determine whether resources assigned to the compartment have been interfered with by resources from outside the compartment, the resources comprising at least a computer process assigned to the compartment, and define a compartment status based on the determination of whether the resources assigned to the compartment have been interfered with by resources from outside the compartment, wherein to determine whether the resources assigned to the compartment have been interfered with by resources from outside the compartment, the status unit is to: compare a current state of the compartment against an expected state, provide information about the current state of the compartment, including information about at least one of (i) a section of file space allocated to the compartment, (ii) any processes allocated to the compartment, and (iii) any communication interfaces allocated to the compartment, and at least one of: confirm that the compartment has access only to an expected section of file space, confirm that the allocated section of file space is in an expected condition, confirm that only an expected process or processes are allocated to the compartment, and confirm that only an expected communication interface or communication interfaces are allocated to the compartment.
  • 10. The computing platform of claim 9, wherein the trusted device forms an integrity metric of the host operating system to be compared against an expected status.
  • 11. The computing platform of claim 9, wherein the status unit comprises at least one of the host operating system or the trusted device.
  • 12. The computing platform of claim 9, wherein the status unit provides a current status of the compartment to be compared against an expected status.
  • 13. The computing platform of claim 12, wherein the status unit provides a status metric.
  • 14. The computing platform of claim 12, wherein the current status identifies at least one of (i) a section of file space allocated to the compartment, (ii) any processes allocated to the compartment, (iii) any IPC channels open for any process allocated to the compartment, or (iv) any communication interfaces allocated to the compartment.
  • 15. The computing platform of claim 14, wherein the status unit confirms a condition of the section of file space allocated to the compartment.
  • 16. The computing platform of claim 15, wherein the condition of the section of file space allocated to the compartment is used to determine whether the section of file space has been corrupted.
  • 17. The computing platform of claim 9, wherein the computing platform includes a motherboard and the trusted device comprises an Application Specific Integrated Circuit (ASIC) mounted on said motherboard.
  • 18. The computing platform of claim 9, wherein an analysis of the compartment by the status unit is based upon an analysis of the host operating system by the trusted device.
Priority Claims (1)
Number Date Country Kind
0114885.7 Jun 2001 GB national
US Referenced Citations (195)
Number Name Date Kind
4747040 Blanset et al. May 1988 A
4799156 Shavit et al. Jan 1989 A
4926476 Covey May 1990 A
4962533 Krueger et al. Oct 1990 A
4984272 McIlroy et al. Jan 1991 A
5029206 Marino et al. Jul 1991 A
5032979 Hecht et al. Jul 1991 A
5038281 Peters Aug 1991 A
5136711 Hugard et al. Aug 1992 A
5144660 Rose Sep 1992 A
5210795 Lipner et al. May 1993 A
5261104 Bertram et al. Nov 1993 A
5278973 O'Brien et al. Jan 1994 A
5325529 Brown et al. Jun 1994 A
5359659 Rosenthal Oct 1994 A
5361359 Tajalli et al. Nov 1994 A
5379342 Arnold et al. Jan 1995 A
5404532 Allen et al. Apr 1995 A
5410707 Bell Apr 1995 A
5414860 Canova et al. May 1995 A
5421006 Jablon et al. May 1995 A
5440723 Arnold et al. Aug 1995 A
5444850 Chang Aug 1995 A
5448045 Clark Sep 1995 A
5454110 Kannan et al. Sep 1995 A
5473692 Davis Dec 1995 A
5483649 Kuznetsov et al. Jan 1996 A
5495569 Kotzur Feb 1996 A
5497490 Harada et al. Mar 1996 A
5497494 Combs et al. Mar 1996 A
5504814 Miyahara Apr 1996 A
5504910 Wisor et al. Apr 1996 A
5530758 Marino et al. Jun 1996 A
5535411 Speed et al. Jul 1996 A
5537540 Miller et al. Jul 1996 A
5548763 Combs et al. Aug 1996 A
5555373 Dayan et al. Sep 1996 A
5572590 Chess Nov 1996 A
5619571 Sandstrom et al. Apr 1997 A
5657390 Elgamal et al. Aug 1997 A
5680452 Shanton Oct 1997 A
5680547 Chang Oct 1997 A
5692124 Holden et al. Nov 1997 A
5694590 Thuraisingham et al. Dec 1997 A
5771354 Crawford Jun 1998 A
5787175 Carter Jul 1998 A
5796841 Cordery et al. Aug 1998 A
5809145 Slik Sep 1998 A
5812669 Jenkins et al. Sep 1998 A
5815665 Teper et al. Sep 1998 A
5825890 Elgamal et al. Oct 1998 A
5828751 Walker et al. Oct 1998 A
5841869 Merkling et al. Nov 1998 A
5844986 Davis Dec 1998 A
5845068 Winiger Dec 1998 A
5864683 Boebert et al. Jan 1999 A
5867646 Benson et al. Feb 1999 A
5883956 Le et al. Mar 1999 A
5887163 Nguyen et al. Mar 1999 A
5889989 Robertazzi et al. Mar 1999 A
5892900 Ginter et al. Apr 1999 A
5903732 Reed et al. May 1999 A
5913024 Green et al. Jun 1999 A
5915019 Ginter et al. Jun 1999 A
5915021 Herlin et al. Jun 1999 A
5917360 Yasutake Jun 1999 A
5917912 Ginter et al. Jun 1999 A
5922074 Richard et al. Jul 1999 A
5923756 Shambroom Jul 1999 A
5923763 Walker et al. Jul 1999 A
5933498 Schneck et al. Aug 1999 A
5949876 Ginter et al. Sep 1999 A
5960177 Tanno Sep 1999 A
5968136 Saulpaugh et al. Oct 1999 A
5982891 Ginter et al. Nov 1999 A
5987605 Hill et al. Nov 1999 A
5987608 Roskind Nov 1999 A
5991414 Garay et al. Nov 1999 A
5996076 Rowney et al. Nov 1999 A
6003084 Green et al. Dec 1999 A
6006332 Rabne et al. Dec 1999 A
6012080 Ozden et al. Jan 2000 A
6023689 Herlin et al. Feb 2000 A
6023765 Kuhn Feb 2000 A
6049878 Caronni et al. Apr 2000 A
6067559 Allard et al. May 2000 A
6078948 Podgorny et al. Jun 2000 A
6079016 Park Jun 2000 A
6081830 Schindler Jun 2000 A
6081894 Mann Jun 2000 A
6081900 Subramaniam et al. Jun 2000 A
6092202 Veil et al. Jul 2000 A
6100738 Illegems Aug 2000 A
6105131 Carroll Aug 2000 A
6115819 Anderson Sep 2000 A
6125114 Blanc et al. Sep 2000 A
6134328 Cordery et al. Oct 2000 A
6138239 Veil Oct 2000 A
6154838 Le et al. Nov 2000 A
6157719 Wasilewski et al. Dec 2000 A
6157721 Shear et al. Dec 2000 A
6175917 Arrow et al. Jan 2001 B1
6185678 Arbaugh et al. Feb 2001 B1
6185683 Ginter et al. Feb 2001 B1
6189103 Nevarez et al. Feb 2001 B1
6192472 Garay et al. Feb 2001 B1
6195751 Caronni et al. Feb 2001 B1
6198824 Shambroom Mar 2001 B1
6211583 Humphreys Apr 2001 B1
6237786 Ginter et al. May 2001 B1
6253193 Ginter et al. Jun 2001 B1
6263438 Walker et al. Jul 2001 B1
6272631 Thomlinson et al. Aug 2001 B1
6275848 Arnold Aug 2001 B1
6282648 Walker et al. Aug 2001 B1
6289453 Walker et al. Sep 2001 B1
6289462 McNabb et al. Sep 2001 B1
6292569 Shear et al. Sep 2001 B1
6292900 Ngo et al. Sep 2001 B1
6304970 Bizzaro et al. Oct 2001 B1
6314409 Schneck et al. Nov 2001 B2
6314519 Davis et al. Nov 2001 B1
6327579 Crawford Dec 2001 B1
6327652 England et al. Dec 2001 B1
6330669 McKeeth Dec 2001 B1
6330670 England et al. Dec 2001 B1
6367012 Atkinson et al. Apr 2002 B1
6393412 Deep May 2002 B1
6446203 Aguilar et al. Sep 2002 B1
6449716 Rickey Sep 2002 B1
6477702 Yellin et al. Nov 2002 B1
6487601 Hubacher et al. Nov 2002 B1
6496847 Bugnion et al. Dec 2002 B1
6505300 Chan et al. Jan 2003 B2
6513156 Bak et al. Jan 2003 B2
6519623 Mancisidor Feb 2003 B1
6530024 Proctor Mar 2003 B1
6609248 Srivastava et al. Aug 2003 B1
6622018 Erekson Sep 2003 B1
6671716 Diedrichsen et al. Dec 2003 B1
6678833 Grawrock Jan 2004 B1
6681304 Vogt et al. Jan 2004 B1
6701440 Kim et al. Mar 2004 B1
6732276 Cofler et al. May 2004 B1
6751680 Langerman et al. Jun 2004 B2
6757824 England Jun 2004 B1
6757830 Tarbotton et al. Jun 2004 B1
6775779 England et al. Aug 2004 B1
6847995 Hubbard et al. Jan 2005 B1
6892307 Wood et al. May 2005 B1
6931545 Ta et al. Aug 2005 B1
6948069 Teppler Sep 2005 B1
6965816 Walker Nov 2005 B2
6988250 Proudler et al. Jan 2006 B1
7058807 Grawrock et al. Jun 2006 B2
7194092 England et al. Mar 2007 B1
7877799 Proudler Jan 2011 B2
8037380 Cagno et al. Oct 2011 B2
20010037450 Metlitski et al. Nov 2001 A1
20020012432 England et al. Jan 2002 A1
20020023212 Proudler Feb 2002 A1
20020042874 Arora Apr 2002 A1
20020059286 Challener May 2002 A1
20020069354 Fallon et al. Jun 2002 A1
20020120575 Pearson et al. Aug 2002 A1
20020144104 Springfield et al. Oct 2002 A1
20020180778 Proudler Dec 2002 A1
20020184486 Kershenbaum et al. Dec 2002 A1
20020184520 Bush et al. Dec 2002 A1
20020188763 Griffin Dec 2002 A1
20020194482 Griffin et al. Dec 2002 A1
20020194496 Griffin et al. Dec 2002 A1
20030009685 Choo et al. Jan 2003 A1
20030014372 Wheeler et al. Jan 2003 A1
20030014466 Berger et al. Jan 2003 A1
20030023872 Chen et al. Jan 2003 A1
20030037233 Pearson Feb 2003 A1
20030037246 Goodman et al. Feb 2003 A1
20030074548 Cromer et al. Apr 2003 A1
20030084285 Cromer et al. May 2003 A1
20030084436 Berger et al. May 2003 A1
20030145235 Choo Jul 2003 A1
20030191957 Hypponen et al. Oct 2003 A1
20030196083 Grawrock et al. Oct 2003 A1
20030196110 Lampson et al. Oct 2003 A1
20030226031 Proudler et al. Dec 2003 A1
20030226040 Challener et al. Dec 2003 A1
20040003288 Wiseman et al. Jan 2004 A1
20040039924 Baldwin et al. Feb 2004 A1
20040045019 Bracha et al. Mar 2004 A1
20040073617 Milliken et al. Apr 2004 A1
20040073806 Zimmer Apr 2004 A1
20040083366 Nachenberg et al. Apr 2004 A1
20040148514 Fee et al. Jul 2004 A1
20050256799 Warsaw et al. Nov 2005 A1
Foreign Referenced Citations (58)
Number Date Country
2 187 855 Jun 1997 CA
0 304 033 Feb 1989 EP
0 421 409 Apr 1991 EP
0 510 244 Oct 1992 EP
0 580 350 Jan 1994 EP
0 825 511 Feb 1998 EP
0 849 657 Jun 1998 EP
0 849 680 Jun 1998 EP
0 465 016 Dec 1998 EP
0 893 751 Jan 1999 EP
0 895 148 Feb 1999 EP
0 926 605 Jun 1999 EP
0 992 958 Apr 2000 EP
1 030 237 Aug 2000 EP
1 056 014 Aug 2000 EP
1 049 036 Nov 2000 EP
1 055 990 Nov 2000 EP
1 056 010 Nov 2000 EP
1 076 279 Feb 2001 EP
1 107 137 Jun 2001 EP
2 317 476 Mar 1998 GB
2 336 918 Nov 1999 GB
0020441.2 Aug 2000 GB
2 353 885 Mar 2001 GB
2 361 153 Oct 2001 GB
9325024 Dec 1993 WO
9411967 May 1994 WO
9524696 Sep 1995 WO
9527249 Oct 1995 WO
9729416 Aug 1997 WO
9815082 Apr 1998 WO
9836517 Aug 1998 WO
9826529 Sep 1998 WO
9840809 Sep 1998 WO
9844402 Oct 1998 WO
9845778 Oct 1998 WO
0019324 Apr 2000 WO
0019324 Apr 2000 WO
0031644 Jun 2000 WO
0048062 Aug 2000 WO
0048063 Aug 2000 WO
0052900 Sep 2000 WO
0054125 Sep 2000 WO
0054125 Sep 2000 WO
0054126 Sep 2000 WO
0058859 Oct 2000 WO
0073880 Dec 2000 WO
0073880 Dec 2000 WO
0073904 Dec 2000 WO
0073904 Dec 2000 WO
0073913 Dec 2000 WO
0073913 Dec 2000 WO
0109781 Feb 2001 WO
0113198 Feb 2001 WO
0123980 Apr 2001 WO
0127722 Apr 2001 WO
0165334 Sep 2001 WO
0165366 Sep 2001 WO
Non-Patent Literature Citations (64)
Entry
U.S. Appl. No. 09/979,902, filed Nov. 27, 2001, Proudler, et al.
U.S. Appl. No. 09/979,903, filed Nov. 27, 2001, Proudler, et al.
U.S. Appl. No. 10/080,476, filed Feb. 22, 2002, Proudler, et al.
U.S. Appl. No. 10/080,477, filed Feb. 22, 2002, Brown, et al.
U.S. Appl. No. 10/080,478, filed Feb. 22, 2002, Pearson, et al.
U.S. Appl. No. 10/080,479, filed Feb. 22, 2002, Pearson, et al.
U.S. Appl. No. 10/165,840, filed Jun. 7, 2002, Dalton.
U.S. Appl. No. 10/194,831, filed Jul. 11, 2002, Chen, et al.
U.S. Appl. No. 10/240,138, filed Sep. 26, 2002, Choo.
Barkley, J., et al., “Managing Role/Permission Relationships Using Object Access Types,” ACM, pp. 73-80, Jul. 1998, retrieved Jun. 25, 2005.
Bontchev, V., “Possible Virus Attacks Against Integrity Programs and How to Prevent Them,” Virus Bulletin Conference, pp. 131-141 (Sep. 1992).
Grimm, R., et al., “Separating Access Control Policy, Enforcement, and Functionality in Extensible Systems,” ACM pp. 36-70, Feb. 2001, retrieved Jun. 25, 2005.
Jaeger, T., et al., “Requirements of Role-Based Access Control for Collaborative Systems,” ACM, pp. 53-64, Dec. 1996, retrieved Jun. 25, 2005.
Naor, M., et al., “Secure and Efficient Metering,” Internet: <http://citeseer.nj.com/naor98secure.html> Sections 1-1.3 (1998).
P.C Magazine Online; The 1999 Utility Guide: Desktop Antivirus; Norton Antivirus 5.0 DeLux, Internet.
Radai, Y., “Checksumming Techniques for Anti-Viral Purposes,” Virus Bulletin Conference, pp. 39-68 (Sep. 1991).
Schneck, P.B., “Persistent Access Control to Prevent Piracy of Digital Information,” Proceedings of the IEEE, vol. 87, No. 7, pp. 1239-1250 (Jul. 1999).
“System for Detecting Undesired Alteration of Software,” IBM Technical Bulletin, vol. 32, No. 11 pp. 48-50 (Apr. 1990).
The Trusted Computing Platform Alliance, “Building a Foundation of Trust in the PC,” 9 pages, located at Internet address <www.trustedpc.org/home/home.html> (Jan. 2000).
Zhang, N.X., et al., “Secure Code Distribution,” pp. 76-79, 1997 IEEE, retrieved Jun. 25, 2005.
U.S. Appl. No. 09/728,827, filed Nov. 28, 2000, Proudler et al.
U.S. Appl. No. 09/920,554, filed Aug. 1, 2001, Proudler.
U.S. Appl. No. 10/075,444, filed Feb. 15, 2002, Brown et al.
U.S. Appl. No. 10/080,466, filed Feb. 22, 2002, Pearson et al.
U.S. Appl. No. 10/175,183, filed Jun. 18, 2002, Griffin et al.
U.S. Appl. No. 10/175,185, filed Jun. 18, 2002, Pearson et al.
U.S. Appl. No. 10/175,395, filed Jun. 18, 2002, Pearson et al.
U.S. Appl. No. 10/175,542, filed Jun. 18, 2002, Griffin et al.
U.S. Appl. No. 10/175,553, filed Jun. 18, 2002, Griffin et al.
U.S. Appl. No. 10/206,812, filed Jul. 26, 2002, Proudler.
U.S. Appl. No. 10/240,137, filed Sep. 26, 2002, Dalton et al.
U.S. Appl. No. 10/240,139, filed Sep. 26, 2002, Choo et al.
U.S. Appl. No. 10/303,690, filed Nov. 21, 2002, Proudler et al.
Burke, J.P., “Security Suite Gives Sniffer Programs Hay Fever,” HP Professional, vol. 8, No. 9, 3 pages total (Sep. 1994).
Dalton, C.I. and J.F. Griffin, “Applying Military Grade Security to the Internet,” Proceedings JENCS- Computer Networks and ISDN Systems, vol. 29, pp. 1799-1808 (1999).
Ford, B., et al., “Microkernels Meet Recursive Virtual Machines”, Operating Systems Review, ACM, vol. 30, No. Special Issue, pp. 137-151 (Dec. 21, 1996).
Goldberg, R.P., “Survey of Virtual Machine Research”, Computer, IEEE Service Center, vol. 7 , No. 6, pp. 34-45 (Jun. 1974).
Popek, G. J., “Formal Requirements for Virtualizable Third Generation Architectures”, Communications of the Association for Computing Machinery, ACM, vol. 17, No. 7, pp. 412-421 (Jul. 1974).
EDS Timeline, The 1960's, at EDS.com.
Anderson, R., et al., “Tamper Resistance—a Cautionary Note,” ISENIX Association, Second USENIX Workshop on Electronic Commerce, pp. 1-11 (Nov. 18-21, 1996).
Berger, J.L., et al., “Compartmented Mode Workstation: Prototype Highlights,” IEEE Transactions on Software Engineering, vol. 16, No. 6 (Jun. 1990).
Chaum, D., “Security without Identification: Transaction Systems to Make Big Brother Obsolete,” Communications of the ACM, vol. 28, No. 10, pp. 1030-1044 (Oct. 1985).
Choo, T.H., et al., “Trusted Linux: A Secure Platform for Hosting Compartmented Applications,” Enterprise Solutions, pp. 1-14 (Nov./Dec. 2001).
Dalton, C., et al., “An operating system approach to securing e-services,” Communications of the ACM, vol. 44, Issue 2 (Feb. 2001).
Dalton, C.I., et al., “Design of secure UNIX,” Elsevier Information Security Report, (Feb. 1992).
Hallyn, S.E., et al., “Domain and Type Enforcement for Linux,” Internet: <http://www.usenix.org/publications/library/proceedings/als2000/full_papers/hallyn/hallyn_html/> (Retrieved Apr. 24, 2002).
Loscocco, P., et al., “Integrating Flexible Support for Security Policies into the Linux Operating System,” Internet: <www.nsa.gov/selinux> (Retrieved Apr. 24, 2002).
Milojicic, D., et al., “Process Migration,” Internet <http://www.hpl.hp.com/techreports/1999/HPL-1999-21.html.> pp. 1-48 (Dec. 5, 1998).
Scheibe, M., “TCPA Security: Trust your Platform!” Quarterly Focus PC Security, pp. 44-47. Internet: <http://www.silicon-trust.com/pdf/secure_PDF/Seite_44-47.pdf>.
Senie, D., “Using the Sock_Packet mechanism in Linux to gain complete control of an Ethernet Interface,” Internet: <http://www.senie.com/dan/technology/sock_packet.html> (Retrieved Apr. 24, 2002).
Wiseman, S., et al., “The Trusted Path between Smite and the User,” Proceedings 1988 IEEE Symposium on Security and Privacy, pp. 147-155 (Apr. 18-21, 1988).
Yee, B., “Using Secure Coprocessors,” Doctoral thesis, Carnegie Mellon University, pp. 1-94 (May 1994).
Boot Integrity Services Application Programming Interface, Version 1.0, Intel Corporation, pp. 1-60 (Dec. 28, 1998).
“Building a Foundation of Trust in the PC,” Trusted Computing Platform Alliance, pp. 1-7 (Jan. 2000).
“HP Virtualvault: Trusted Web-server Platform Product Brief,” Internet: <http://www.hp.com/security/products/virtualvault/papers/brief—4.0/> pp. 1-6.
“Information technology—Security techniques—Entity authentication; Part 3: Mechanisms using digital signature techniques,” ISO/IEC 9798-3, Second Edition, pp. 1-6 (1998).
“Information technology—Security techniques—Key management—Part 3: Mechanisms using asymmetric techniques,” ISO/IEC 11770-3, pp. 1-34 (1999).
“NIST Announces Technical Correction to Secure Hash Standard,” Internet: <http://www.nist.gov/public_affairs/releases/hashstan.htm> pp. 1-2 (Oct. 24, 2002).
“Norton AntiVirus 5.0 Delux,” PC Magazine Online; The 1999 Utility Guide: Desktop Antivirus, pp. 1-2, Internet: <http://www.zdnet.com/pcmag/features/utilities99/deskav07.html> (Retrieved Nov. 30, 2001).
“Secure Computing with JAVA™: Now and the Future,” <http://java.sun.com/marketing/collateral/security.html> pp. 1-29 (Apr. 2, 2002).
“Secure Execution Environments, Internet Safety through Type-Enforcing Firewalls,” Internet: <http://www.ghp.com/research/nailabs/secure-execution/internet-safety.asp> (Retrieved Apr. 24, 2002).
Sophos Anti-Virus for Notes/Domino release Note Version 2.0, pp. 1-2, Internet: <http://www.sophos.com/sophos/products/full/readmes/readnote.txt> (Retrieved Nov. 30, 2001).
Trusted Computing Platform Alliance (TCPA), Main Specification, Version 1.0, pp. 1-284 (2000).
Trusted Computing Platform Alliance (TCPA), TCPA Design Philosophies and Concepts, Version 1.0, Internet: <www.trustedpc.org> pp. 1-30 (Jan. 2001).
Related Publications (1)
Number Date Country
20020194493 A1 Dec 2002 US