The subject matter of the present application may also be related to the following U.S. Patent Applications: “Operation of Trusted State in Computing Platform,” Ser. No. 09/728,827, filed Nov. 28, 2000; “Performance of a Service on a Computing Platform,” Ser. No. 09/920,554, filed Aug. 1, 2001; “Secure E-Mail Handling Using a Compartmented Operating System,” Ser. No. 10/075,444, filed Feb. 15, 2002; “Electronic Communication,” Ser. No. 10/080,466, filed Feb. 22, 2002; “Demonstrating Integrity of a Compartment of a Compartmented Operating System,” Ser. No. 10/165,840, filed Jun. 7, 2002; “Multiple Trusted Computing Environments with Verifiable Environment Entities,” Ser. No. 10/175,183, filed Jun. 18, 2002; “Renting a Computing Environment on a Trusted Computing Platform,” Ser. No. 10/175,185, filed Jun. 18, 2002; “Interaction with Electronic Services and Markets,” Ser. No. 10/175,395, filed Jun. 18, 2002; “Performing Secure and Insecure Computing Operations in a Compartmented Operating System,” Ser. No. 10/175,553, filed Jun. 18, 2002; “Privacy of Data on a Computer Platform,” Ser. No. 10/206,812, filed Jul. 26, 2002; “Trusted Operating System,” Ser. No. 10/240,137, filed Sep. 26, 2002; “Trusted Gateway System,” Ser. No. 10/240,139, filed Sep. 26, 2002; and “Apparatus and Method for Creating a Trusted Environment,” Ser. No. 10/303,690, filed Nov. 21, 2002.
The present invention relates in general to a method for providing multiple computing environments running on a single host computing platform, and relates to a method for verifying integrity of the computing environments.
It is desired to run multiple applications on a single host computing platform such as a server. It is known to provide a separate logically distinct computing environment for each application. However, a problem arises when one application or its environment is incompatible with another application, or is not considered trusted by another application.
An aim of the present invention is to provide a method that allows multiple computing environments to be provided on a single host computing platform. A preferred aim is to provide a high degree of isolation between the multiple computing environments. Another preferred aim is to provide a method for verifying integrity of one computing environment independently of any other of the computing environments, such that each environment is independently trustworthy.
According to a first aspect of the present invention there is provided a method for providing a trusted computing environment, comprising the steps of: (a) providing a host operating system; (b) obtaining an integrity metric for the host operating system; (c) providing a computing environment including a guest operating system; and (d) obtaining an integrity metric for the computing environment.
Preferably, the step (b) includes obtaining the integrity metric during boot of the host operating system. Preferably, the step (b) includes obtaining an integrity metric for a BIOS and/or an OS loader and/or an operating system software of the host operating system. Preferably, the step (b) includes obtaining the integrity metric by performing data event logging, and/or by performing a hash function to all or selected data files associated with the host operating system. Preferably, the step (b) comprises updating at least part of the integrity metric for the host operating system.
Additionally, the step (d) comprises obtaining an integrity metric of the guest operating system. Suitably, the step (c) comprises providing a virtual machine application running on the host operating system for providing the guest operating system. Preferably, the step (d) comprises obtaining an integrity metric of the virtual machine application. Further, the step (c) comprises providing a process running on the guest operating system. Preferably, the step (d) comprises obtaining an integrity metric of the process.
In the preferred embodiments of the invention, the step (c) comprises providing the computing environment in a compartment of the host operating system. Preferably, the host operating system is a compartmented operating system. Suitably, the compartment confines the guest operating system. It is preferred that the step (d) comprises obtaining an integrity metric from a history of all processes launched in the compartment.
Preferably, the step (d) comprises updating at least part of the integrity metric for the computing environment. Preferably, the step (b) comprises storing the integrity metric for the host operating system, and/or the step (d) comprises storing the integrity metric for the computing environment. Preferably, the integrity metric for the computing environment is stored associated with an identity of the computing environment.
Preferably, the step (b) and/or the step (d) comprises obtaining the integrity metric using a trusted device, and storing the integrity metric in a platform configuration register of the trusted device. Preferably, the integrity metric for the computing environment is stored in a platform configuration register or group of platform configuration registers associated with the computing environment.
Additionally, the method preferably comprises the step of verifying the trusted computing environment including the steps of: (e) identifying the computing environment; (f) supplying the integrity metric for the host operating system; and (g) supplying the integrity metric for the computing environment.
Although the present invention has been introduced above in terms of a single computing environment, preferably a plurality of computing environments are provided on a single host computing platform. Suitably, the step (c) comprises providing a plurality of computing environments each including a guest operating system, and the step (d) comprises obtaining an integrity metric of each computing environment.
According to a second aspect of the present invention there is provided a method for verifying integrity of a trusted computing environment amongst many on a single host computing platform running a host operating system, each computing environment comprising a guest operating system running on the host operating system, the method comprising the steps of: (a) identifying the computing environment; (b) supplying an integrity metric of the host operating system; and (c) supplying an integrity metric associated with the identified computing environment.
Preferably, the step (a) comprises receiving identity information associated with the computing environment, such as receiving information about a process running in a computing environment, and determining the computing environment which contains that process.
According to a third aspect of the present invention there is provided a computing platform, comprising: a host operating system; a plurality of computing environments each comprising a guest operating system running on the host operating system; and a trusted device for obtaining an integrity metric of the host operating system and an integrity metric of each computing environment.
Preferably, the trusted device stores the integrity metric for the host operating system and the integrity metric for each guest operating system. Preferably, the trusted device stores each integrity metric in a platform configuration register or a group of platform configuration registers. Preferably, the trusted device allocates a platform configuration register or group of platform configuration registers to each computing environment.
For a better understanding of the invention, and to show how embodiments of the same may be carried into effect, reference will now be made, by way of example, to the accompanying diagrammatic drawings in which:
In the preferred embodiment the hardware 21 includes a trusted device 213. The trusted device 213 is suitably a physical component such as an application specific integrated circuit (ASIC). Preferably the trusted device is mounted within a tamper-resistant housing. The trusted device 213 is coupled to the computing unit 212, and ideally to the local user interface unit 211. The trusted device 213 is preferably mounted on a motherboard of the computing unit 212. The trusted device 213 functions to bind the identity of the computing platform 20 to reliably measured data that provides an integrity metric of the platform.
Preferably, the trusted device 213 performs a secure boot process when the computing platform 20 is reset to ensure that the host operating system 22 of the platform 20 is running properly and in a secure manner. During the secure boot process, the trusted device 213 acquires an integrity metric (or a group of integrity metrics) of the computing platform 20, such as by examining operation of the computing unit 212 and the local user interface unit 211. The integrity metrics are then available for a user to determine whether to trust the computing platform to operate in a predicted manner. In particular, a trusted computing platform is expected not to be subject to subversion such as by a virus or by unauthorised access. The user may be a local user of the computing platform, or a remote user communicating with the computing platform by networking (including LAN, WAN, internet and other forms of networking).
WO 00/48063 (Hewlett-Packard) discloses an example computing platform suitable for use in preferred embodiments of the present invention. In this example the trusted device 213 acquires a hash of a BIOS memory of the computing unit 212 after reset. The trusted device 213 receives memory read signals from the main processor and returns instructions for the main processor to form the hash. The hash is stored in the trusted device 213, which then returns an instruction that calls the BIOS program and a boot procedure continues as normal.
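The measurement described above, forming a hash of the BIOS memory before the boot procedure continues, can be sketched as follows. This is a minimal illustrative sketch, not the actual trusted-device logic; the function name and example image bytes are invented for illustration, and SHA-1 is assumed as the digest function.

```python
import hashlib

def measure_bios(bios_image: bytes) -> bytes:
    """Form an integrity metric of the BIOS: a digest of the BIOS
    memory contents, taken before control passes to the BIOS itself."""
    return hashlib.sha1(bios_image).digest()

# The trusted device stores the digest; a challenger later compares it
# against a value vouched for by a trusted party. A single changed bit
# in the BIOS image yields a completely different metric.
metric = measure_bios(b"\x55\xaa example BIOS image")
```

Because the hash is taken before the BIOS runs, a modified BIOS cannot conceal its own modification.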
Preferably, the trusted device 213 controls the local user interface 211 such that a local user can trust the display of data provided on a visual display unit. WO 00/73913 (Hewlett-Packard) discloses an example system for providing a trustworthy user interface by locating a driver for the visual display unit within the trusted device 213.
The hardware 21 may also comprise a trusted user interface for performing secure communication with a user device such as a smart card held by the user. The trusted user interface allows the user to perform trusted communications with the trusted device 213 in order to verify the integrity of the computing platform 20. The use of a smart card or other token for trusted user interaction is described in more detail in WO 00/54125 (Hewlett-Packard) and WO 00/54126 (Hewlett-Packard).
The computing platform 20 provides a computing environment 24 which gives access to resources of the computing platform, such as processor time, memory area, and filespace. Preferably, a plurality of discrete computing environments 24 are provided. Each computing environment is logically distinct, but shares access to at least some of the resources of the computing platform with other computing environments.
Suitably, the computing environment 24 comprises a compartment. The actions or privileges within a compartment are constrained, particularly to restrict the ability of a process to execute methods and operations which have effect outside the compartment, such as methods that request network access or access to files outside of the compartment. Also, operation of the process within the compartment is performed with a high level of isolation from interference and prying by outside influences.
Preferably, the compartment is an operating system compartment controlled by a kernel of the host operating system 22. This is also referred to as a compartmented operating system or a trusted operating system.
Compartmented operating systems have been available for several years in a form designed for handling and processing classified (military) information, using a containment mechanism enforced by a kernel of the operating system with mandatory access controls to resources of the computing platform such as files, processes and network connections. The operating system attaches labels to the resources and enforces a policy which governs the allowed interaction between these resources based on their label values. Most compartmented operating systems apply a policy based on the Bell-LaPadula model discussed in the paper “Applying Military Grade Security to the Internet” by C I Dalton and J F Griffin published in Computer Networks and ISDN Systems 29 (1997) 1799-1808.
The preferred embodiment of the present invention adopts a simple and convenient form of operating system compartment. Each resource of the computing platform which it is desired to protect is given a label indicating the compartment to which that resource belongs. Mandatory access controls are performed by the kernel of the host operating system to ensure that resources from one compartment cannot interfere with resources from another compartment. Access controls can follow relatively simple rules, such as requiring an exact match of the label.
Examples of resources include data structures describing individual processes, shared memory segments, semaphores, message queues, sockets, network packets, network interfaces and routing table entries.
Communication between compartments is provided using narrow kernel level controlled interfaces to a transport mechanism such as TCP/UDP. Access to these communication interfaces is governed by rules specified on a compartment by compartment basis. At appropriate points in the kernel, access control checks are performed such as through the use of hooks to a dynamically loadable security module that consults a table of rules indicating which compartments are allowed to access the resources of another compartment. In the absence of a rule explicitly allowing a cross compartment access to take place, an access attempt is denied by the kernel. The rules enforce mandatory segmentation across individual compartments, except for those compartments that have been explicitly allowed to access another compartment's resources. Communication between a compartment and a network resource is provided in a similar manner. In the absence of an explicit rule, access between a compartment and a network resource is denied.
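The rule consultation described above, explicit allow entries with a default of denial, can be sketched as follows. This is a minimal sketch assuming string compartment labels; the rule table contents and function name are hypothetical, and a real implementation performs this check inside the kernel via a security module.

```python
# Hypothetical rule table: (source, target) compartment pairs that are
# explicitly allowed to interact. Any pair absent from the table is denied.
ALLOWED = {
    ("web", "db"),   # e.g. a web compartment may reach a database compartment
}

def access_permitted(src: str, dst: str) -> bool:
    """Mandatory access check: resources with an exactly matching label
    may interact; cross-compartment access requires an explicit rule."""
    if src == dst:                    # exact label match within a compartment
        return True
    return (src, dst) in ALLOWED      # otherwise: default deny
```

Note that the rules are directional: allowing "web" to reach "db" does not allow "db" to initiate access to "web" unless a second rule says so.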
Suitably, each compartment is allocated an individual section of a file system of the computing platform. For example, the section is a chroot of the main file system. Processes running within a particular compartment only have access to that section of the file system. Through kernel controls, the process is restricted to the predetermined section of file system and cannot escape. In particular, access to the root of the file system is denied.
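The effect of confining a compartment to its own section of the file system can be sketched as a path check: any path a compartment process requests is resolved relative to the compartment's root, and anything that would escape it is refused. This is an illustrative user-space sketch of the policy only (a chroot enforces it at kernel level); the function name is invented and POSIX paths are assumed.

```python
import os.path

def confined_path(compartment_root: str, requested: str) -> str:
    """Resolve a path requested by a compartment process, refusing any
    result that lies outside the compartment's file-system section."""
    candidate = os.path.normpath(
        os.path.join(compartment_root, requested.lstrip("/")))
    root = os.path.normpath(compartment_root)
    if not candidate.startswith(root + os.sep):
        # e.g. "../../etc/passwd" normalises to a path outside the root
        raise PermissionError("path escapes compartment: " + requested)
    return candidate
```

In particular, a request for the root of the file system, or any `..` traversal above the compartment root, is denied.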
Advantageously, a compartment provides a high level of containment, whilst reducing implementation costs and changes required in order to implement an existing application within the compartment.
Referring to
The process 23 runs on a guest operating system 25. The guest operating system 25 is suitably provided by a virtual machine application 26. The virtual machine application 26 runs on the host operating system 22 and provides an image of a computing platform, or at least appropriate parts thereof. The virtual machine application 26 provides the virtual guest operating system 25 such that, as far as the process 23 is concerned, the process 23 runs on the guest operating system 25 equivalent to running on a host operating system 22. For the purposes of the present invention, the guest operating system 25 is preferably a replica of the host operating system, or at least necessary parts thereof. However, it is equally possible for the virtual machine application 26 to provide a different emulated software or hardware environment, such as a different operating system type or version. An example virtual machine application is sold under the trade mark VMware by VMware, Inc of Palo Alto, Calif., USA.
The virtual machine application 26 assists security by isolating the process 23 from the remainder of the computing platform. Should problems occur during running of the process 23 or as a result thereof, the host operating system 22 can safely shut down the guest operating system 25 provided by the virtual machine application 26. Also, the virtual machine application 26 protects the host operating system 22 and hardware resources 21 from direct access by the process 23. Therefore, it is very difficult for the process 23 to subvert the host operating system 22. Further, the process 23 accesses resources of the computing platform made available through the virtual machine application 26. Each process 23 only sees resources of the computing platform allocated through the virtual machine application 26, such that each process 23 can be restricted to an appropriate share of the resources of the computing platform and cannot deprive other processes of their allocated shares.
Preferably, the virtual machine application 26 providing the guest operating system 25 runs in a compartment 220 of the host operating system 22. The compartment confines communications and data access of the virtual machine application. The compartment 220 provides secure separation between applications, such that processes are inhibited from communicating with each other, accessing each others status, or interfering with each other, except in accordance with strictly enforced access controls. In particular, a compartment assists the virtual machine application in resisting subversion by a process running in that computing environment.
Referring again to
As described above, the trusted device 213 is arranged to form an integrity metric (or a group of integrity metrics) of the host operating system 22. Also, in the preferred embodiments of the present invention, the trusted device 213 is arranged to obtain an integrity metric (or a group of integrity metrics) for each computing environment 24. Preferably, the trusted device 213 obtains an integrity metric of the guest operating system 25. Further, the trusted device preferably obtains an integrity metric of the virtual machine application 26. Each integrity metric suitably comprises one or more separate integrity metric values.
In the preferred configuration the host operating system 22 has direct access to the trusted device 213. However, to improve security, processes (i.e. applications) running on the host operating system 22 do not have direct access to the trusted device 213. Therefore, a trusted device driver 221 is provided, suitably as part of the host operating system 22. The trusted device driver 221 provides an interface available to applications running on the host operating system 22, including allowing results to be reported to the trusted device 213, and allowing stored integrity metric values to be obtained from the trusted device 213.
The stored integrity metric value 231 preferably represents a sequence of integrity metric values obtained, for example, by examination of the host platform 20 periodically or in response to relevant events. The old stored integrity metric value is combined with a new integrity metric value to produce a new updated digest of the sequence of values.
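The combining step described above, folding each new integrity metric value into the stored value so that the register holds a digest of the whole sequence, can be sketched as follows. This is a minimal sketch comparable to the TCPA "extend" operation; SHA-1 and a zero-initialised register are assumed, and the function name is illustrative.

```python
import hashlib

def extend_pcr(current: bytes, new_metric: bytes) -> bytes:
    """Combine the old stored value with a new integrity metric value,
    producing an updated digest of the whole sequence of measurements."""
    return hashlib.sha1(current + new_metric).digest()

pcr = b"\x00" * 20                        # register starts at a known value
for measurement in (b"bios", b"loader", b"host-os"):
    pcr = extend_pcr(pcr, hashlib.sha1(measurement).digest())
# pcr now depends on every measurement taken, and on their order, so a
# challenger can detect both a changed component and a changed boot sequence.
```

Because the register only ever accepts extensions, software measured later cannot erase the record of what was measured earlier.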
In step 401, the host operating system 22 is provided. Suitably, this includes the steps of starting a BIOS, starting an OS loader, and starting the host operating system as will be familiar to the skilled person.
In step 402, a group of integrity metrics 230 for the host operating system 22 are measured and reported to the trusted device 213. Preferably, the trusted device 213 obtains an integrity metric for the BIOS, and preferably also obtains an integrity metric for the OS loader and the operating system software. Preferably, integrity metric values relevant to the host operating system are stored in a group of PCRs (or other addressable storage) such that the integrity metrics 230 for the host operating system are available later. Steps 401 and 402 are shown separately for clarity. In practical embodiments of the invention it will be appreciated that the integrity metrics 230 are obtained concurrently with providing the host OS 22.
Optionally, at step 403 additional integrity metrics are obtained relevant to other selected elements of the computing platform. For example, the trusted device 213 performs data event logging as described in WO 00/73880 (Hewlett-Packard). Also, the trusted device 213 may produce a digest by applying a hash function to all or selected data files stored on the computing platform, as described in WO 00/73904 (Hewlett-Packard). Preferably, at least some of the integrity metrics obtained in step 402 or step 403 are updated periodically or in response to relevant events to confirm the current integrity status of the host operating system and related components of the computing platform.
In step 404, a guest operating system 25 is provided, to form a new computing environment 24. Suitably, step 404 includes providing a virtual machine application 26 which provides the guest operating system 25.
Preferably, the step 404 includes providing the guest operating system 25 in a compartment 220 of the host operating system 22. Also, the step 404 preferably includes providing a history of all processes (applications) launched in the compartment. Here, it is desired to record whether any other applications have been launched alongside the virtual machine application 26 which provides the guest operating system 25.
In step 405, the trusted device 213 obtains an integrity metric for the computing environment 24. In particular, the trusted device 213 obtains an integrity metric or group of integrity metrics 230 for the guest operating system 25, and preferably the virtual machine application 26. The corresponding integrity metric values 231 are stored in a PCR or group of PCRs allocated to that computing environment. Also, the step 405 preferably includes obtaining an integrity metric for the or each process 23 in the computing environment. Suitably, each integrity metric is obtained by forming a digest (hash value) of program code of a process. As will be familiar to the skilled person, the term integrity metric can refer to a single data item, or can refer to a metric formed from two or more parts each of which themselves can be considered an integrity metric.
Preferably, step 405 is repeated such that a current integrity status of the computing environment is available and history information is updated, periodically or in response to a relevant event.
When it is desired to create or update a stored integrity metric for a particular computing environment, a result is reported to the trusted device driver 221 along with information identifying that particular computing environment, such as an arbitrary label. In one preferred embodiment a process ID of the virtual machine application 26 is used to identify the computing environment. In another embodiment each logical computing environment is supplied with a secret, e.g. a secret is supplied to the virtual machine application 26 by the trusted device driver 221, and then the secret is subsequently used to identify the computing environment. Suitably the computing environment label, such as a secret, is supplied by the host OS 22 when the virtual machine application 26 is launched.
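The bookkeeping described above, supplying each computing environment with a secret at launch and routing reported results to the storage allocated to that environment, can be sketched as follows. This is an illustrative sketch only: the class and method names are invented, and an in-memory list stands in for a PCR group held by the trusted device.

```python
import secrets

class TrustedDeviceDriverSketch:
    """Driver-side bookkeeping: each computing environment receives a
    secret label when launched, and metrics reported with that label
    are routed to the register group allocated to that environment."""

    def __init__(self):
        self._env_by_secret = {}   # secret label -> environment identity
        self._pcr_group = {}       # environment identity -> metric values

    def launch_environment(self, env_id: str) -> str:
        secret = secrets.token_hex(16)   # supplied to the VM application
        self._env_by_secret[secret] = env_id
        self._pcr_group[env_id] = []
        return secret

    def report_metric(self, secret: str, value: bytes) -> None:
        env_id = self._env_by_secret[secret]   # identifies the environment
        self._pcr_group[env_id].append(value)
```

Because only the environment holding the secret can report under that label, one computing environment cannot write integrity metric values into another environment's register group.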
Referring to
Optionally, in step 501 a secure channel is established for communicating with the computing platform 20. For a local user 10, a secure channel is provided such as by using a trustworthy user interface and/or by using a token such as a smart card. A remote user 10 establishes a secure channel 30 such as by performing authentication of the computing platform, ideally using a signature from the trusted device 213. Here again, the user optionally employs trusted hardware, such as the user's own client platform, a PDA, mobile phone or other device, optionally in co-operation with a smart card or other token. Preferably, the step 501 includes establishing the authentication and authorisation of the user.
In step 502, the user 10 requests demonstration of the integrity of a computing environment 24. For example, the user 10 issues an integrity challenge. To avoid a replay attack, the challenge suitably includes a random number sequence (nonce). More detailed background information is provided in “TCPA Specification Version 1.0” published by the Trusted Computing Platform Alliance.
In step 503 the trusted device 213 supplies integrity metrics associated with the host operating system 22. Suitably, these integrity metrics include integrity metrics for the BIOS, operating system loader and host operating system, and integrity metrics formed by periodic or event-driven checks on the host operating system and related components of the computing platform.
In step 504, the trusted device 213 supplies an integrity metric associated with the selected computing environment. Preferably, the step 504 includes supplying integrity metrics associated with the virtual machine application 26, the guest operating system 25, the process 23, and a history of periodic or event-driven checks made on the integrity status of the computing environment 24.
The step 504 preferably includes supplying a history of any applications launched by the host operating system in the same compartment as the guest operating system, i.e. alongside the virtual machine application 26.
Preferably, in step 505 the integrity metric for the host operating system 22 and the computing environment 24 are compared against expected values, such as by using a certificate issued by a trusted party that is prepared to vouch for the integrity of the computing platform. If the comparison is successful, the computing environment is considered to be a trusted computing environment.
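Steps 502 to 505 can be sketched as a challenge-response exchange: the challenger's nonce is bound into the trusted device's reply so the reply cannot be replayed, and the reported metrics are then compared against expected values. This is a minimal sketch, not the TCPA protocol itself: an HMAC under an invented device key stands in for the trusted device's signature, and all names are illustrative.

```python
import hashlib
import hmac

DEVICE_KEY = b"illustrative-device-signing-key"   # stand-in for the device key

def quote(nonce: bytes, host_metrics: bytes, env_metrics: bytes) -> bytes:
    """Trusted-device reply to an integrity challenge: binds the host
    and per-environment metrics to the challenger's nonce."""
    return hmac.new(DEVICE_KEY, nonce + host_metrics + env_metrics,
                    hashlib.sha1).digest()

def verify(nonce, host_metrics, env_metrics, response,
           expected_host, expected_env) -> bool:
    ok_sig = hmac.compare_digest(response,
                                 quote(nonce, host_metrics, env_metrics))
    # Step 505: compare the reported metrics against values vouched for
    # by a trusted party; only then is the environment considered trusted.
    return ok_sig and host_metrics == expected_host and env_metrics == expected_env
```

A stale response fails verification because it was formed over a different nonce, and tampered metrics fail the comparison against the expected values.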
In a first example, the integrity challenge is issued directly to a component of the host operating system 22, such as the trusted device driver 221. In this embodiment, the integrity challenge includes information previously given to the user 10, such as an arbitrary label, which allows the trusted device driver 221 to establish the relevant computing environment 24. The external computing environment identity label given to the user 10 may be the same as, or complementary to, any information held internally identifying the computing environment. Suitably, the external identity information supplied as part of the integrity challenge is matched against a list of computing environments currently provided on the host operating system, this step ideally being performed by the trusted device driver 221. Suitably, there is a one-to-one relationship between the compartment identity label as given to the user 10, and any compartment identity label used internally in the host computing platform 20. In step 504 the trusted device 213 supplies an integrity metric or group of integrity metrics 230 associated with the identified computing environment 24.
In a second preferred example, the integrity challenge is issued from the user 10 and is received by a component of the relevant computing environment 24, such as the process 23 which suitably forms part of an application running in that computing environment 24. The integrity challenge is passed from the computing environment 24 to the trusted device driver 221. In this case, the trusted device driver 221 can readily establish the identity of the computing environment 24 passing the integrity challenge. In one example embodiment the computing environment 24 supplies an internal computing environment identity label such as a process ID of the virtual machine application 26, or a secret previously given to the virtual machine application 26 by the host operating system 22. In step 504 the trusted device 213 supplies integrity metrics associated with that computing environment 24.
In a further preferred aspect that can be applied to any of the methods described herein, the guest operating system 25 is itself a compartmented operating system. Multiple applications can be run on the guest operating system 25, each within a separate compartment of the guest operating system. This embodiment enables each computing environment 24 to be subdivided, and the method described above is applied to the subdivided computing environments.
Advantageously, a trusted computing environment is provided by using a trusted device to verify that a guest operating system has booted in a trusted manner. By repeating this process and running multiple guest operating systems, multiple trusted computing environments are provided. A first application can run in a first of the computing environments, whilst a second application can run in a second of the computing environments, where the first and second applications are mutually incompatible or one does not trust the other. The preferred implementation using a virtual machine application in combination with a compartment allows each computing environment to be independently trusted.
It is very difficult for a process running in one computing environment to affect the integrity of any other computing environment. Advantageously, a user can verify the integrity of one computing environment without reference to the integrity of any other computing environment. In the preferred implementation each computing environment has an associated set of one or more integrity metrics which do not include or depend on information about any other computing environment.
Number | Date | Country | Kind
---|---|---|---
0114891.5 | Jun 2001 | GB | national
20020023212 | Proudler | Feb 2002 | A1 |
20020069354 | Fallon et al. | Jun 2002 | A1 |
20020120575 | Pearson et al. | Aug 2002 | A1 |
20020184486 | Kershenbaum et al. | Dec 2002 | A1 |
20020184520 | Bush et al. | Dec 2002 | A1 |
20020188935 | Hertling et al. | Dec 2002 | A1 |
20030084436 | Berger et al. | May 2003 | A1 |
20030145235 | Choo | Jul 2003 | A1 |
20030191957 | Hypponen et al. | Oct 2003 | A1 |
20030196083 | Grawrock et al. | Oct 2003 | A1 |
20030196110 | Lampson et al. | Oct 2003 | A1 |
20040045019 | Bracha et al. | Mar 2004 | A1 |
20040148514 | Fee et al. | Jul 2004 | A1 |
20050256799 | Warsaw et al. | Nov 2005 | A1 |
Number | Date | Country |
---|---|---
2 187 855 | Jun 1997 | CA |
0 304 033 | Feb 1989 | EP |
0 421 409 | Apr 1991 | EP |
0 510 244 | Oct 1992 | EP |
0 580 350 | Jan 1994 | EP |
0 825 511 | Feb 1998 | EP |
0 849 657 | Jun 1998 | EP |
0 849 680 | Jun 1998 | EP |
0 465 016 | Dec 1998 | EP |
0 893 751 | Jan 1999 | EP |
0 895 148 | Feb 1999 | EP |
0 926 605 | Jun 1999 | EP |
0 992 958 | Apr 2000 | EP |
1 056 014 | Aug 2000 | EP |
1 030 237 | Aug 2000 | EP
1 049 036 | Nov 2000 | EP |
1 055 990 | Nov 2000 | EP |
1 056 010 | Nov 2000 | EP |
1 076 279 | Feb 2001 | EP |
1 107 137 | Jun 2001 | EP |
2 317 476 | Mar 1998 | GB |
2 336 918 | Nov 1999 | GB |
00204412 | Aug 2000 | GB |
2 353 885 | Mar 2001 | GB |
2 361 153 | Oct 2001 | GB |
9325024 | Dec 1993 | WO |
9411967 | May 1994 | WO |
9524696 | Sep 1995 | WO |
9527249 | Oct 1995 | WO |
9729416 | Aug 1997 | WO |
9815082 | Apr 1998 | WO |
9826529 | Jun 1998 | WO |
9836517 | Aug 1998 | WO |
9840809 | Sep 1998 | WO |
9844402 | Oct 1998 | WO |
9845778 | Oct 1998 | WO |
0016200 | Mar 2000 | WO |
0019324 | Apr 2000 | WO |
0031644 | Jun 2000 | WO |
0048062 | Aug 2000 | WO |
0048063 | Aug 2000 | WO |
0052900 | Sep 2000 | WO |
0054125 | Sep 2000 | WO |
0054126 | Sep 2000 | WO |
0058859 | Oct 2000 | WO |
0073880 | Dec 2000 | WO |
0073904 | Dec 2000 | WO |
0073913 | Dec 2000 | WO |
0109781 | Feb 2001 | WO |
0113198 | Feb 2001 | WO |
0123980 | Apr 2001 | WO |
0127722 | Apr 2001 | WO |
0142889 | Jun 2001 | WO |
0165334 | Sep 2001 | WO |
0165366 | Sep 2001 | WO |
Number | Date | Country
---|---|---
20020194496 A1 | Dec 2002 | US |