The present invention relates to the field of computers, and particularly, although not exclusively, to a computing entity which can be placed into a trusted state, and a method of operating the computing entity to achieve the trusted state, and operation of the computing entity when in the trusted state.
Conventional prior art mass market computing platforms include the well-known personal computer (PC) and competing products such as the Apple Macintosh™, and a proliferation of known palm-top and laptop personal computers. Generally, markets for such machines fall into two categories, these being domestic or consumer, and corporate. A general requirement for a computing platform for domestic or consumer use is a relatively high processing power, Internet access features, and multi-media features for handling computer games. For this type of computing platform, the Microsoft Windows® '95 and '98 operating system products and Intel processors dominate the market.
On the other hand, for business use, there is a plethora of proprietary computer platform solutions aimed at organizations ranging from small businesses to multi-national organizations. In many of these applications, a server platform provides centralized data storage and application functionality for a plurality of client stations. For business use, other key criteria are reliability, networking features, and security features. For such platforms, the Microsoft Windows NT 4.0™ operating system is common, as is the Unix™ operating system.
With the increase in commercial activity transacted over the Internet, known as “e-commerce”, there has been much interest in the prior art in enabling data transactions between computing platforms over the Internet. However, because of the potential for fraud and manipulation of electronic data, such proposals have so far stopped short of fully automated transactions with distant unknown parties on the wide-spread scale required for a fully transparent and efficient market place. The fundamental issue is one of trust between interacting computer platforms when making such transactions.
There have been several prior art schemes which are aimed at increasing the security and trustworthiness of computer platforms. Predominantly, these rely upon adding security features at the application level; that is to say, the security features are not inherently embedded in the kernel of the operating system, and are not built into the fundamental hardware components of the computing platform. Portable computer devices have already appeared on the market which include a smart card, containing data specific to a user, which is input into a smart card reader on the computer. Presently, such smart cards are at the level of being add-on extras to conventional personal computers, and in some cases are integrated into the casing of a known computer. Although these prior art schemes go some way to improving the security of computer platforms, the levels of security and trustworthiness gained may be considered insufficient to enable widespread application of automated transactions between computer platforms. For businesses to expose significant value transactions to electronic commerce on a widespread scale, they require confidence in the trustworthiness of the underlying technology.
Prior art computing platforms have several problems which stand in the way of increasing their inherent security:
It is known to provide security features for computer systems which are embedded in operating software. These security features are primarily aimed at providing division of information within a community of users of the system. The known Microsoft Windows NT™ 4.0 operating system includes a monitoring facility called the “system log event viewer”, in which a log of events occurring within the platform is recorded into an event log data file which can be inspected by a system administrator using the Windows NT operating system software. This facility goes some way towards enabling a system administrator to monitor pre-selected events for security purposes, and thereby provides a degree of system monitoring.
In terms of overall security of a computer platform, a purely software based system is vulnerable to attack, for example by viruses, of which there are thousands of different varieties. Several proprietary virus finding and correcting applications are known, for example the Dr Solomon's™ virus toolkit program. The Microsoft Windows NT™ 4.0 software includes virus guard software, which is preset to look for known viruses. However, new virus strains are being developed and released into the computing and internet environment on an ongoing basis, and the virus guard software will not give reliable protection against newer unknown viruses.
Further, prior art monitoring systems for computer entities focus on network monitoring functions, where an administrator uses network management software to monitor performance of a plurality of network computers. In these known systems, trust in the system does not reside at the level of individual trust of each hardware unit of each computer platform in a system.
One object of the present invention is to provide a computing entity in which a third party user can have a high degree of confidence that the computing entity has not been corrupted by an external influence, and is operating in a predictable and known manner.
Another object of the present invention is to simplify a task of judging whether a trustworthiness of a computing entity is sufficient to perform a particular task or set of tasks or type of task.
In specific implementations of the present invention, a computing entity is capable of residing in a plurality of distinct operating states. Each operating state can be distinguished from other operating states using a set of integrity metrics designed to distinguish between those operating states.
According to a first aspect of the present invention there is provided a computing entity comprising:
a computer platform comprising a plurality of physical and logical resources including a first data processor and a first memory means;
a monitoring component comprising a second data processor and a second memory means;
wherein, said computer platform is capable of operating in a plurality of different states, each said state utilising a corresponding respective set of individual ones of said physical and logical resources;
wherein said monitoring component operates to determine which of said plurality of states said computer platform operates in.
Preferably a said memory means contains a set of instructions for configuration of said plurality of physical and logical resources of said computer platform into said pre-determined state.
Preferably exit of said computer platform from said pre-determined state is monitored by said monitoring component.
A BIOS (Basic Input Output System) may be provided within the monitoring component itself. By providing the BIOS file within the monitoring component, the BIOS file may be inherently trusted.
In an alternative embodiment, said computer platform may comprise an internal firmware component configured to compute a digest data of a BIOS file data stored in a predetermined memory space occupied by a BIOS file of said computer platform.
According to a second aspect of the present invention there is provided a method of activating a computing entity comprising a computer platform having a first data processing means and a first memory means and a monitoring component having a second data processing means and a second memory means, into an operational state of a plurality of pre-configured operational states into which said computer platform can be activated, said method comprising the steps of:
selecting a state of said plurality of pre-configured operational states into which to activate said computer platform;
activating said computer platform into said selected state according to a set of stored instructions; and
wherein said monitoring component monitors activation into said selected state by recording data describing which of said plurality of pre-configured states said computer platform is activated into.
Said monitoring component may continue to monitor said selected state after said computer platform has been activated to said selected state.
Said monitoring component may generate a state signal in response to a signal input directly to said monitoring component by a user of said computing entity, said state signal containing data describing which said state said computer platform has entered.
In one embodiment, said set of stored instructions which allow selection of said state may be stored in a BIOS file resident within said monitoring component. Once selection of a said state has been made, activation of the state may be carried out by a set of master boot instructions which are themselves activated by the BIOS.
Preferably the method comprises the step of generating a menu for selection of a said pre-configured state from said plurality of pre-configured states.
The method may comprise the step of generating a user menu displayed on a user interface for selection of a said pre-configured state from said plurality of pre-configured states, and said step of generating a state signal comprises generating a state signal in response to a user input accepted through said user interface.
Alternatively, the predetermined state may be automatically selected by a set of instructions stored on a smartcard, which selects a state option generated by said BIOS. The selection of states may be made automatically via a set of selection instructions to instruct said BIOS to select a state from said set of state options generated by said BIOS.
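The state selection step above, whether driven by a user menu or by instructions stored on a smartcard, can be sketched as follows. This is an illustrative sketch only; the function and option names are hypothetical and not part of the original disclosure.

```python
# Hypothetical sketch of boot-state selection. The BIOS is assumed to
# generate a fixed set of state options; selection is either automatic
# (from smartcard-stored instructions) or interactive (from a user menu).

BIOS_STATE_OPTIONS = ["trusted", "second", "third"]  # states generated by the BIOS

def select_state(smartcard_instructions=None, user_choice=None):
    """Select a boot state, either automatically from smartcard-stored
    instructions or interactively from a user menu selection."""
    if smartcard_instructions is not None:
        # Automatic selection: the smartcard carries a selection
        # instruction naming one of the BIOS-generated state options.
        choice = smartcard_instructions.get("state")
        if choice in BIOS_STATE_OPTIONS:
            return choice
        raise ValueError("smartcard selected an unknown state")
    # Interactive selection from a menu of pre-configured states.
    if user_choice in BIOS_STATE_OPTIONS:
        return user_choice
    return "second"  # hypothetical default state when no valid choice is made
```

Either path resolves to one member of the same pre-configured set, which is what allows the monitoring component to record unambiguously which state was entered.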
Said step of monitoring a said state may comprise:
immediately before activating said computer platform, creating by means of a firmware component a digest data of a first pre-allocated memory space occupied by a BIOS file of said computer platform;
writing said digest data to a second pre-allocated memory space to which only said firmware component has write access; and
said monitoring component reading said digest data from said second pre-allocated memory space.
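The three steps above amount to a digest-and-store protocol. A minimal sketch follows, modelling the memory spaces as byte strings and a dict standing in for the write-protected region; all names are hypothetical, and SHA-1 is assumed only as a digest algorithm typical of the era.

```python
import hashlib

def compute_bios_digest(bios_memory: bytes) -> bytes:
    """Create a digest of the pre-allocated memory space occupied by the
    BIOS file, immediately before the platform is activated."""
    return hashlib.sha1(bios_memory).digest()

def store_digest(protected_store: dict, digest: bytes) -> None:
    # Writing is assumed to be possible only for the firmware component;
    # the dict merely models that second pre-allocated memory space.
    protected_store["bios_digest"] = digest

def read_digest(protected_store: dict) -> bytes:
    # The monitoring component reads the digest from the protected space.
    return protected_store["bios_digest"]
```

Because only the firmware can write the digest and only after it is written does boot proceed, the monitoring component can later compare the stored value against a known-good digest to detect BIOS tampering.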
Said step of monitoring a said state into which said computer platform is activated may comprise:
executing a firmware component to compute a digest data of a BIOS file of said computer platform;
writing said digest data to a predetermined location in said second memory means of said monitoring component.
Said step of activating said computer platform into said selected state may comprise:
at a memory location of said first memory means, said location occupied by a BIOS file of said computer platform, storing an address of said monitoring component which transfers control of said first processor to said monitoring component;
storing in said monitoring component a set of native instructions which are accessible immediately after reset of said first processor, wherein said native instructions instruct said first processor to calculate a digest of said BIOS file and store said digest data in said second memory means of said monitoring component; and
said monitoring component passing control of said activation process to said BIOS file, once said digest data is stored in said second memory means.
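The activation sequence above can be summarised as a control-flow sketch: on reset the processor is directed to the monitoring component rather than the BIOS, the BIOS file is digested and the digest stored, and only then is control handed to the BIOS. All function and variable names below are invented for illustration.

```python
import hashlib

def boot(platform_memory: bytes, monitor_store: dict, events: list):
    """Model of the activation sequence: the reset vector points at the
    monitoring component's native instructions, which digest the BIOS
    file and store the result in the monitoring component's memory,
    before control is passed back to the BIOS file."""
    # 1. The address stored at the BIOS memory location transfers control
    #    of the first processor to the monitoring component on reset.
    events.append("reset -> monitoring component")
    # 2. Native instructions calculate a digest of the BIOS file and
    #    store it in the second memory means.
    monitor_store["bios_digest"] = hashlib.sha1(platform_memory).digest()
    events.append("digest stored in second memory means")
    # 3. Only then is control of the activation process passed to the BIOS.
    events.append("control -> BIOS")
    return events
```

The ordering is the point: the digest is taken before the BIOS executes, so the recorded measurement cannot be influenced by the code it measures.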
Said step of monitoring said state into which said computer platform is activated may comprise:
after said step of activating said computer platform into said selected state, monitoring a plurality of logical and physical components to obtain a first set of metric data signals from those components, said metric data signals describing a status and condition of said components;
comparing said first set of metric data signals obtained directly from said plurality of physical and logical components of said computer platform with a set of pre-recorded metric data stored in a memory area reserved for access only by said monitoring component.
According to a third aspect of the present invention there is provided a method of operating a computing entity comprising a computer platform having a first data processing means and a first memory means, and a monitoring component having a second data processing means and a second memory means, such that said computer platform enters one of a plurality of possible pre-determined operating states, said method comprising the steps of:
in response to an input from a user interface, generating a state signal, said state signal describing a selected state into which said computer platform is to be activated;
activating said computer platform into a pre-determined state, in which a known set of physical and logical resources are available for use in said state and known processes can operate in said state;
from said pre-determined state, entering a configuration menu for reconfiguration of said monitoring component; and
modifying a configuration of said monitoring component by entering data via a user interface in accordance with an instruction set comprising said configuration menu.
Said step of entering said monitoring component configuration menu may comprise:
entering a confirmation key signal directly into said monitoring component, said confirmation key signal generated in response to a physical activation of a confirmation key.
Said step of entering said monitoring component configuration menu may comprise entering a password to said trusted component via a user interface.
According to a fourth aspect of the present invention there is provided a method of operation of a computing entity comprising a monitoring component having a first data processing means and a first memory means, and a computer platform having a second data processing means and a second memory means, said method comprising the steps of:
entering a first state of said computer entity, wherein in said first state are available a plurality of pre-selected physical and logical resources;
commencing a user session in said first state, in which user session a plurality of data inputs are received by said computer platform, said second data processing means performing data processing on said received data;
reconfiguring said plurality of physical and logical resources according to instructions received in said session;
generating session data describing a configuration of said physical and logical resources;
generating a plurality of user data resulting from processes operating within said session;
storing said user data;
storing session data;
exiting said session; and
exiting said computer platform from said state.
Said method may further comprise the step of reconfiguring said monitoring component during said user session in said first state. Thus, the monitoring component may be reconfigured from a trusted state of the computer platform.
For a better understanding of the invention and to show how the same may be carried into effect, there will now be described by way of example only, specific embodiments, methods and processes according to the present invention with reference to the accompanying drawings in which:
There will now be described by way of example the best mode contemplated by the inventors for carrying out the invention. In the following description numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent however, to one skilled in the art, that the present invention may be practiced without limitation to these specific details. In other instances, well known methods and structures have not been described in detail so as not to unnecessarily obscure the present invention.
Specific embodiments of the present invention comprise a computer platform having a processing means and a memory means, which is physically associated with a component, known hereinafter as a “trusted component”, which monitors operation of the computer platform by collecting metrics data from the computer platform, and which is capable of verifying the correct functioning of the computer platform to third party computer entities interacting with it.
Two computing entities each provisioned with such a trusted component, may interact with each other with a high degree of ‘trust’. That is to say, where the first and second computing entities interact with each other the security of the interaction is enhanced compared to the case where no trusted component is present, because:
In this specification, the term “trusted” when used in relation to a physical or logical component, is used to mean a physical or logical component which always behaves in an expected manner. The behavior of that component is predictable and known. Trusted components have a high degree of resistance to unauthorized modification.
In this specification, the term “computer platform” is used to refer to at least one data processor and at least one data storage means, usually but not essentially with associated communications facilities e.g. a plurality of drivers, associated applications and data files, and which may be capable of interacting with external entities e.g. a user or another computer entity, for example by means of connection to the internet, connection to an external network, or by having an input port capable of receiving data stored on a data storage medium, e.g. a CD ROM, floppy disk, ribbon tape or the like. The term “computer platform” encompasses the main data processing and storage facility of a computer entity.
Referring to
In general, in the best mode described herein, a trusted computer entity comprises a computer platform consisting of a first data processor, and a first memory means, together with a trusted component which verifies the integrity and correct functioning of the computing platform. The trusted component comprises a second data processor and a second memory means, which are physically and logically distinct from the first data processor and first memory means.
In the example shown in
Referring to
In the best mode herein, as illustrated in
External to the motherboard and connected thereto by data bus 304 are provided the one or more hard disk drive memory devices 203, keyboard data entry device 101, pointing device 105, e.g. a mouse, trackball device or the like; monitor device 100; smart card reader device 103 for accepting a smart card device as described previously; the disk drive(s), keyboard, monitor, and pointing device being able to communicate with processor 201 via said data bus 304; and one or more peripheral devices 307, 308, for example a modem, printer scanner or other known peripheral device.
To provide enhanced security, confirmation key switch 104 is hard wired directly to confirmation key interface 306 on motherboard 200, which provides a direct signal input to trusted component 202 when confirmation key 104 is activated. A user activating the confirmation key thus sends a signal directly to the trusted component, by-passing the first data processor and first memory means of the computer platform.
In one embodiment the confirmation key may comprise a simple switch. Confirmation key 104, and confirmation key driver 306 provide a protected communication path (PCP) between a user and the trusted component, which cannot be interfered with by processor 201, which by-passes data bus 304 and which is physically and logically unconnected to memory area 300 or hard disk drive memory device(s) 203.
Trusted component 202 is positioned logically and physically between monitor 100 and processor 201 of the computing platform, so that the trusted component 202 has direct control over the views displayed on monitor 100 which cannot be interfered with by processor 201.
The trusted component lends its identity and trusted processes to the computer platform and the trusted component has those properties by virtue of its tamper-resistance, resistance to forgery, and resistance to counterfeiting. Only selected entities with appropriate authentication mechanisms are able to influence the processes running inside the trusted component. Neither a user of the trusted computer entity, nor anyone or any entity connected via a network to the computer entity may access or interfere with the processes running inside the trusted component. The trusted component has the property of being “inviolate”.
Smart card reader 103 is wired directly to smart card interface 305 on the motherboard and does not connect directly to data bus 304. Alternatively, smart card reader 103 may be connected directly to data bus 304. On each individual smart card may be stored corresponding respective image data which is different for each smart card. For user interactions with the trusted component, e.g. for a dialogue box monitor display generated by the trusted component, the trusted component takes the image data from the user's smart card and uses this as a background to the dialogue box displayed on the monitor 100. Thus, the user has confidence that the dialogue box displayed on the monitor 100 is generated by the trusted component. The image data is preferably easily recognizable by a human being in a manner such that any forgeries would be immediately apparent visually to a user. For example, the image data may comprise a photograph of the user. The image data on the smart card may be unique to the person using the smart card.
Referring to
Trusted component 202 comprises a physically and logically independent computing entity from the computer platform. In the best mode herein, the trusted component shares a motherboard with the computer platform so that the trusted component is physically linked to the computer platform. In the best mode, the trusted component is physically distinct from the computer platform, that is to say it does not exist solely as a sub-functionality of the data processor and memory means comprising the computer platform, but exists separately as a separate physical data processor 400 and separate physical memory area 401, 402, 403, 404. By providing a physically present trusted component separate from a main processor of the computer entity, the trusted component becomes harder to mimic or forge through software introduced onto the computer platform. Another benefit which arises from the trusted component being physical, separate from the main processor of the platform, and tamper resistant is that the trusted component cannot be physically subverted by a local user, and cannot be logically subverted by either a local user or a remote entity. Programs within the trusted component are pre-loaded at manufacture of the trusted component in a secure environment. The programs cannot be changed by users, but may be configured by users, if the programs are written to permit such configuration. The physicality of the trusted component, and the fact that the trusted component is not configurable by the user enables the user to have confidence in the inherent integrity of the trusted component, and therefore a high degree of “trust” in the operation and presence of the trusted component on the computer platform.
Referring to
In the trusted component space, are resident the trusted component itself, displays generated by the trusted component on monitor 100; and confirmation key 104, inputting a confirmation signal via confirmation key interface 306.
In the best mode for carrying out the invention, the computing entity has a plurality of modes of operation, referred to herein as operating states. Different ones of the plurality of operating states allow the computing entity to perform different sets of tasks and functionality. In some of the individual states, complex operations can be carried out with a large number of degrees of freedom, and complexity. In other operating states, there are more restrictions on the behavior of the computing entity.
The level of ‘trust’ which can be placed on the computing entity when operating in each of the plurality of different states is related to:
The trust placed in the computer entity is composed of two separate parts;
As described herein, levels or degrees of trust placed in the computer entity are determined relative to the level of trust which is placed in the trusted component. Although the amount of trust in a computer entity is related to many factors, a key factor in measuring that trust is the type, extent and regularity of the integrity metric checks which the trusted component itself carries out on the computer entity.
The trusted component is implicitly trusted. The trusted component is embedded as the root of any trust which is placed in the computing platform and the computing platform as a whole cannot be any more trusted than the amount of trust placed in the trusted component.
By virtue of the trusted component monitoring operations of the computer platform, the trust placed in the trusted component can be extended to various parts of the computer platform, with the level and extent of trust placed in individual areas of the computer platform, being dependent upon the level and reliability with which the trusted component can monitor that particular area of the computing platform.
Since the trusted areas of the computing platform are dependent upon the frequency, extent, and thoroughness with which the trusted component applies a set of integrity metric measurements to the computer platform, if the trusted component does not comprehensively measure all measurable aspects of the operation of the computing platform at all times, then the level of trust placed in individual parts of the computer platform will form a subset of the overall trust placed in the trusted component itself. If the computing entity supports only a limited number of integrity metrics, a user of the equipment, including a third party computing entity, is restricted in its ability to reason about the level of trust which can be placed in the computing entity.
Although various islands of the computer platform are trusted at various levels, depending upon the integrity metrics which are applied by the trusted component for measuring those areas of the computer platform, the level of trust placed in the computer platform as a whole is not as high as that which is inherent in the trusted component. That is to say, whilst the trusted component space 502 is trusted at a highest level, the user space 501 may comprise several regions of various levels of trust. For example, applications programs 504 may be relatively untrusted. Where a user wishes to use the computer entity for an operation which involves a particularly high degree of confidentiality or secrecy, for example working on a new business proposal, setting pay scales for employees or equally sensitive operations, then the human user may become worried about entering such details onto the computer platform because of the risk that the confidentiality or secrecy of the information will become compromised. The confidential information must be stored in the computing entity, and islands of high trust may not extend over the whole computing platform uniformly and with the same degree of trust. For example, it may be easier for an intruder to access particular areas or files on the computing platform compared with other areas or files.
Additionally, a user may wish to instruct the trusted component to perform certain functions. This poses the problem that all commands to instruct the trusted component must pass through the computer platform, which is at a lower level of trust than the trusted component itself. There is therefore a risk of the commands to the trusted component becoming compromised during their passage and processing through the computer platform.
According to specific implementations of the present invention, the computer entity may enter a plurality of different states, each state having a corresponding respective level of trust, wherein the individual levels of trust corresponding to different states may be different from each other.
Referring to
In this specification, by the term “state” when used in relation to a computing entity, it is meant a mode of operation of the computing entity in which a plurality of functions provided by the computing platform may be carried out. For example in a first state, the computing entity may operate under control of a first operating system, and have access to a first set of application programs, a first set of files, and a first set of communications capabilities, for example modems, disk drives, local area network cards, e.g. Ethernet cards. In a second state, the computing platform may have access to a second operating system, a second set of applications, a second set of data files and a second set of input/output resources. Similarly, for successive third, fourth states up to a total number of states into which the computing entity can be set. There can be overlap between the facilities available between two different states. For example, a first and second state may use a same operating system, whereas a third state may use a different operating system.
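The notion of a state as a bundle of available resources can be made concrete with a small sketch. The concrete resource names below are invented for the example and not drawn from the disclosure.

```python
# Hedged illustration of "states" as sets of available resources: each
# state pairs an operating system with sets of applications and I/O
# resources. Two states may overlap in the facilities they provide.

from dataclasses import dataclass

@dataclass(frozen=True)
class PlatformState:
    name: str
    operating_system: str
    applications: frozenset
    io_resources: frozenset

# Hypothetical configurations: first and second states share an OS,
# while the third state uses a different one.
trusted_state = PlatformState("trusted", "os_a", frozenset({"admin_tool"}), frozenset())
second_state = PlatformState("second", "os_a", frozenset({"browser", "mail"}), frozenset({"modem", "ethernet"}))
third_state = PlatformState("third", "os_b", frozenset({"games"}), frozenset({"modem"}))
```

Modelling states as immutable records reflects the idea that a state's resource set is pre-configured rather than assembled ad hoc at run time.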
Referring to
Trusted state 700 is distinguished from the second and third states 701, 702 by virtue of the way in which the trusted state can be accessed. In one option, trusted state 700 can only be accessed by reference to the trusted component 202. However, in the preferred best mode implementation, entry into the trusted state need not be controlled by the trusted component. To access the trusted state, a user may turn on the computing entity, that is to say turn on the power supply to the computing entity, in turn on process 703. Upon turning on the power supply, the computing entity boots up in process 704 from a routine contained in the BIOS file 301. The computing entity may enter either the trusted state 700, the second state 701, or the third state 702, depending upon how the BIOS file is configured. In the best mode herein, a user of the computer entity has the option, provided as a menu display option on monitor 100 during boot up of the computer entity, or as a selectable option presented as a screen icon when in any state, to enter either the trusted state 700 or one of the other states 701, 702 by selection. For example, on turn on the BIOS may be configured to boot up into the second state 701 by default. Once in the second state, entry into a different state, such as trusted state 700, may require a key input from a user, which may involve entry of a password, or confirmation of the user's identity by the user inserting their smart card into smart card reader 103.
Once the computing entity has entered a state other than the trusted state, e.g. the second state 701 or third state 702, then from those states the user may be able to navigate to a different state. For example the user may be able to navigate from the second state 701 to the third state 702 by normal key stroke entry operations on the keyboard, by viewing the monitor and using a pointing device signal input, usually with reference back to the BIOS. This is shown schematically as select new state process 705.
In order to enter the trusted state 700, the computer entity must be either booted up for the first time after turn on process 704, or re-booted via the BIOS in re-boot process 706. Re-boot process 706 is very similar to boot up process 704 except that it can be entered without having to turn the power of the computing entity off and then on again. To leave the trusted state 700, the computing entity must again refer to the BIOS 704 which involves automatic monitoring by the trusted component 202 in monitor process 706. Similarly, re-booting via the BIOS in process 705 involves automatic monitoring by the trusted component in monitoring process 706.
The trusted state 700 can only be left either by turning the power off in power-down process 707, or by re-booting the computing entity in re-boot process 705. Re-booting via the BIOS in re-boot process 705 involves automatic monitoring by the trusted component in monitoring process 706. Once the trusted state is left, it is not possible to re-enter it without either re-booting the computing entity in re-boot process 705, or booting up the computing entity after a power down in process 704, both of which involve automatic monitoring by the trusted component in monitoring process 706.
Referring to
Referring to
In step 900, the computer enters a boot-up routine, either as a result of the power supply to the computing entity being turned on, or as a result of a user inputting a reset instruction signal, for example by clicking a pointer over a reset icon displayed on the graphical user interface, giving rise to a reset signal. The reset signal is received by the trusted component, which monitors internal bus 304. The BIOS component 301 initiates a boot-up process of the computer platform in step 901. Trusted component 202 proceeds to make a plurality of integrity checks on the computer platform, and in particular checks the BIOS component 301, in order to check the status of the computer platform. Integrity checks are made by reading a digest of the BIOS component. The trusted component 202 acts to monitor the status of the BIOS, and can report to third party entities on the status of the BIOS, thereby enabling third party entities to determine a level of trust which they may allocate to the computing entity.
There are several ways to implement integrity metric measurement of the BIOS. In each case, the trusted component is able to obtain a digest of a BIOS file very early on in the boot up process of the computer platform. The following are examples:
In one embodiment, trusted component 202 may interrogate individual components of the computer platform, in particular hard disk drive 203, microprocessor 201, and RAM, to obtain data signals directly from those individual components which describe their status and condition. Trusted component 202 may compare the metric signals received from the plurality of components of the computer entity with the pre-recorded metric data stored in a memory area reserved for access by the trusted component. Provided that the signals received from the components of the computer platform match the metric data stored within the memory, the trusted component 202 provides an output signal confirming that the computer platform is operating correctly. Third parties, for example other computing entities communicating with the computing entity, may take the output signal as confirmation that the computing entity is operating correctly, that is to say is trusted.
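The comparison described above can be sketched as follows. This is a minimal illustration only: the component names, the use of SHA-256, and the dictionary of pre-recorded metrics are all assumptions for the sketch, not details taken from the specification.

```python
import hashlib

# Hypothetical sketch: the trusted component holds pre-recorded digests for
# each monitored platform component, and compares freshly measured digests
# against them before reporting the platform as trusted.

EXPECTED_DIGESTS = {
    "bios": hashlib.sha256(b"known-good BIOS image").hexdigest(),
    "os_loader": hashlib.sha256(b"known-good loader image").hexdigest(),
}

def measure(component_image: bytes) -> str:
    """Compute an integrity metric (digest) over a component image."""
    return hashlib.sha256(component_image).hexdigest()

def platform_is_trusted(images: dict) -> bool:
    """Report True only if every measured digest matches its stored metric."""
    return all(
        measure(image) == EXPECTED_DIGESTS.get(name)
        for name, image in images.items()
    )

# A platform whose BIOS image has been altered fails the check:
ok = platform_is_trusted({"bios": b"known-good BIOS image",
                          "os_loader": b"known-good loader image"})
tampered = platform_is_trusted({"bios": b"modified BIOS image",
                                "os_loader": b"known-good loader image"})
```

The output signal mentioned in the text corresponds here to the boolean result that a third party could interpret as "operating correctly".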
In step 903, the BIOS generates a menu display on monitor 100 offering a user a choice of state options, including trusted state 700. The user enters details of which state is to be entered by key entry to the graphical user interface or data entry using a pointing device, e.g. mouse 105. The BIOS receives key inputs from the user which indicate the state into which to boot, in step 904. In addition to the user key inputs selecting the state, the trusted component may also require a separate input from confirmation key 104, requiring physical activation by a human user, which bypasses internal bus 304 of the computer entity and accesses trusted component 202 directly. Once the BIOS 301 has received the necessary key inputs indicating which state is required, microprocessor 201 processes the set of configuration instructions stored in BIOS 301, which determine which one of a set of state options stored in the BIOS file the computer platform will configure itself into. Each of the plurality of states into which the computer platform may boot may be stored as a separate boot option within BIOS 301, with selection of the boot option being controlled in response to keystroke inputs or other graphical user inputs made by a user of the computing entity. Once the correct routine of BIOS file 301 has been selected by the user, then in step 906 the BIOS file releases control to an operating system load program stored in a memory area of the computer platform, which activates boot up of the computer platform into an operating system of the selected state. The operating system load program contains a plurality of start-up routines for initiating a state, which include routines for starting up a particular operating system corresponding to a selected state. The operating system load program boots up the computer platform into the selected state.
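The state-selection logic above, with the trusted state gated by the physical confirmation key, can be sketched as follows. The option names, routine names, and the fallback behaviour are hypothetical choices for the sketch; the specification does not prescribe them.

```python
# Illustrative sketch of selecting one of several boot options held by the
# BIOS, gated by the user's selection and a separate physical confirmation.

BOOT_OPTIONS = {
    "trusted": "load_trusted_os",
    "second": "load_standard_os",
    "third": "load_alternate_os",
}
DEFAULT_STATE = "second"  # e.g. BIOS configured to default into the second state

def select_boot_routine(user_choice=None, confirmation_key_pressed=False):
    """Return the load routine for the chosen state.

    In this sketch, entering the trusted state additionally requires the
    physical confirmation key, which bypasses the normal input path.
    """
    state = user_choice or DEFAULT_STATE
    if state == "trusted" and not confirmation_key_pressed:
        state = DEFAULT_STATE  # refuse trusted entry without confirmation
    return BOOT_OPTIONS[state]

routine = select_boot_routine("trusted", confirmation_key_pressed=True)
fallback = select_boot_routine("trusted", confirmation_key_pressed=False)
```

Falling back to the default state on a missing confirmation is one possible policy; refusing to boot at all would be another.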
The operating system measures the metrics of the load program which is used to install the operating system, in step 907. Once in the selected state, trusted component 202 continues, in step 908, to perform further integrity check measurements on an ongoing basis, monitoring the selected state continuously and looking for discrepancies, faults, and variations from the normal expected operation of the computer platform within that state. Such integrity measurements are made in one of two ways: either trusted component 202 sends out interrogation signals to individual components of the computer platform and receives response signals from them, which it compares with a predetermined preloaded set of expected response signals corresponding to those particular states stored within the memory of the trusted component; or trusted component 202 compares the integrity metrics measured from the computer platform in the selected state with the set of integrity metrics initially measured as soon as the computer platform entered the selected state, so that on an ongoing basis any changes to the integrity metrics from those initially recorded can be detected.
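The second monitoring strategy described above, recording a baseline at state entry and flagging later drift, can be sketched as follows. The component names and the use of SHA-256 digests are assumptions for illustration.

```python
import hashlib

# Sketch: record integrity metrics as soon as the state is entered, then
# poll components on an ongoing basis and report any drift from baseline.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class StateMonitor:
    def __init__(self, components: dict):
        # Baseline metrics measured on entry into the selected state.
        self.baseline = {name: digest(img) for name, img in components.items()}

    def check(self, components: dict) -> list:
        """Return the names of components whose metrics have changed."""
        return [name for name, img in components.items()
                if digest(img) != self.baseline.get(name)]

initial = {"kernel": b"kernel-v1", "config": b"cfg-v1"}
monitor = StateMonitor(initial)
drifted = monitor.check({"kernel": b"kernel-v1", "config": b"cfg-TAMPERED"})
```

An unchanged platform yields an empty drift list; any discrepancy names the affected component.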
During the boot-up procedure, although the trusted component monitors the boot-up process carried out by the BIOS component, it does not necessarily control that process. The trusted component acquires a value of the digest of the BIOS component 301 at an early stage in the boot-up procedure. In some alternative embodiments, this may involve the trusted component seizing control of the computer platform before boot up by the BIOS component commences. However, in alternative variations of the best mode implementation described herein, it is not necessary for the trusted component to obtain control of the boot-up process; the trusted component does, however, monitor the computer platform, and in particular the BIOS component 301. By monitoring the computer platform, the trusted component stores data which describes which BIOS options have been used to boot up the computer, and which operating system has been selected. The trusted component also monitors the loading program used to install the operating system.
There will now be described an example of operation of a computer entity within a trusted state in a first specific mode of operation according to the present invention.
Referring to
Referring to
Referring to
At the end of the second session, the session is closed after the work produced in the second session has been saved, and the trusted state is exited via a power-down process or re-boot process 705, 707. All memory of the trusted state and second session, other than that stored as the session data 1107 and stored output user data 1106, is lost from the computer platform.
It will be appreciated that the above example is a specific example of using a computer in successive first and second sessions on different days. In between those sessions, the computing entity may be used in a plurality of different states, for different purposes and different operations, with varying degrees of trust. In operating states which have a lower level of trust, for example the second and third states (being 'untrusted' states), the computer entity will not lose memory of its data configuration between transitions from state to state. According to the above method of operation, the trusted state 700 may be activated any number of times, and any number of sessions carried out. However, once the trusted state is exited, it has no memory of previous sessions. Any configuration of the trusted state must be by new input of data 1003, 1102, or by input of previously stored session data or user data 1007, 1008, 1106, 1107.
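The "amnesia" of the trusted state, where only explicitly saved session data and output user data survive an exit, can be sketched as follows. The class and method names are invented for illustration and do not appear in the specification.

```python
# Sketch: everything in a trusted-state session lives in volatile memory and
# is discarded on exit; only data the user explicitly saves survives.

class TrustedSession:
    def __init__(self, persistent_store: dict, session_data=None):
        self.store = persistent_store           # survives across sessions
        self.memory = dict(session_data or {})  # volatile session state

    def work(self, key, value):
        self.memory[key] = value

    def save(self, key):
        """Explicitly persist one item of session output."""
        self.store[key] = self.memory[key]

    def exit(self):
        """Leaving the trusted state wipes all volatile session memory."""
        self.memory.clear()

store = {}
first = TrustedSession(store)
first.work("draft", "day-one text")
first.save("draft")
first.work("scratch", "unsaved notes")
first.exit()

# A later session sees only what was explicitly saved, nothing else:
second = TrustedSession(store, session_data=store)
```

The saved dictionary plays the role of the stored session data 1107 and output user data 1106; the cleared volatile memory models the loss of all other state.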
In the above described specific implementations, specific methods, specific embodiments and modes of operation according to the present invention, a trusted state comprises a computer platform running a set of processes all of which are in a known state. Processes may be continuously monitored throughout a session operating in the trusted state, by a trusted component 202.
Referring to
Additionally, or optionally, the user may be required to insert a smart card into smart card reader 103 in step 1203, following which the trusted component verifies the identity of the user by reading data from the smart card via smart card interface 305. Additionally, the user may be required to input physical confirmation of his or her presence by activation of confirmation key 104 providing direct input into trusted component 202 as described with reference to
Once the security checks, including the password, verification by smart card and/or activation of the confirmation key, are accepted by the trusted component, the configuration menu is displayed on the graphical user interface under control of trusted component 202 in step 1205. In step 1206, the user may reconfigure the trusted component using the menu. Depending upon the level of security applied, which is an implementation specific detail of the trusted component configuration menu, the user may need to enter further passwords and make further confirmation key activations when entering data into the menu itself. In step 1207, the user exits the trusted component reconfiguration menu having reconfigured the trusted component.
In the trusted component configuration menu, a user may reconfigure operation of the trusted component. For example, a user may change the integrity metrics used to monitor the computer platform.
Storing predetermined digest data corresponding to a plurality of integrity metrics of a state inside the trusted component's own memory provides the trusted component with data which it may compare with digest data of the state into which the computer platform is booted, allowing the trusted component to check that the computer platform has not been booted into an unauthorized state.
The trusted component primarily monitors boot up of the computer platform. The trusted component does not necessarily take control of the computer platform if the computer platform boots into an unauthorized state, although optionally, software may be provided within the trusted component which enables the trusted component to take control of the computer platform if the computer platform boots into an unauthorized, or an unrecognized state.
When in the trusted state, a user may load new applications to use in that trusted state, provided the user can authenticate those applications for use there. This may involve the user entering signature data of the required application into the trusted component, to allow the trusted component to verify the application by means of its signature when loading it into the trusted state. The trusted component checks that the signature of the application is the same as the signature which the user has loaded into the trusted component before actually loading the application. At the end of a session, the application is lost from the platform altogether. The session in the trusted state exists only in temporary memory, for example random access memory, which is reset when the trusted state is exited.
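The application-admission check above can be sketched as follows. For simplicity the "signature" is modelled here as a plain digest; a real system would more likely use public-key signatures, and all names are hypothetical.

```python
import hashlib

# Sketch: the user pre-loads an application's signature into the trusted
# component, which then only admits applications whose computed signature
# matches, and forgets all admitted applications when the session ends.

class TrustedComponent:
    def __init__(self):
        self.authorized = set()   # signatures the user has entered
        self.loaded = []          # applications admitted this session

    def authorize(self, signature: str):
        self.authorized.add(signature)

    def load_application(self, app_image: bytes) -> bool:
        sig = hashlib.sha256(app_image).hexdigest()
        if sig in self.authorized:
            self.loaded.append(sig)
            return True
        return False

    def end_session(self):
        """Admitted applications exist only for the session; reset on exit."""
        self.loaded.clear()

tc = TrustedComponent()
tc.authorize(hashlib.sha256(b"approved app").hexdigest())
accepted = tc.load_application(b"approved app")
rejected = tc.load_application(b"unknown app")
tc.end_session()
```

Clearing the loaded list on session end mirrors the text's point that the application is lost from the platform altogether once the trusted state is exited.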
In the above described implementations, a version of a computer entity has been described in which a trusted component resides within a video path to a visual display unit. However, the invention is not dependent upon a trusted component being present in a video path to a visual display unit, and it will be understood by persons skilled in the art that the above best mode implementations are exemplary of a large class of implementations which can exist according to the invention.
In the above described best mode embodiment, methods of operation have been described wherein a user is presented with a set of options for selecting a state from a plurality of states, and a user input is required in order to enter a particular desired state. For example, a user input may be required to specify a particular type of operating system to be used, corresponding to a state of the computer platform. In a further mode of operation of the specific embodiment, data for selecting a predetermined operating state of the computer platform may be stored on a smart card, which is transportable from computer platform to computer platform, and which can be used to boot up a computer platform into a predetermined required state. The smart card responds to a set of state selection options presented by the BIOS, and selects one of a plurality of offered choices of state. The BIOS contains the state selections available, and a set of loading programs actually install the various operating systems which provide the states. In this mode of operation, rather than data describing a predetermined state being stored within the first memory area of the trusted component, and the BIOS system obtaining that data from the trusted component in order to boot the computer platform up into a required predetermined state, the information can be accessed from a smart card entered into the smart card reader.
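The smart card's role in this mode, answering the BIOS's state-selection menu with a pre-configured choice, can be sketched as follows. The class name and error policy are assumptions for the sketch.

```python
# Illustrative sketch of a smart card that answers the BIOS state-selection
# menu automatically, booting the platform into a pre-configured state.

class StateSelectionCard:
    def __init__(self, preferred_state: str):
        self.preferred_state = preferred_state

    def choose(self, offered_states: list) -> str:
        """Pick the pre-configured state if the BIOS offers it."""
        if self.preferred_state in offered_states:
            return self.preferred_state
        raise ValueError("pre-configured state not offered by this BIOS")

card = StateSelectionCard("trusted")
choice = card.choose(["trusted", "second", "third"])
```

Because the selection data travels with the card rather than residing in one machine's trusted component, the same card can drive any compatible computing entity into the same predetermined state.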
Using such a smart card pre-configured with data for selecting one or a plurality of predetermined states, a user carrying the smart card may activate any such computing entity having a trusted component and computer platform as described herein into a predetermined state as specified by the user, with a knowledge that the computing entity will retain no record of the state after a user session has taken place. Similarly as described with reference to
Number | Date | Country | Kind |
---|---|---|---|
99307380 | Sep 1999 | EP | regional |
This application is being filed as a continuation of co-pending PCT International Patent Application No. PCT/GB00/03613 (filed on 19 Sep. 2000), which PCT application claims priority to EP Application No. 99307380.8 (filed on 17 Sep. 1999). The subject matter of the present application may also be related to the following U.S. patent applications: “Performance of a Service on a Computing Platform,” Ser. No. 09/920,554, filed Aug. 1, 2001; “Secure E-Mail Handling Using a Compartmented Operating System,” Ser. No. 10/075,444, filed Feb. 15, 2002; “Electronic Communication,” Ser. No. 10/080,466, filed Feb. 22, 2002; “Demonstrating Integrity of a Compartment of a Compartmented Operating System,” Ser. No. 10/165,840, filed Jun. 7, 2002; “Multiple Trusted Computing Environments with Verifiable Environment Entities,” Ser. No. 10/175,183, filed Jun. 18, 2002; “Renting a Computing Environment on a Trusted Computing Platform,” Ser. No. 10/175,185, filed Jun. 18, 2002; “Interaction with Electronic Services and Markets,” Ser. No. 10/175,395, filed Jun. 18, 2002; “Multiple Trusted Computing Environments,” Ser. No. 10/175,542, filed Jun. 18, 2002; “Performing Secure and Insecure Computing Operations in a Compartmented Operating System,” Ser. No. 10/175,553, filed Jun. 18, 2002; “Privacy of Data on a Computer Platform,” Ser. No. 10/206,812, filed Jul. 26, 2002; “Trusted Operating System,” Ser. No. 10/240,137, filed Sep. 26, 2002; “Trusted Gateway System,” Ser. No. 10/240,139, filed Sep. 26, 2002; and “Apparatus and Method for Creating a Trusted Environment,” Ser. No. 10/303,690, filed Nov. 21, 2002.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/GB00/03613 | Sep 2000 | US |
Child | 09728827 | US |