This invention relates to saving and retrieving data, and particularly to saving and retrieving data based on public key encryption.
Protecting data on computers so that the data is only disclosed to appropriate parties has become an important concern for users. The types of data that users want to protect vary greatly, such as work-related or personal confidential documents, bank account numbers, credit card numbers, social security numbers, and so forth. Additionally, it is also important to some third parties to protect the data on the users' computers from improper use or access. For example, credit card issuers want credit card numbers to be protected so that they are not disclosed to malicious programs or parties hacking into the computer, music companies want songs to be protected so they cannot be copied, movie studios want movies to be protected so they cannot be copied, and so forth.
One solution to protect data on computers is to do away with general-purpose computing devices and use special-purpose tamper-resistant boxes for delivery, storage, and display of secure content. This solution, however, can be undesirable as it prevents users from expanding their computers (e.g., users cannot install additional software components and/or hardware components on such tamper-resistant boxes). Thus, it would be beneficial to provide a way to allow data to be protected on general-purpose computing devices.
Saving and retrieving data based on public key encryption is described herein.
In accordance with one or more aspects, data to be sealed and one or more conditions that are to be satisfied in order for the data to be unsealed are obtained. Both the data and the one or more conditions are encrypted, using public key encryption, to generate a ciphertext that includes both the encrypted data and the encrypted one or more conditions. One of the one or more conditions comprises a time constraint for when the data can be unsealed, and the data is not unsealed if the one or more conditions are not satisfied.
The same numbers are used throughout the document to reference like components and/or features.
A principal 102 refers to a component or module that requests access to protected data. This request may be a request to retrieve the protected data (e.g., a request for retrieval of a cryptographic key), or a request to perform an operation(s) using the protected data (e.g., the protected data could be a cryptographic key and the request could be a request to encrypt or decrypt particular data using the cryptographic key). The principal 102 can be implemented as a component or module in hardware, software, firmware, or a combination of hardware, software, and/or firmware.
A guard 104 refers to a component or module that controls access to the protected data. Guard 104 uses an access policy associated with the protected data, as well as other information (such as the identity of the principal requesting access to the protected content), to determine whether to allow the principal to access the protected data. If guard 104 determines that the requesting principal is permitted to access the protected data, then guard 104 responds to the request in an appropriate manner (e.g., if the request is a request for the protected data, then the protected data is returned to the principal; or, if the request is a request for particular data to be encrypted using the protected data, then guard 104 encrypts the particular data using the protected data and returns the ciphertext (the encrypted data) to the principal). It should be noted that guard 104 may restrict principals based on the nature of the request. For example, guard 104 may allow a particular principal to have particular data signed using the protected data but may not allow the protected data to be returned to the particular principal.
A guard 104 can also be characterized as a disclosure guard and/or a service guard. A service guard performs certain operations (e.g., encryption, decryption, digital signing, etc.) with the protected data (e.g., a cryptographic key) at the request of principals without disclosing the protected data. A disclosure guard, on the other hand, reveals the protected data to authorized requestors. It should be noted that a particular guard 104 can be both a disclosure guard and a service guard.
Resource 106 can be any type of data to which access is to be restricted. Examples of resources 106 include cryptographic keys, bank account numbers, credit card numbers, personal information such as social security numbers, passwords, and so forth. Resource 106 can also be virtually anything else in a computing device. For example, a resource 106 may also be physical memory (e.g., RAM or ROM), optical or magnetic disks or disk drives, video cards, sound cards, smart cards, and so forth. By way of another example, a resource 106 may also be operating system abstractions, such as processes, files, threads, semaphores, and so forth.
In the discussion herein, access control model 100 is described predominately with reference to being implemented on a single computing device. However, it is to be appreciated that different portions of the model can be implemented on different computing devices. For example, a principal 102 may be on one computing device while a guard 104 and resource 106 may be on another computing device.
The principals and guards on a computing device can be categorized into any number n of hierarchical layers ln.
In the example environment of
By way of example, assume that a program 120 desires to retrieve a root resource 128 that is guarded by guard 126. Program 120 acts as a principal requesting access to the root resource 128 from module 122, which acts as a guard of the resource. If module 122 has a copy of the resource 128 (e.g., previously obtained from guard 126 in response to a previous request for the resource by program 120 or some other program in layer l4, or when module 122 was initialized and loaded in the computing device), then module 122 checks whether program 120 is allowed to retrieve the resource. Module 122 then returns the resource to program 120 if program 120 is allowed to retrieve the resource.
However, if module 122 does not have a copy of the resource 128, then module 122 acts as a principal requesting access to the root resource from module 124, which acts as a guard of the resource. If module 124 has a copy of the resource 128 (e.g., previously obtained from guard 126 in response to a previous request for the resource by module 122 or some other module in layer l3, or when module 124 was initialized and loaded in the computing device), then module 124 checks whether module 122 is allowed to retrieve the resource. Module 124 then returns the resource to module 122 if module 122 is allowed to retrieve the resource. Module 122 then returns the resource to program 120 if program 120 is allowed to retrieve the resource.
However, if module 124 does not have a copy of the resource 128, then module 124 acts as a principal requesting access to the root resource from guard 126. Guard 126 checks whether module 124 is allowed to retrieve the resource, and returns the resource to module 124 if module 124 is allowed to retrieve the resource. Module 124 then returns the resource to module 122 if module 122 is allowed to retrieve the resource, and module 122 returns the resource to program 120 if program 120 is allowed to retrieve the resource.
In the discussion herein, multiple references are made to employing access control model 100 of
Another example of the use of access control model 100 is the authentication of a user(s) to a computer. Most modern computers have an access control system. A user logs on to the computer so that the computer knows who the user is. After logging on, the user runs programs that typically need to access system resources (e.g. read files, write to windows on the screen, etc.). Typically, the access control system of the computer is consulted (e.g., “can user x perform operation y on resource z?”). If the answer is “no” the program cannot access the resource.
Another example of the use of access control model 100 is the authentication of a user(s) to a remote service. Remote services such as web sites (e.g., on-line brokers or banks) can be thought of as having access control systems. The resources are people's bank accounts, their money and their stocks. After a user logs on to the web site, the access control system will determine if the user is authorized to perform the accesses requested by the user, such as a “read” access on the resource “bank account data” (to retrieve the latest bank statement), or a “transfer” access on the resource “$1000 in bank account 12345”.
Yet another example of the use of access control model 100 is restricting physical access to particular buildings or areas. For example, when a user arrives at work in the morning, the user shows his or her badge and requests the “open” operation on the resource “front door”. Some electronic system (a guard) determines, based on information stored on the badge, if the user is allowed to enter the building and unlocks the door accordingly.
A computing device enables authenticated operation of a program (software) if it is possible to let the program obtain protected access (from a disclosure guard or from a service guard) to at least one cryptographic resource. In certain embodiments, a computing device that enables authentication and isolation, as described below, enables authenticated operation.
A program C can be referred to as being isolated from another program D if two points are satisfied: (1) there is memory that can be accessed by program C but not by program D, and (2) program D cannot initiate execution of program C (except, possibly, at an entry point(s) determined by program C). A program is given by its transition rules (executable code) and by its initial state (entry point(s) or initial value of the instruction pointer IP). The first point guarantees integrity of the program code and the state information of program C, even in the presence of adversarial behavior by program D, since data can be stored in the memory that cannot be accessed by program D. This point also allows program C to protect confidential data (e.g., cryptographic keys) from observation by program D. The second point guarantees that D cannot subvert the behavior of C by choosing the entry point adversarially.
Additionally, it can be said that a program C can authenticate a program D if program C is able to identify both the transition rules (program code) and the initial state of program D. The computing device enables isolation for any program C from any other program D, with the exception of a single program Ej for each layer j<i, where i is the layer of program C. This protects programs from observation and interference by any program, except for the sequence E1, E2, . . . , Ei−1 of guards through which program C requests access to its resources. Furthermore, for any layer i, the computing device enables a program executing in layer i to authenticate at least some programs in layer i+1. This requirement allows a program to act as a guard for requests from principals in the next layer. These two observations give rise to an inductive argument that programs in any layer can act as guards for resources by requesting access to a resource from their predecessor, protecting their integrity and the resource through isolation and authenticating requests from principals in the next layer.
Isolation can be implemented by using physical memory protections. This approach is referred to as “isolation in space” or “space isolation”. For example, the ring and virtual memory protections found in many modern microprocessors are sufficient to implement isolation in space. An operating system kernel (layer i) running in privileged mode can set up page tables for applications (layer i+1), such that any application can only access those parts of physical memory that the operating system kernel chooses to map into the application's virtual address space. Furthermore, the kernel restricts applications' privileges so that they cannot change the memory mapping, and ensures that applications can initiate execution of kernel code only at a well defined entry point(s) (system calls).
Another approach to implementing isolation between two layers is to separate their execution in time. This approach is referred to as “isolation in time” or “time isolation”. A program in a first layer i executes to completion, makes certain resources unavailable, and then terminates. Subsequently, control is transferred to the next layer i+1.
Authentication occurs between subsequent layers (j=i+1). Program C authenticates the program (transition rules) and the initial state of the configuration of layer j. The program can be authenticated by letting program C inspect the program in layer j. That is, typically program C reads the memory that contains the program for layer j and computes a cryptographic digest over this memory range. It should be noted that the goal at this point is only to ascertain the identity of the code, not to evaluate statements made by other principals about the code. Thus, certificates are not necessary at this point.
The second task for program C is to identify the initial state of program D. In general, the problem of determining the initial state of a program at an arbitrary execution stage is very difficult. Thus, program C controls the initial state of program D. In practical terms, this means that program C can only ascertain the initial state σ of program D if program C initiates the execution of program D at σ.
In summary, in order to authenticate program D, program C inspects the memory contents it deems relevant (program and, possibly, data) and computes a cryptographic digest. After that, program C transfers execution to a well-defined entry point of program D.
In situations where the resources are cryptographic keys, authenticated operation allows each operating system and application program to have exclusive access to one or more secrets. The isolation discussed above protects each secret from attacks by adversarial code. The authentication of programs discussed above allows programs to be identified, such that each secret is disclosed only to the program that owns it.
Generally, given a request from a program (a principal 102 of
Additionally, in either case, guard 104 authenticates the caller (principal 102). Authenticating a principal 102 is also referred to herein by a function ID( ) which returns a digest of the calling program (the program calling a gating function of guard 104). The digest can be generated in any of a wide variety of conventional manners, such as using any one or more of a variety of cryptographic hash functions (also referred to as one-way hash functions), such as SHA1 (Secure Hash Algorithm 1), MD5 (Message Digest 5), MD2 (Message Digest 2), etc.; using a keyed MAC (Message Authentication Code); and so forth.
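By way of illustration only, such an ID( ) function might be sketched in Python as follows, modeling the calling program as the raw bytes of its code image. The name program_id and the byte-string modeling are assumptions made for this sketch, not part of the interface described herein.

    import hashlib

    def program_id(program_image: bytes) -> bytes:
        # Digest of the calling program's code. SHA1 is one of the hash
        # functions named above; MD5, MD2, or a keyed MAC could equally
        # stand in here.
        return hashlib.sha1(program_image).digest()

    caller_digest = program_id(b"...bytes of the calling program...")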
One class of gating functions described herein implements sealed storage. The purpose of sealed storage is to allow programs to store secrets, such that only a particular set of one or more programs (defined by the program that stores the secret) can retrieve the secrets. In one implementation, only the program that originally saves (seals) the secret can recover (unseal) the secret. Typically, the lifetime of these secrets will exceed the time of individual executions of the program. Secrets used during a single execution of a program can be saved (sealed), or alternatively isolation and a random number generator also allow a program to maintain secrets during a single execution. Sealed storage also allows a program to maintain secrets across different executions, which may not overlap in time. A layer li exposes sealed storage to the next layer li+1 by means of the following interface (e.g., using the “Seal” and “UnSeal” operations and/or PKSeal and PKUnseal operations).
The discussions herein regarding sealed storage refer to cryptographic keys being used to encrypt and decrypt data. These cryptographic keys are the keys associated with the guard that is guarding access to the resource (e.g., guard 104 of
The discussions herein also refer to identifiers of programs (e.g., an identifier of the program calling or invoking an operation, or an identifier of a target program that is allowed to access a resource). These identifiers are often referred to herein as digests. However, it is to be appreciated that digests are only one example of identifiers of programs. Other types of identifiers that are a measure or other representation of the program and that allow any changes to the program to be detected can be used. If any changes are made to the program (e.g., one or more instructions being changed by an adversary in an attempt to maliciously gain access to and make use of the protected data) then the identifier of the program should reflect that change (e.g., the identifier for the unchanged program will be different than the identifier for the changed program).
The Seal operation receives, as an input, data (e.g., a secret) to be sealed. The Seal operation also optionally receives, as an input, a condition that identifies when and/or to whom the secret may be revealed (unsealed). In one implementation, this condition is a digest of a target program that is allowed to retrieve (unseal) the data. Alternatively, programs that are to be allowed to retrieve (unseal) the data can be identified in other manners. For example, the programs may be identified by a public key that verifies one or more certificates, with each certificate being associated with one or more of the programs.
Alternatively, other conditions may be used in addition to, or in place of, an identifier of a target program. For example, the condition may include particular time constraints for when the data can be revealed (unsealed), such as particular times of the day or days of the week during which the secret can be revealed (unsealed). By way of another example, the condition may include an identifier of a password or other data that must be provided in order for the secret to be revealed (unsealed)—e.g., the secret can only be unsealed by programs having knowledge of the password.
By way of yet another example, the condition can be a logical formula (e.g., any statement written in first order logic, any statement written in predicate logic, etc.). The logical formula is evaluated (e.g., by the guard) and the secret is revealed (unsealed) only if the evaluation returns an indication of true.
In still another example, the condition can be an executable program in some language (e.g., Java, C#, JavaScript, VBScript, etc.). The program is executed (e.g., by the guard) and the secret is revealed (unsealed) only if the program returns some indication of “true” or “satisfied”.
In situations where the condition is the digest of the target program, rather than being supplied with the digest of the target program, the Seal operation may use the digest of the program that invokes the Seal operation (thereby implicitly inputting the digest of the target program). Additionally, digests of multiple target programs can be input to the Seal operation, thereby allowing multiple target programs to retrieve (unseal) the data.
The Seal operation encrypts its inputs (the data and the condition(s) allowing retrieval (unsealing) of the data) together with an identifier of the caller. The Seal operation returns the input data in an encrypted form (as ciphertext). The Seal operation also returns a value (e.g., a message authentication code (MAC) value) that can be used to verify the integrity of the sealed data. This returned data allows the stored data to be referenced in subsequent UnSeal operations, as discussed in more detail below.
Pseudo code for the Seal operation is illustrated in Table I. In the pseudo code of Table I, ID( ) refers to the ID( ) function discussed above, e refers to the value (e.g., a string or sequence of bits) that is returned to the caller, data refers to the data to be sealed, and [t1, . . . , tm] refers to the digests of one or more (m) target program(s) that are allowed to retrieve (unseal) the data (or alternatively one or more other conditions).
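For illustration, the Seal logic just described might be sketched in Python as follows. This is a sketch under stated assumptions, not the pseudo code of Table I: the names seal, store, and program_id are assumptions, store is a degenerate stand-in for the Store operation discussed below (which returns its input in encrypted, integrity-protected form), and the JSON record layout is an arbitrary choice.

    import hashlib
    import json

    def program_id(image: bytes) -> bytes:
        return hashlib.sha1(image).digest()   # the ID( ) function discussed above

    def store(b: bytes) -> bytes:
        # Degenerate stand-in: the real Store operation returns b in
        # encrypted, integrity-protected form (see the Table III discussion).
        return b

    def seal(data: bytes, targets: list[bytes], caller_image: bytes) -> bytes:
        # Bundle the data, the condition list [t1, . . . , tm] (here digests of
        # target programs), and the digest of the caller, then protect the
        # bundle with Store. The return value corresponds to e.
        record = {
            "data": data.hex(),
            "targets": [t.hex() for t in targets],
            "sealer": program_id(caller_image).hex(),
        }
        return store(json.dumps(record).encode())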
Initially, a secret to be sealed is received from the caller (act 202). The secret is encrypted so that the secret can only be retrieved by a particular target program(s) (act 204), or alternatively so that the secret can only be retrieved if one or more particular conditions are satisfied. Ciphertext including the encrypted secret is then returned to the caller (act 206). Additional information may also be returned to the caller (as part of the ciphertext or separate from the ciphertext), such as a digest of the caller and/or digest(s) of the target program(s).
The UnSeal operation receives, as an input, a bit string that was returned by the Seal operation when sealing data (e.g., a cryptographic key) that the calling program now desires to retrieve. The UnSeal operation obtains the condition(s) for revealing the data and checks whether those conditions are satisfied. For example, if the condition(s) included digest(s) of the one or more target program(s) that are allowed to retrieve (unseal) the data, then the UnSeal operation obtains those digest(s) and checks whether the calling program is one of the one or more target program(s). If the calling program is not one of the one or more target program(s) then the UnSeal operation fails and the requested data is not returned to the caller. However, if the calling program is one of the one or more target program(s), then the UnSeal operation succeeds and the requested data is returned to the calling program. The digest of the program that sealed the data is also optionally returned by the UnSeal operation.
Pseudo code for the UnSeal operation is illustrated in Table II. In the pseudo code of Table II, data refers to the data that is being requested (and that has been previously sealed), [t1, . . . , tm] refers to the digests of one or more (m) target program(s) that are allowed to retrieve (unseal) the data (or alternatively one or more other conditions), e refers to the input to the UnSeal operation (typically previously output by a Seal operation), and d refers to the digest of the program that sealed the data.
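A corresponding sketch of the UnSeal logic, again illustrative only, assuming the record layout of the Seal sketch above; retrieve is a degenerate stand-in for the Retrieve operation discussed below, and all names are assumptions.

    import hashlib
    import json

    def program_id(image: bytes) -> bytes:
        return hashlib.sha1(image).digest()

    def retrieve(c: bytes) -> bytes:
        # Degenerate stand-in: the real Retrieve operation decrypts c and
        # verifies its MAC before returning the plaintext (see Table IV).
        return c

    def unseal(e: bytes, caller_image: bytes) -> tuple[bytes, bytes]:
        record = json.loads(retrieve(e))
        if program_id(caller_image).hex() not in record["targets"]:
            raise PermissionError("UnSeal fails: caller is not a permitted target")
        # Return the data together with d, the digest of the sealing program.
        return bytes.fromhex(record["data"]), bytes.fromhex(record["sealer"])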
Initially, ciphertext with encrypted data that the caller desires to retrieve is received (act 222). A check is made as to whether the caller is allowed to retrieve the data (act 224), and processing proceeds based on whether the caller is allowed to retrieve the data (act 226). If the caller is allowed to retrieve the data, then the data (decrypted) is returned to the caller (act 228). If the caller is not allowed to retrieve the data, then the process fails (act 230) and the data is not returned to the caller.
Sealed storage can be implemented in different manners. In one implementation, sealed storage is implemented using physically protected non-volatile memory. In this implementation, the computing device associates different guards with different portions of the protected non-volatile memory and allows each guard to access only those portions which are associated with that guard. In this implementation, the Store and Retrieve operations referenced in the Seal and UnSeal operations are invoked to have the computing device store and retrieve, respectively, the data in the protected non-volatile memory associated with the guard.
By way of example, a storage device (such as a hard disk drive) can implement a guard. Rather than simply executing read and write commands to the storage device unconditionally, the storage device identifies the principal attempting to access the storage device (e.g., based on a digest of the principal) and allows only a particular principal(s) to access the storage device. Alternatively, different principals may be restricted to accessing only particular portions of the storage device (e.g., particular sectors or address ranges).
In another implementation, sealed storage is implemented using cryptography. A description of one exemplary implementation of sealed storage using cryptography follows.
When using cryptography to implement sealed storage, the resource is a key K rather than physically protected memory. The Store operation does not physically store its inputs. Rather, the Store operation produces a cryptographically protected output c, which is the inputs of the Store operation in an encrypted and integrity-protected form. The encryption results from applying a symmetric cipher to the input(s); the integrity protection results from applying a message authentication code (MAC) to the input(s) (either before or after the input(s) is encrypted).
Pseudo code for the Store operation is illustrated in Table III. In the pseudo code of Table III, b refers to the bit string input to the Store operation, c refers to the bit string output by the Store operation, K1 refers to a first part of the key K, and K2 refers to a second part of the key K. The key K is a symmetric key of the guard implementing the Seal and Store operations.
Thus, as can be seen in Table III, a value (m) is generated by applying a MAC to the bit string input to the Store operation. The MAC is keyed to a portion (K1) of the key K. The bit string input to the store operation is also encrypted using a second portion (K2) of the key K. The values generated by applying the MAC to the input bit string and by encrypting the input bit string are then returned to the caller of the Store operation.
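A self-contained Python sketch of this Store logic follows (MAC-then-encrypt ordering). It is illustrative only: the hash-based counter-mode keystream is merely a stand-in for a real symmetric cipher such as triple-DES or AES, and because the key is fixed and no nonce is used, it is not suitable for actual use.

    import hashlib
    import hmac

    # Illustrative 64-byte guard key K, partitioned into a MAC key K1 and a
    # cipher key K2 (partitioning options are discussed below).
    K = hashlib.sha512(b"guard symmetric key K").digest()
    K1, K2 = K[:32], K[32:]

    def _keystream(key: bytes, n: int) -> bytes:
        # Hash-based counter-mode keystream; stand-in for a real cipher.
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def store(b: bytes) -> bytes:
        m = hmac.new(K1, b, hashlib.sha256).digest()                  # m = MAC(K1, b)
        d = bytes(x ^ y for x, y in zip(b, _keystream(K2, len(b))))   # Encrypt(K2, b)
        return m + d                                                  # c = (m, ciphertext)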
The key K is partitioned into two independent keys K1 and K2 in order to avoid using the same key for the MAC and the cipher. This partitioning can be performed in any of a variety of manners. The partitions may use different bits of the key K or alternatively may use one or more of the same bits. For example, assuming that the key K is 1024 bits, then the low 512 bits may be used as key K1 and the high 512 bits may be used as key K2, the even numbered bits (bits 0, 2, 4, 6, 8, 10, . . . , 1022) may be used as key K1 and the odd numbered bits (bits 1, 3, 5, 7, 9, 11, . . . , 1023) may be used as key K2, the low 650 bits may be used as key K1 and the high 650 bits may be used as key K2 (resulting in some bits being used for both K1 and K2), and so forth. Alternatively, the same key K may be used for both the MAC and the cipher.
The pseudo code illustrated in Table III implements the Store operation by computing a MAC over the data, encrypting the data, and outputting both the MAC and the ciphertext. Alternatively, the Store operation may be implemented in different manners. For example, the Store operation may encrypt the data first, then compute a MAC over the ciphertext and output both the ciphertext and the MAC. By way of another example, the Store operation may compute a MAC over the data, then encrypt both the data and the MAC, and output the ciphertext.
The encryption performed by the cipher of the Store operation can be performed using any of a variety of symmetric encryption algorithms. Generally, symmetric encryption algorithms use the same key for both encryption and decryption. Examples of such algorithms include triple-DES (Data Encryption Standard), AES (Advanced Encryption Standard), and so forth.
Similarly, the MAC can be any of a variety of message authentication codes, such as the MAC described in M. Bellare, R. Canetti, and H. Krawczyk, “Keying hash functions for message authentication,” in Advances in Cryptology—Crypto'96, number 1109 in Lecture Notes in CS, 1996. Alternatively, integrity can be protected by means of a public key digital signature in place of a MAC.
Initially, data to be stored is received (act 252). A symmetric cipher is applied to the data (act 254) and a message authentication code (MAC) is applied to the data (act 256). The encrypted data generated in act 254 and the MAC value generated in act 256 are then returned to the caller (act 258).
The Retrieve operation receives an input bit string that includes a MAC value and ciphertext. The ciphertext is decrypted to generate plaintext and a MAC value is generated for the plaintext. If the MAC value generated for the plaintext is the same as the MAC value received as part of the input bit string, then the plaintext is returned to the caller. However, if the MAC value generated for the plaintext is not the same as the MAC value received as part of the input bit string, then the Retrieve operation fails and the plaintext is not returned to the caller. It is to be appreciated that the specific manner in which the Retrieve operation is implemented to obtain the MAC and the ciphertext from the input bit string is dependent on the manner in which the Store operation is implemented.
Pseudo code for the Retrieve operation is illustrated in Table IV. In the pseudo code of Table IV, c refers to the bit string input to the Retrieve operation, b refers to the bit string output by the Retrieve operation, m refers to the MAC value portion of the bit string input to the Retrieve operation, d refers to the ciphertext portion of the bit string input to the Retrieve operation, K1 refers to a first part of the key K, and K2 refers to a second part of the key K. The K1 and K2 keys are the same portions of the key K as discussed above with respect to the Store operation.
Thus, as can be seen in Table IV, a value (b) is generated by decrypting the bit string input to the Retrieve operation. A MAC value is then generated for the value (b). If the MAC value generated by the Retrieve operation is the same as the MAC value that is received as part of the bit string input to the Retrieve operation then the value (b) is returned to the caller of the Retrieve operation, otherwise the Retrieve operation fails.
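The matching Retrieve sketch, under the same assumptions as the Store sketch above (the illustrative key derivation and keystream are repeated so the sketch is self-contained):

    import hashlib
    import hmac

    K = hashlib.sha512(b"guard symmetric key K").digest()   # same illustrative key K
    K1, K2 = K[:32], K[32:]

    def _keystream(key: bytes, n: int) -> bytes:
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def retrieve(c: bytes) -> bytes:
        m, d = c[:32], c[32:]   # split the input into MAC value m and ciphertext d
        b = bytes(x ^ y for x, y in zip(d, _keystream(K2, len(d))))   # Decrypt(K2, d)
        if not hmac.compare_digest(m, hmac.new(K1, b, hashlib.sha256).digest()):
            raise ValueError("Retrieve fails: MAC values do not match")
        return b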
The pseudo code of Table IV is based on the implementation of the Store operation where the MAC is computed over the data, the data is encrypted, and the MAC and ciphertext together are output (and serve as the input bit string to the Retrieve operation). If the Store operation were implemented to encrypt the data first, then compute a MAC over the ciphertext and output both the ciphertext and the MAC, then the Retrieve operation would be implemented to compute the MAC of the ciphertext and compare it to the MAC value received as part of the input bit string, then decrypt the ciphertext and return the decrypted data if the MAC values match. If the Store operation were implemented to compute a MAC over the data then encrypt both the data and the MAC, then the Retrieve operation would be implemented to decrypt the input bit string, then compute a MAC over the data in the input bit string and compare the computed MAC to a MAC value in the decrypted string, and return the data if the MAC values match.
Analogous to the discussion above regarding the Store operation, any of a variety of decryption algorithms can be used by the Retrieve operation. However, the decryption algorithm should correspond to the encryption algorithm so that the encrypted data can be decrypted. Similarly, any of a variety of message authentication codes can be used as the MAC, but the message authentication code used should be the same as the message authentication code used by the Store operation.
Initially, a ciphertext and MAC value are received (act 272). The ciphertext is decrypted to generate plaintext data (act 274). A message authentication code (MAC) is applied to the plaintext data to generate a MAC value (act 276) and a check is made as to whether the MAC value generated in act 276 is equal to the MAC value received in act 272 (act 278). Processing then proceeds based on whether the generated MAC value is equal to the received MAC value (act 280). If the generated MAC value is equal to the received MAC value, then the plaintext data is returned to the caller (act 282). However, if the generated MAC value is not equal to the received MAC value, then the process fails (act 284) and the plaintext data is not returned to the caller.
Thus, the cryptography approach to sealed storage substantially guarantees that any corruption of the value c (the output of the Store operation) can be detected, and that the value b (the input to the Store operation) cannot be retrieved without access to the key K2 (the key used by the cipher to encrypt the value b).
Another class of gating functions implement remote authentication. The purpose of remote authentication is to allow programs to be authenticated even in the absence of a strong physical coupling to the authenticator (e.g., using servers or smart cards). In this situation, authentication is based on cryptography. That is, both entities go through a cryptographic authentication protocol. This involves the authenticated configuration having access to a secret, which, depending on the protocol, is typically a private key or a symmetric key. Additionally, the computing device can tie the use of these authentication secrets to the identity of the configuration (e.g., the processor and/or software) that requests their use. Thus, the authenticator can establish the identity of the computing device, as well as the software executing on it.
Two operations, the Quote operation and the PKUnseal operation, are the respective gating functions for public key signing and public key decryption. The guard implementing these gating functions has access to a signing key Ks and a decryption key Kd. Both the signing key Ks and the decryption key Kd are also referred to as the private key of a public/private key pair. This public/private key pair is a key pair of the guard implementing the Quote and PKUnseal operations.
The Quote operation returns a public key signature over a combination of (e.g., the concatenation of) the input to the Quote operation and a condition that identifies when and/or to whom the secret may be revealed. Analogous to the Seal and UnSeal operations discussed above, revealing of the secret can be tied to any of a variety of conditions. In one implementation, the condition is an identifier of (e.g., digest of) the calling program.
Inherent in the signature is the assertion that the operation was performed at the request of the identified calling program. The Quote operation works in conjunction with a Verify operation, which typically executes on a device other than the device on which the Quote operation executes (e.g., on a remote server device, on a smart card, etc.). The Verify operation performs a public key signature verification and retrieves and evaluates the identifier of the calling program (and/or other conditions for revealing the secret).
Pseudo code for the Quote operation is illustrated in Table V. In the pseudo code of Table V, ID( ) refers to the ID( ) function discussed above, a refers to the data input to the Quote operation, and Ks refers to a signing key.
Thus, as can be seen in Table V, the Quote operation obtains a digest of the calling program and receives an input value a. The Quote operation generates a digital signature (sn) of the input value a and the digest of the calling program using the signing key Ks. The input value a can be generated by the calling program, or alternatively may be a value that is received from another component or device (e.g., from the device that will be performing the Verify operation). The digital signature is generated using public key cryptography.
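For illustration, a Python sketch of the Quote logic follows, assuming the third-party cryptography package is available. Ed25519 is only a stand-in for whatever signature scheme the signing key Ks actually uses, and because Ed25519 offers no message recovery, this sketch carries d and a alongside the raw signature rather than embedding them in it; all names are assumptions.

    import hashlib
    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    Ks = Ed25519PrivateKey.generate()   # the guard's signing key (illustrative)
    Kv = Ks.public_key()                # the corresponding verification key

    def program_id(image: bytes) -> bytes:
        return hashlib.sha1(image).digest()

    def quote(a: bytes, caller_image: bytes) -> bytes:
        d = program_id(caller_image)
        sig = Ks.sign(d + a)            # signature over the concatenation of d and a
        return json.dumps({"d": d.hex(), "a": a.hex(), "sig": sig.hex()}).encode()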
Initially, input data is received from a caller (act 302). An identifier of the caller (and/or one or more other conditions for retrieving the input data) is obtained (act 304) and a digital signature over the combination of the input data and the identifier (and/or one or more other conditions) of the caller is generated (act 306). The generated digital signature is then returned to the caller (act 308).
The Verify operation performs a public key signature verification and retrieves and evaluates the identifier of the calling program. The Verify operation receives a digital signature that was generated by a Quote operation, typically from a device other than the device on which the Verify operation executes (e.g., on a remote server device, on a smart card, etc.). The Verify operation extracts the digest of the program (e.g., an application program, operating system, firmware program, etc.) that called the Quote operation from the received digital signature, and evaluates that digest to determine how to proceed.
Pseudo code for the Verify operation is illustrated in Table VI. In the pseudo code of Table VI, d refers to the digest of the program that called the Quote operation, a refers to the value that was input to the Quote operation, and sn refers to the digital signature received by the Verify operation as an input.
Thus, as can be seen in Table VI, the Verify operation receives a digital signature and, using verification key Kv (which is the public key of the public/private key pair that includes the signing key Ks) extracts the digest d and the value a from the signature. The Verify program can then evaluate the digest d of the program that called the Quote operation. The manner in which the digest d is evaluated can vary. For example, the evaluation may involve comparing the digest d to a list of “approved” or “trusted” application programs.
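A matching Verify sketch, assuming the quote layout produced by the Quote sketch above and the same third-party cryptography package; Kv is the verification (public) key corresponding to Ks.

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify(sn: bytes, Kv: Ed25519PublicKey) -> tuple[bytes, bytes]:
        blob = json.loads(sn)
        d, a = bytes.fromhex(blob["d"]), bytes.fromhex(blob["a"])
        Kv.verify(bytes.fromhex(blob["sig"]), d + a)   # raises InvalidSignature on forgery
        # d can now be evaluated, e.g., compared against a list of trusted digests.
        return d, a

Paired with the Quote sketch, verify(quote(a, image), Kv) returns the pair (d, a) only when the signature checks out.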
Initially, a digital signature is received (act 322). Both the identifier of the caller (and/or one or more other conditions for retrieving the input value) that quoted an input value (using the Quote operation) and the input value itself are extracted from the digital signature (act 324). The identifier of the caller (and/or the one or more other extracted conditions) is then evaluated to determine how to proceed with the input value (act 326).
The PKUnseal operation is a version of public key decryption, which is gated on the identity of the caller (e.g., the digest of the calling program), or alternatively one or more other conditions. The result of the public key decryption of the input c to the PKUnseal operation is interpreted as a pair (d, s), where s is a secret and d identifies a configuration (e.g., digest of a calling program) to which s may be revealed. If the caller of PKUnseal is not d then the PKUnseal operation fails. The input c to the PKUnseal operation is generated by a second operation PKSeal, which can be executed on a device other than the device on which the PKUnseal operation executes (e.g., on a remote server device, on a smart card, etc.). The PKSeal operation performs a public key encryption of a pair (d, s). The PKUnseal and PKSeal operations can also be used to implement sealed storage.
Pseudo code for the PKUnseal operation is illustrated in Table VII. In the pseudo code of Table VII, ID( ) refers to the ID( ) function discussed above, c refers to the input to the PKUnseal operation, [d1, . . . , dm] refers to the digest(s) of the one or more calling programs to which s can be revealed (or alternatively one or more other conditions), s refers to the protected data, and Kd refers to a decryption key (a private key of a public/private key pair associated with the guard that is implementing the PKUnseal operation).
Thus, as can be seen in Table VII, the PKUnseal operation decrypts the input value c using public key decryption and the decryption key Kd. The decrypted input value includes the digest(s) [d1, . . . , dm] of one or more calling programs to which the protected data s is allowed to be revealed (or alternatively one or more other conditions identifying when and/or to whom the protected data s is allowed to be revealed). The PKUnseal operation also generates a digest of the calling program. If the digest of the calling program is equal to one of the digests [d1, . . . , dm], then the protected data s is returned to the calling program. However, if the digest of the calling program is not equal to one of the digests [d1, . . . , dm], then the protected data s is not returned to the calling program.
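An illustrative PKUnseal sketch, again assuming the third-party cryptography package; RSA-OAEP is a stand-in for the public key decryption under Kd, and the JSON pair layout (matching the PKSeal sketch below) is an arbitrary choice.

    import hashlib
    import json
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    Kd = rsa.generate_private_key(public_exponent=65537, key_size=2048)   # guard's decryption key

    def program_id(image: bytes) -> bytes:
        return hashlib.sha1(image).digest()

    def pk_unseal(c: bytes, caller_image: bytes) -> bytes:
        pair = json.loads(Kd.decrypt(c, OAEP))   # recover the pair ([d1, . . . , dm], s)
        if program_id(caller_image).hex() not in pair["digests"]:
            raise PermissionError("PKUnseal fails: caller is not among the digests")
        return bytes.fromhex(pair["s"])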
Initially, ciphertext with encrypted data that the caller desires to retrieve is received (act 342). A check is made as to whether the caller is allowed to retrieve the data (act 344), and processing proceeds based on whether the caller is allowed to retrieve the data (act 346). If the caller is allowed to retrieve the data, then the data (decrypted using public key decryption) is returned to the caller (act 348). If the caller is not allowed to retrieve the data, then the process fails (act 350) and the data is not returned to the caller.
The PKSeal operation is a version of public key encryption, which is gated on the identity of the caller (e.g., the digest of the calling program or one or more other programs). The PKSeal operation performs a public key encryption of a pair (d, s), where s is a secret and d identifies one or more configurations (e.g., digests of a calling program) to which s may be revealed.
Pseudo code for the PKSeal operation is illustrated in Table VIII. In the pseudo code of Table VIII, c refers to the output of the PKSeal operation, [d1, . . . , dm] refers to the digest(s) of the one or more calling programs to which s can be revealed, s refers to the protected data, and Ke refers to an encryption key.
Thus, as can be seen in Table VIII, the PKSeal operation receives as an input the protected data s and digests [d1, . . . , dm] of one or more programs to which the protected data s can be revealed. The pair [d1, . . . , dm], s is then encrypted using public key cryptography based on the encryption key Ke. The encryption key Ke is the public key of the guard that is intended to be able to decrypt the ciphertext. The ciphertext resulting from the public key encryption is then returned to the calling program.
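A matching PKSeal sketch under the same assumptions; here Ke is generated locally only so the sketch is self-contained, whereas in use it would be the public key of the guard intended to decrypt the ciphertext.

    import json
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    Ke = rsa.generate_private_key(public_exponent=65537, key_size=2048).public_key()

    def pk_seal(digests: list[bytes], s: bytes) -> bytes:
        pair = json.dumps({"digests": [d.hex() for d in digests],
                           "s": s.hex()}).encode()
        # Note: RSA-OAEP with a 2048-bit key carries fewer than ~190 bytes,
        # so a real implementation would encrypt large pairs hybrid-style.
        return Ke.encrypt(pair, OAEP)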
Initially, a secret to be sealed is received from the caller (act 362). The secret is encrypted using public key encryption so that the secret can only be retrieved by a particular target program(s) (act 364), or alternatively only if one or more other conditions are satisfied. Ciphertext including the encrypted secret is then returned to the caller (act 366). Additional information may also be returned to the caller (as part of the ciphertext or separate from the ciphertext), such as a digest of the caller and/or digest(s) of the target program(s).
The Quote and PKUnseal operations are intended to be used in connection with public key authentication protocols. Most public key authentication protocols can be straightforwardly adapted by replacing any call to public key decryption, public key encryption, signing, or signature verification with a call to PKUnseal, PKSeal, Quote, or Verify, respectively.
In some situations, it is important to be able to obtain a random number (e.g., as a basis for generating cryptographic keys). Random numbers can be obtained in a variety of different manners. In one implementation, the source of random numbers is a cryptographically strong random number generator implemented in the hardware of the computing device.
One alternative to the Seal operation discussed above is a GenSeal operation that combines the Seal operation with a generate random number operation. The GenSeal operation receives as input the digests [t1, . . . , tm] of target program(s) that should be able to retrieve the secret (and/or other conditions that must be satisfied in order for the secret to be retrieved). The GenSeal operation generates a random number and seals the newly generated random number so that it can be retrieved only by calling programs having one of the target digest(s) [t1, . . . , tm] (and/or the other conditions satisfied).
Pseudo code for the GenSeal operation is illustrated in Table IX. In the pseudo code of Table IX, ID( ) refers to the ID( ) function discussed above, c refers to the output of the GenSeal operation, s refers to the newly generated random number, [t1, . . . , tm] refer to one or more target program(s) that should be permitted to retrieve the value s (one of which may optionally be the program calling the GenSeal operation) or alternatively one or more other conditions, and GenRandom( ) refers to a function that generates a random number.
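For illustration, a GenSeal sketch follows; os.urandom stands in for the GenRandom( ) function, store is again a degenerate stand-in for the Store operation, and the record layout matches the Seal/UnSeal sketches above. All names are assumptions.

    import hashlib
    import json
    import os

    def program_id(image: bytes) -> bytes:
        return hashlib.sha1(image).digest()

    def store(b: bytes) -> bytes:
        return b   # degenerate stand-in for Store (see the Table III discussion)

    def gen_seal(targets: list[bytes], caller_image: bytes) -> bytes:
        s = os.urandom(32)   # GenRandom( ): the newly generated random value s
        record = {"data": s.hex(),
                  "targets": [t.hex() for t in targets],
                  "sealer": program_id(caller_image).hex()}
        return store(json.dumps(record).encode())   # c, later opened via UnSeal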
Initially, an input is received from a caller that identifies a target program(s) that should be able to retrieve a secret (act 382), or alternatively one or more other conditions that are to be satisfied in order for the secret to be retrieved. A secret is then generated (act 384), and the secret is encrypted so that the secret can only be retrieved by the identified target program(s) (act 386), or alternatively so that the secret can be retrieved only if the one or more other conditions are satisfied. Ciphertext including the encrypted secret is then returned to the caller (act 388). Additional information may also be returned to the caller (as part of the ciphertext or separate from the ciphertext), such as a digest of the caller and/or digest(s) of the target program(s).
The services provided by a disclosure guard can be used for general-purpose sealing services. For example, referring back to
In certain embodiments, a lower layer discloses one or more secrets to the next layer when that next layer is initialized (e.g., after reset or booting of the computing device, or upon beginning execution of a program). Following this gated disclosure, the lower layer is no longer used (until the next boot or reset). This use-model is also referred to as the disclosure guard model. By employing the disclosure guard model, accesses to the lower layer are reduced.
The gating functions discussed herein can be used with service guards and disclosure guards implemented using time isolation and space isolation. Four service model implementations for authenticated operation are discussed below: (1) service guard—space isolation; (2) disclosure guard—space isolation; (3) disclosure guard—time isolation; (4) service guard—time isolation. In the discussion of these service models, assume that a lower-level guard has disclosed one or more keys to the guard at the layer being considered. The manner in which these keys are obtained depends on the guard and isolation model of the layer beneath. Different layers on the same computing device can use different ones of these service models.
(1) Service guard—space isolation: The guard measures and saves the identity of the requesting program when it is initialized. The guard implements a protection system using processor services (e.g., of a CPU or some other security processor or co-processor), and a system-call interface exposing the authenticated operation primitive operations.
(2) Disclosure guard—space isolation: The guard obtains service requests on initialization in the form of cryptographic blobs. The blobs could be stored in memory, or alternatively obtained from external storage devices. The guard measures the identity of programs that it initializes, and discloses keys to programs according to the gating functions described above. Before relinquishing control to the next layer, the guard establishes mode-protection for itself and its secret resources.
(3) Disclosure guard—time isolation: The guard obtains service requests on initialization in the form of cryptographic blobs (groups of bits). The blobs could be stored in memory, or alternatively obtained from external storage devices. The guard measures the identity of programs that it initializes, and discloses keys to programs according to the gating functions described above. Before passing control to these programs, the guard deletes (or otherwise makes inaccessible) the keys used to implement the gating functions.
(4) Service guard—time isolation: In the service guard—time isolation model, the computing device securely preserves program state across the security reset. This model is similar to model (1) (service guard—space isolation); however, before passing control to the next layer, the service guard deletes its secret (rendering it non-functional until the next reboot). The next layer will now execute normally, until it needs to request a service from the guard. At that point, it stores the parameters of the request somewhere in memory where they will survive a reset and performs a reset. As the device reboots, the service guard obtains its secret, sees the request, executes it (using its key), destroys the key and any related information, and passes the result of the computation and control to the next layer (the layer that had originally requested the service).
In certain embodiments, if a computing device supports space isolation, then the security kernel should expose the primitives (operations) Seal, Unseal, GetRandom (to obtain a random number), and PKUnseal (or Quote). The security kernel can implement a disclosure guard or a service guard. On the other hand, if the platform supports time isolation, then the security kernel should provide a disclosure guard, and should implement the primitives (operations) Unseal, GenSeal, and PKUnseal (or Quote).
It should also be noted that Quote and PKUnseal functionality can be built on the Seal and Unseal or Unseal and GenSeal primitives. For example, manufacturers can build an l2 program(s) that implements Quote or PKUnseal and acts as a host for higher-level software (e.g., operating systems), based upon GenSeal and Unseal implemented in l1. The manufacturer can generate and Seal the keys needed by the service layer and ship them with the device or CPU (or make them available online).
An exemplary description of a family of hardware implementations that will enable platforms to support authenticated operation follows. As with higher layers in the system, the characteristics of the lowest layer (l1 of
Authenticated operation provides a strong binding between programs and secret keys. At higher layers, guards in lower layers guarantee this binding. At the lowest layer, there is no underlying software guard that can gate access to the platform secrets. Thus, another mechanism is used to support the association of the l1 keys to the l1 program. One way of accomplishing this binding is having l1 software be platform microcode or firmware that is not changeable following manufacture, and give the l1 software unrestricted access to the l1 keys. This platform microcode or firmware can then be referred to as the security kernel, and the l1 keys referred to as the platform keys. The platform is designed to only pass control to a predetermined security kernel. The hardware behavior can also be explained as a simple resource guard that discloses the platform keys to the predefined security kernel.
The platform keys and the security kernel firmware can be part of the processor or alternatively implemented in one or more other components in the computing device (e.g., a security processor or coprocessor, which may also perform cryptographic operations). The platform keys and the security kernel firmware can be implemented in a single component, or alternatively implemented in multiple components of the computing device.
With authenticated operation, programs are started in a controlled initial state. At higher levels, the software running at lower levels can be entrusted to start execution at the correct entry point. At l1, however, hardware performs this function. Typically, on power-up or following reset, current processors begin execution by following some deterministic sequence. For example, in the simplest case the processor starts fetching and executing code from an architecturally-defined memory location. For l1, programs can be started in a controlled initial state by the hardware ensuring that the security kernel is the code that executes on startup (as part of the deterministic sequence).
Additionally, no other platform state should be able to subvert execution of the security kernel. Reset and power-up provide a robust and a well-debugged state-clear for the processor. As used in this example, the platform state change that is used to start or invoke the security kernel is referred to as a security reset.
Furthermore, a device manufacturer should arrange for the generation or installation of the platform keys used by the l1 implementation of Seal and Unseal. If the device is to be recognized as part of a PKI (Public Key Infrastructure), the manufacturer should also certify a public key for the platform. This can be a platform key used directly by l1, or alternatively a key used by a higher layer.
Key generation and certification can be the responsibility of the CPU manufacturer or alternatively some other party, such as the OEM that assembles the CPU into a device. Alternatively, the responsibility can be shared by multiple such parties.
Once the security kernel is executing it can use the isolation mechanisms described above to protect itself from code executing at higher layers. Isolation in space will typically involve privilege mode support, and isolation in time will typically involve secrets being hidden from upper layers.
No additional platform support is needed to support space isolation on most current processors—an existing privilege mode or level will suffice (as long as the hardware resource that allows access to the platform key can be protected from higher layers).
To support time isolation, hardware assistance is used to allow the security kernel to conceal the platform key before passing control to higher layers. One way to provide platform key security in the time isolation model is to employ a stateful guard circuit that is referred to as a reset latch. A reset latch is a hardware circuit that has the property that it is open following reset or power-up, but any software at any time can programmatically close the latch. Once closed, the latch remains closed until the next reset or power-up. A platform that implements a time-isolated security kernel should gate platform key access on the state of a reset latch, and the security kernel should close the latch before passing control to higher layers. As mentioned above, the security kernel should also take additional actions such as clearing memory and registers before passing control, but these actions are the same as those used at higher levels.
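The latch behavior described above can be modeled (purely behaviorally, not as hardware) with a short Python sketch; the class and function names are illustrative.

    class ResetLatch:
        def __init__(self) -> None:
            self._open = True       # the latch is open following reset or power-up

        def close(self) -> None:
            self._open = False      # any software can close the latch at any time

        @property
        def is_open(self) -> bool:
            return self._open       # once closed, it stays closed until the next reset

    def read_platform_key(latch: ResetLatch, stored_key: bytes) -> bytes:
        # Platform key access is gated on the state of the latch.
        if not latch.is_open:
            raise PermissionError("platform key is inaccessible until the next reset")
        return stored_key

    # The security kernel reads the key, then closes the latch before
    # passing control to higher layers.
    latch = ResetLatch()
    platform_key = read_platform_key(latch, b"illustrative platform key bytes")
    latch.close()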
If the platform employs space isolation then the security kernel uses privilege modes to protect itself and its platform keys from programs (e.g., operating systems) that it hosts. Furthermore, the security kernel establishes a system call interface for invocation of the authentication operations.
If the platform employs time isolation, then the platform should also contain storage that survives a security reset to pass parameters to service routines. To invoke a service, an operating system prepares a command and parameter block in a memory location known to the security kernel and performs a security reset. If the OS wishes to continue execution following the service call (as opposed to a simple restart) then it and the security kernel should take extra measures to ensure that this can be done reliably and safely.
The authenticated operation discussed herein can be used for security in a variety of settings, such as protecting personal data from viruses, protecting confidential server data from network attacks, network administration, copy protection, trustworthy distributed computing, and so forth. The authenticated operation allows different programs, which can execute on the same computer without being in a particular trust relationship, to preserve their cryptographic resources irrespective of the actions of other software.
Some of the discussions below make reference to an SSP (Secure Service Processor). In one embodiment, an SSP is a processor (for use in a computing device) that provides basic cryptographic services to a computing device (e.g., the SSP supports the gating functions described herein (e.g., as layer l1 of
In a different embodiment, an SSP is an appropriately isolated software program that exposes the same functionality to its callers as the previous embodiment does. The SSP embodiment has access (directly or indirectly) to cryptographic keys. A number of implementation options exist for providing such access. For example, the SSP may call service or disclosure guards in lower layers. Or the SSP may have exclusive access to some part of persistent memory (e.g. hard disk, flash memory, ROM, etc.) that contains the required cryptographic key(s).
In summary, an SSP is defined by the functionality it exposes to principals in a higher layer. An SSP is a guard (as described above) with access (direct or indirect) to cryptographic keys. The SSP uses these keys to provide cryptographic services to its callers. The following sections will describe exemplary functionality an SSP exposes.
Example Operations
The following is a discussion of example implementations of sealed storage operations and of remote authentication operations. This section illustrates example implementations of the Seal, UnSeal, Quote, and PKUnseal operations discussed above.
The following definitions are used in this section:
Additionally, access policies are referred to in this section and the Bound Key Operations section below. The access policy describes when the particular operations are functional (that is, when they will work). The user of a computing device is able to selectively switch off certain functions. For example, the computing device (e.g., an SSP that implements the Seal operation) includes a register called FeatureEnable. One of the bits in the register is called MainEnable. If the user sets MainEnable to false, then none of the functions in these sections will work. The access policy description included with each function describes under which FeatureEnable settings the function will work.
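As a sketch of how such a policy check might be evaluated, consider the following; the register and bit names follow the text above, but the encoding of UseSymmKey/UsePrivKey and the AuthSL details are assumptions made for illustration.

    # Access-policy evaluation against a FeatureEnable register (sketch).
    from dataclasses import dataclass

    @dataclass
    class FeatureEnable:
        MainEnable: bool = True
        UseSymmKey: str = "All"   # e.g., "All", "AuthSL", "None"
        UsePrivKey: str = "All"

    def seal_allowed(fe: FeatureEnable) -> bool:
        # If the user clears MainEnable, none of the functions in these
        # sections work.
        if not fe.MainEnable:
            return False
        # Seal/UnSeal gate on the symmetric-key setting; Quote/PKUnseal
        # would gate on UsePrivKey in the same way.
        return fe.UseSymmKey in ("All", "AuthSL")

    assert seal_allowed(FeatureEnable(UseSymmKey="AuthSL"))
    assert not seal_allowed(FeatureEnable(MainEnable=False))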
Seal
Comments
Access Policy
(FeatureEnable.UseSymmKey==All | FeatureEnable.UseSymmKey==AuthSL)
Actions
UnSeal
Comments
Access Policy
(FeatureEnable.UseSymmKey==All | FeatureEnable.UseSymmKey==AuthSL)
Actions
Quote
Comments
Access Policy
(FeatureEnable.UsePrivKey==All | FeatureEnable.UsePrivKey==AuthSL)
Actions
PKUnseal
Comments
Access Policy
(FeatureEnable.UsePrivKey==All | FeatureEnable.UsePrivKey==AuthSL)
Actions
Bound Key Operations
Additionally, a set of bound key functions or operations allows cryptographic keys to be created and certified locally (e.g., by the SSP), and also allows cryptographic keys to be communicated from trustworthy remote parties (e.g., communicated to the SSP).
Bound key functionality and its benefits can be characterized as follows.
The bound key functions provide protection for cryptographic keys. Bound keys can be generated by remote parties or they can be created locally through the GenBoundKey command.
Bound keys that are generated locally may emit a “quote” certificate that can be used to provide remote parties with evidence of the public key, the type of key generated, the state of the machine during generation, and the (optional) condition(s) (e.g., digests) to which the key is bound.
Bound keys include one or more elements, examples of which are described below (e.g., in the discussion of the Bound-key-blob structure).
Bound keys have the same cryptographic requirements as the sealed storage and attestation functions described above (Seal, UnSeal, PKUnseal). In particular, locally generated bound keys could be protected by any of the cryptographic implementations of the Store and Retrieve functions described above. In each case, the confidentiality of the bound key itself is protected and the integrity of the overall data structure is protected in order to ensure that the different conditions that govern the usage of the bound key have not been corrupted. As described earlier, this can be achieved by various combinations of symmetric ciphers or public key encryption algorithms with MACs or digital signatures. In one embodiment, the bound key data structure is public key encrypted.
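The following toy sketch illustrates one of the protection options named above: a symmetric cipher combined with a MAC (encrypt-then-MAC), protecting both the confidentiality of the bound key and the integrity of the overall structure. The “cipher” here is a SHA-256 counter-mode keystream for illustration only; a real SSP would use a vetted cipher or public key encryption.

    # Encrypt-then-MAC protection of a bound key structure (toy sketch).
    import hashlib, hmac

    def _keystream(key: bytes, n: int) -> bytes:
        # Toy stream cipher: SHA-256 in counter mode.
        out = b""
        ctr = 0
        while len(out) < n:
            out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:n]

    def protect_blob(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
        ct = bytes(a ^ b for a, b in zip(blob, _keystream(enc_key, len(blob))))
        tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
        return ct + tag  # ciphertext followed by integrity tag

    def recover_blob(enc_key: bytes, mac_key: bytes, protected: bytes) -> bytes:
        ct, tag = protected[:-32], protected[-32:]
        expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("bound key structure corrupted")
        return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, len(ct))))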
Functions
In certain embodiments, bound keys can be used in one or more of the following functions:
In some implementations, the bound key is a private key of a public/private key pair. In such implementations, the bound key blob can contain the private key, or alternatively some data that allows the key to be computed. For example, a private key fragment may be contained in the bound key blob, and this fragment, in conjunction with the corresponding public key, can be used to reconstruct the private key of the public/private key pair.
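By way of illustration, the following sketch shows one concrete way such a reconstruction can work for RSA (the choice of RSA is an assumption here; the text does not mandate a particular algorithm): the blob stores one prime factor of the modulus, and the private exponent is recomputed from the public key.

    # Reconstructing an RSA private exponent from a key fragment (sketch).
    def rsa_private_from_fragment(n: int, e: int, p: int) -> int:
        q = n // p
        assert p * q == n, "p is not a factor of n"
        phi = (p - 1) * (q - 1)
        return pow(e, -1, phi)  # modular inverse (Python 3.8+)

    # Tiny toy parameters (far too small for real use):
    p, q, e = 61, 53, 17
    n = p * q
    d = rsa_private_from_fragment(n, e, p)
    assert pow(pow(42, e, n), d, n) == 42  # round-trip check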
The BoundSign operation receives a data input that is to be signed using the bound key, and also receives a bound key blob. The SSP recovers the private signing key from the bound key blob and then generates a digitally signed message over the data input using the recovered signing key. The SSP then outputs the digitally signed message. If the bound key blob is corrupted or the bound key usage condition(s), if any, are not satisfied, then the SSP does not perform the operation. The data input can thus be digitally signed using the recovered private key without the private key being revealed by the SSP.
The BoundQuote operation receives as inputs data to be signed and a bound key blob. The SSP recovers the private key from the bound key blob and then uses the recovered signing key to generate a signature over the data input and the current PCR value (e.g., an identifier, such as a digest, of the program invoking the BoundQuote operation), as in the Quote operation described above. The SSP then outputs the digitally signed message. If the bound key blob is corrupted or the bound key usage condition(s), if any, are not satisfied, then the SSP does not perform the operation. In one implementation, the BoundQuote operation is similar to the BoundSign operation, differing in that the current PCR value is included in the signed message.
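The two operations share a common guard pattern, sketched below in Python; unwrap_and_check() and _sign() are hypothetical placeholders for the SSP's internal key recovery and signature primitives, and only the control flow and the PCR concatenation are the point here.

    # Common guard pattern for BoundSign and BoundQuote (sketch).
    import hashlib

    class SSPError(Exception):
        pass

    def unwrap_and_check(blob: dict, current_pcr: bytes):
        # Stand-in for decrypting the blob with the platform key and
        # evaluating the bound key usage condition(s).
        return blob["key"], blob.get("bound_to", current_pcr) == current_pcr

    def _recover_key(blob: dict, current_pcr: bytes) -> bytes:
        key, usage_ok = unwrap_and_check(blob, current_pcr)
        if not usage_ok:
            # Corrupted blob or unmet usage condition: refuse the operation.
            raise SSPError("SSP_CRYPTO_ERROR")
        return key

    def _sign(key: bytes, message: bytes) -> bytes:
        # Placeholder for a real digital signature computed inside the SSP.
        return hashlib.sha256(key + message).digest()

    def bound_sign(data: bytes, blob: dict, current_pcr: bytes) -> bytes:
        # The private key never leaves the SSP; only the signature does.
        return _sign(_recover_key(blob, current_pcr), data)

    def bound_quote(data: bytes, blob: dict, current_pcr: bytes) -> bytes:
        # BoundQuote differs only in also covering the current PCR value.
        return _sign(_recover_key(blob, current_pcr), current_pcr + data)

    blob = {"key": b"k" * 32, "bound_to": b"pcr-digest"}
    sig = bound_quote(b"challenge", blob, current_pcr=b"pcr-digest")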
The BoundPkDecrypt operation receives as inputs ciphertext and a bound key blob. The SSP recovers the private key from the bound key blob and then uses the recovered private bound key to decrypt the input ciphertext. The decrypted data is then output by the BoundPkDecrypt operation. If the bound key blob is corrupted or the bound key usage condition(s), if any, are not satisfied, then the SSP does not perform the operation.
The BoundPkUnseal operation receives as inputs ciphertext and a bound key blob. The SSP recovers the private key from the bound key blob and then uses the private key to decrypt the input ciphertext as in the PKUnseal operation described above. The decrypted data is then output by the BoundPkUnseal operation. If the bound key blob is corrupted or the bound key usage condition(s), if any, are not satisfied, then the SSP does not perform the operation.
The BoundMAC operation receives a data input, over which the MAC is to be computed using the bound key, and also receives a bound key blob. If the bound key blob is corrupted or the bound key usage condition(s), if any, are not satisfied, then the SSP does not perform the operation. Otherwise, the SSP recovers the bound key from the bound key blob and then generates a message authentication code (MAC) over the data input using the recovered bound key. The SSP then outputs the computed MAC. Thus, a MAC for the data input can be computed using the recovered bound key without the bound key being revealed by the SSP.
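Of the bound key operations, BoundMAC maps most directly onto a standard primitive (HMAC-SHA256 here, as one possible MAC); the sketch below reuses the hypothetical SSPError and unwrap_and_check() helpers from the BoundSign/BoundQuote sketch above.

    # BoundMAC sketch using HMAC-SHA256 from the Python standard library.
    import hashlib, hmac

    def bound_mac(data: bytes, blob: dict, current_pcr: bytes) -> bytes:
        key, usage_ok = unwrap_and_check(blob, current_pcr)
        if not usage_ok:
            raise SSPError("SSP_CRYPTO_ERROR")
        # The MAC is computed inside the SSP; the bound key is not revealed.
        return hmac.new(key, data, hashlib.sha256).digest()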
The BoundEncrypt operation receives a data input, which is to be encrypted using the bound key, and also receives a bound key blob. If the bound key blob is corrupted or the bound key usage condition(s), if any, are not satisfied, then the SSP does not perform the operation. Otherwise, the SSP recovers the bound key from the bound key blob and then encrypts the data input using the recovered bound key. The SSP then outputs the computed ciphertext. Thus, the data input can be encrypted using the recovered bound key without the bound key being revealed by the SSP.
The BoundDecrypt operation receives a data input, which is to be decrypted using the bound key, and also receives a bound key blob. If the bound key blob is corrupted or the bound key usage condition(s), if any, are not satisfied, then the SSP does not perform the operation. Otherwise, the SSP recovers the bound key from the bound key blob and then decrypts the data input using the recovered bound key. The SSP then outputs the computed plaintext. Thus, the data input can be decrypted using the recovered bound key without the bound key being revealed by the SSP.
The GenBoundKey operation causes the SSP to create a new bound key. The new bound key is a cryptographic key, and a new bound key blob is generated that includes the newly generated key. It is to be appreciated that the bound key blob does not always have to include the entire key. For example, if the newly generated key is a public/private key pair, it may be sufficient to include the private key in the bound key blob.
The new bound key blob is bound to one or more guards, typically the SSP that is executing the operation (e.g., by cryptographically protecting the new bound key blob analogous to the Store function described above, or otherwise securing the new bound key blob so that it can be retrieved only by the SSP). The GenBoundKey operation may also have parameters that determine various aspects of the new bound key, and data describing these parameters is attached to the newly generated private key in some integrity-protected way (e.g., the data is made part of the new bound key blob). Examples of this data, as discussed above, include the migration condition, the bound key usage condition, and so forth. The new bound key blob is then output by the GenBoundKey operation.
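A minimal sketch of this flow follows: generate a key, attach the requested parameters, and protect the whole structure so that (in this model) only a holder of the SSP's keys can recover it. protect_blob() is the encrypt-then-MAC sketch shown earlier, and all names are illustrative.

    # GenBoundKey sketch.
    import json, os

    def gen_bound_key(ssp_enc_key: bytes, ssp_mac_key: bytes,
                      bound_to_pcr: bool, bound_to: str,
                      migrateable: bool = False,
                      exportable: bool = False) -> bytes:
        new_key = os.urandom(32)  # e.g., a newly generated symmetric key
        structure = {
            "key": new_key.hex(),
            "bound-to-PCR": bound_to_pcr,
            "bound-to": bound_to,
            "migrateable": migrateable,
            "exportable": exportable,
        }
        # Integrity protection covers the parameters together with the key,
        # so the usage conditions cannot be altered without detection.
        return protect_blob(ssp_enc_key, ssp_mac_key,
                            json.dumps(structure).encode())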
In general, a bound key may be any kind of cryptographic key, including a symmetric key or a public-private key pair. The exact key type depends on the bound key operation(s) in which it is to be used. For example, a bound key to be used in BoundMAC would typically be a symmetric key, whereas a bound key to be used in BoundSign would typically be a public/private signature key pair. The key type may be specified as a parameter to GenBoundKey.
The BoundKeyMigrate operation allows the usage condition of a bound key to be changed. The SSP verifies that one or more migration conditions are satisfied. Any of a variety of conditions may be used with the BoundKeyMigrate operation (e.g., any condition, analogous to those discussed above with reference to the Seal and UnSeal operations, that identifies when and/or to whom the data can be migrated). If the verification is not successfully made, then the operation fails. If the verification is successfully made, then the guard produces a new bound key blob, in which the bound key usage condition has been changed as requested.
The BoundKeyExport operation instructs the SSP to change the set of guards (SSPs) that can directly access the bound key. The SSP verifies that one or more conditions are satisfied. Any of a variety of conditions may be used with the BoundKeyExport operation (e.g., any condition, analogous to those discussed above with reference to the Seal and UnSeal operations, that identifies when and/or to whom the data can be exported). If the verification is not successfully made, then the operation fails. If the verification is successfully made, then the SSP changes the cryptographic protection on the bound key blob as requested. In one embodiment, the SSP encrypts the bound key data structure with one or more new keys.
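The re-protection step of BoundKeyExport can be sketched as follows: after the export authorization verifies, the structure is recovered under the current SSP's keys and re-protected under the new guard's keys. recover_blob()/protect_blob() are the earlier sketches, and the authorized flag stands in for verifying a signed Bound-export-info structure.

    # BoundKeyExport re-protection step (sketch).
    def bound_key_export(old_enc, old_mac, new_enc, new_mac,
                         protected: bytes, authorized: bool) -> bytes:
        if not authorized:
            raise SSPError("SSP_CRYPTO_ERROR")  # authorization did not verify
        structure = recover_blob(old_enc, old_mac, protected)
        # Re-encrypt the bound key data structure with the new guard's keys.
        return protect_blob(new_enc, new_mac, structure)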
An example of a class of conditions that the creator (whether local or remote) of a bound key can specify is that the bound key may only be used on behalf of principals whose program digests have a particular value(s). In this case, the bound key operations check the requesting principal's digest after internal retrieval of the bound key blob, and fail without performing additional computation if the digest is not as specified in the bound key blob.
A bound key blob is typically tied or bound to a particular SSP by means of a cryptographic operation that requires a unique key of the particular SSP to succeed. Examples of such operations are MAC, digital signatures, encryption, and combined encryption and integrity verification functions.
Example Bound Key Operations
In one implementation, migration is authorized by way of a local-migration certificate or an export certificate issued by the authorizing entity. The local-migration certificate is the result of an RSASSA-PSS-SIGN operation over a Bound-migration-info data structure.
Local SSP-migration is requested using the BoundKeyMigrate operation. To authorize local-migration, the SSP is provided with a Bound-migration-info structure referring to this bound key, and a properly formed certificate over this structure provided by the authorized entity. If the migration authorization is acceptable, the SSP rebinds the key for the new PCR, with all other key attributes remaining unchanged (e.g., if the key was not originally bound to a PCR value, it will not be when rebound). The source-bound-blob-digest is the digest of the encrypted external form of the bound key.
Remote-migration is achieved through the BoundKeyExport function with, for example, a Bound-export-info structure signed by the authorizing entity.
The authorizing entity is in complete control of the device or software module to which the key is re-bound when a key is marked exportable.
The bound key operations use a PKCiphertext, which is a sequence of type Bound-key-blob encrypted with the platform public encryption key.
The bound-to-PCR member is a flag that indicates whether the bound-to Digest field must match the current PCR value in order for the bound key to be used. {migrateable, migrate-auth} indicates whether the key is migrateable, and if so under the control of what authority (if migrateable is false, then the migrate-auth value is unimportant). {exportable, export-auth} indicates whether the key is exportable, and if so under the control of what authority (if exportable is false, then the export-auth value is unimportant). Pub-key-digest is the digest of the corresponding public key to provide a strong binding between the PKCompressedPrivateKey and the public key that is needed to recover the private key.
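For illustration only, the fields just described can be rendered as the following data structure; the actual Bound-key-blob is an encrypted sequence, and the field types shown here are assumptions.

    # Bound-key-blob fields, rendered as a Python dataclass (sketch).
    from dataclasses import dataclass

    @dataclass
    class BoundKeyBlob:
        bound_to_pcr: bool     # must bound_to match the current PCR value?
        bound_to: bytes        # digest the PCR must match when the flag is set
        migrateable: bool      # may the key be migrated ...
        migrate_auth: bytes    # ... and under which authority's control
        exportable: bool       # may the key be exported ...
        export_auth: bytes     # ... and under which authority's control
        pub_key_digest: bytes  # binds the blob to the corresponding public key
        private_key: bytes     # the PKCompressedPrivateKey material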
In one implementation, if a bound key is created locally with the GenBoundKey function, the SSP creates a signature over a data structure detailing the public properties of the key that was just generated and the configuration of the system during bound key generation.
In this data structure, key-type, bound-to-PCR, bound-to, migrateable, migrate-auth, exportable, and export-auth are the bound key characteristics of the newly generated key. Creator-PCR is the PCR value that was active when the key was generated, and bound-pub-key is the digest of the newly created public key. Sig-nonce is the digest-sized value passed in when bound key generation was requested.
Exemplary definitions of the BoundSign, BoundQuote, BoundPkDecrypt, BoundPkUnseal, GenBoundKey, BoundKeyMigrate, and BoundKeyExport operations are as follows.
BoundSign
Comments
The BoundSign operation takes a PKCiphertext of type sspV1BoundKey containing a BoundKeyBlob of type BoundSignKey, together with the corresponding public key. If either of these conditions is not met, or if the sequence fails to decode, then the operation fails with SSP_CRYPTO_ERROR.
If Bound-to-PCR is set, the SSP checks that the current PCR value is as specified in the Bound-key-blob sequence. If it is not, the SSP returns SSP_CRYPTO_ERROR.
Finally, the SSP signs the input message with the decrypted private key.
Access Policy
(FeatureEnable.UsePrivKey==All|
FeatureEnable.UsePrivKey==AuthSL
Actions
5. If bound-to-PCR is TRUE, then the bound-to should be compared to the current PCR value. If the value is not the same, the SSP should output SSP_CRYPTO_ERROR.
BoundQuote
Comments
Access Policy
(FeatureEnable.UsePrivKey==All|
FeatureEnable.UsePrivKey==AuthSL
Actions
BoundPkDecrypt
Comments
The BoundPkDecrypt operation takes a PKCiphertext of type sspV1BoundKey containing a BoundKeyBlob of the corresponding decryption-key type. If either of these conditions is not met, or if the sequence fails to decode, then the operation fails with SSP_CRYPTO_ERROR.
If Bound-to-PCR is set, the SSP checks that the current PCR value is as specified in the Bound-key-blob sequence. If it is not, the SSP returns SSP_CRYPTO_ERROR.
Finally, the SSP decrypts the input message with the decrypted private key from the bound key blob.
Access Policy
(FeatureEnable.UsePrivKey==All|
FeatureEnable.UsePrivKey==AuthSL
Actions
BoundPkUnseal
Comments
The BoundPkUnseal operation takes a PKCiphertext of type sspV1BoundKey containing a BoundKeyBlob of type BoundPKUnsealKey. If either of these conditions is not met, or if the sequence fails to decode, then the operation fails with SSP_CRYPTO_ERROR.
Access Policy
(FeatureEnable.UsePrivKey==All|
FeatureEnable.UsePrivKey==AuthSL
The BoundPkUnseal operation must implement the following steps:
GenBoundKey
Comments
Access Policy
(FeatureEnable.UsePrivKey==All|
FeatureEnable.UsePrivKey==AuthSL
Actions
BoundKeyMigrate
Comments
Access Policy
(FeatureEnable.UsePrivKey==All|
FeatureEnable.UsePrivKey==AuthSL
Actions
BoundKeyExport
Comments
Access Policy
(FeatureEnable.UsePrivKey==All|
FeatureEnable.UsePrivKey==AuthSL
Actions
Computer environment 400 includes a general-purpose computing device in the form of a computer 402. Computer 402 can be used, for example, to implement principal 102 and guard 104 described above.
The system bus 408 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus, also known as a Mezzanine bus.
Computer 402 typically includes a variety of computer readable media. Such media can be any available media that is accessible by computer 402 and includes both volatile and non-volatile media, removable and non-removable media.
The system memory 406 includes computer readable media in the form of volatile memory, such as random access memory (RAM) 410, and/or non-volatile memory, such as read only memory (ROM) 412. A basic input/output system (BIOS) 414, containing the basic routines that help to transfer information between elements within computer 402, such as during start-up, is stored in ROM 412. RAM 410 typically contains data and/or program modules that are immediately accessible to and/or presently operated on by the processing unit 404.
Computer 402 may also include other removable/non-removable, volatile/non-volatile computer storage media. By way of example, such media can include a hard disk 416, a removable magnetic disk 420, and a removable optical disk 424 (e.g., a CD-ROM or DVD).
The disk drives and their associated computer-readable media provide non-volatile storage of computer readable instructions, data structures, program modules, and other data for computer 402. Although the example illustrates a hard disk 416, a removable magnetic disk 420, and a removable optical disk 424, it is to be appreciated that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like, can also be utilized to implement the exemplary computing system and environment.
Any number of program modules can be stored on the hard disk 416, magnetic disk 420, optical disk 424, ROM 412, and/or RAM 410, including by way of example, an operating system 426, one or more application programs 428, other program modules 430, and program data 432. Each of such operating system 426, one or more application programs 428, other program modules 430, and program data 432 (or some combination thereof) may implement all or part of the resident components that support the distributed file system.
A user can enter commands and information into computer 402 via input devices such as a keyboard 434 and a pointing device 436 (e.g., a “mouse”). Other input devices 438 (not shown specifically) may include a microphone, joystick, game pad, satellite dish, serial port, scanner, and/or the like. These and other input devices are connected to the processing unit 404 via input/output interfaces 440 that are coupled to the system bus 408, but may be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB).
A monitor 442 or other type of display device can also be connected to the system bus 408 via an interface, such as a video adapter 444. In addition to the monitor 442, other output peripheral devices can include components such as speakers (not shown) and a printer 446 which can be connected to computer 402 via the input/output interfaces 440.
Computer 402 can operate in a networked environment using logical connections to one or more remote computers, such as a remote computing device 448. By way of example, the remote computing device 448 can be a personal computer, portable computer, a server, a router, a network computer, a peer device or other common network node, and the like. The remote computing device 448 is illustrated as a portable computer that can include many or all of the elements and features described herein relative to computer 402.
Logical connections between computer 402 and the remote computer 448 are depicted as a local area network (LAN) 450 and a general wide area network (WAN) 452. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.
When implemented in a LAN networking environment, the computer 402 is connected to a local network 450 via a network interface or adapter 454. When implemented in a WAN networking environment, the computer 402 typically includes a modem 456 or other means for establishing communications over the wide area network 452. The modem 456, which can be internal or external to computer 402, can be connected to the system bus 408 via the input/output interfaces 440 or other appropriate mechanisms. It is to be appreciated that the illustrated network connections are exemplary and that other means of establishing communication link(s) between the computers 402 and 448 can be employed.
In a networked environment, such as that illustrated with computing environment 400, program modules depicted relative to the computer 402, or portions thereof, may be stored in a remote memory storage device. By way of example, remote application programs 458 reside on a memory device of remote computer 448. For purposes of illustration, application programs and other executable program components such as the operating system are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device 402, and are executed by the data processor(s) of the computer.
Various modules and techniques may be described herein in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
An implementation of these modules and techniques may be stored on or transmitted across some form of computer readable media. Computer readable media can be any available media that can be accessed by a computer. By way of example, and not limitation, computer readable media may comprise “computer storage media” and “communications media.”
“Computer storage media” includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
“Communication media” typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism. Communication media also includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
Although the description above uses language that is specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the invention.
This application is a continuation of U.S. patent application Ser. No. 10/407,117, filed Apr. 4, 2003, entitled “Saving and Retrieving Data Based on Public Key Encryption”, now U.S. Pat. No. 7,890,771, which is hereby incorporated by reference herein. U.S. Pat. No. 7,890,771 claims priority to U.S. Provisional Application No. 60/373,505, filed Apr. 17, 2002, entitled “Secure Store Processor”, which is hereby incorporated by reference. A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Publication: US 2011/0154057 A1, Jun. 2011 (US).
Related U.S. Application Data: Provisional Application No. 60/373,505, Apr. 2002 (US); parent application Ser. No. 10/407,117, Apr. 2003 (US); child application Ser. No. 13/012,573 (US).