Certificate verification

Information

  • Patent Grant
  • Patent Number
    8,621,188
  • Date Filed
    Friday, October 9, 2009
  • Date Issued
    Tuesday, December 31, 2013
Abstract
An improved secure programming technique involves reducing the number of bits programmed into on-chip secret non-volatile memory while still enabling the typical secure applications supported by secure devices. A technique for secure programming involves de-coupling chip manufacture from the later process of connecting to ticket servers to obtain tickets. A method according to the technique may involve sending a (manufacturing) server-signed certificate from the device prior to any communication to receive tickets. A device according to the technique may include chip-internal non-volatile memory that stores the certificate along with the private key, programmed during the manufacturing process.
Description
BACKGROUND

A secure processor typically includes an ID and/or a stored secret key. To enhance the level of security, these quantities can be programmed into chip-internal non-volatile memory to build a secure processor. The programming of the ID and secret key happens during the secure manufacturing process of the chip. Each ID is unique, and so is the private key. These quantities are used in applications on the device to implement digital rights management and other security-related applications. Typically, the chip includes mechanisms to generate cryptographically strong random numbers for use as nonces in network protocols, secret keys, etc.


In a typical infrastructure used for implementing digital rights management, a server supplies digitally signed tickets to enable rights for the device. Such tickets use the device identities and/or secret key mechanisms to bind the tickets to the devices. In order to ensure the uniqueness of each device ID/key, the server typically uses a secure database to store the IDs (and/or signed certificates) corresponding to each chip that is manufactured. These certificates contain the public key corresponding to each secret key (the private key of a (private, public) key pair) programmed in the chip. In order to populate the database with certificates, the infrastructure associated with the database should be securely coupled with the manufacturing process to maintain a one-to-one correspondence between manufactured chips and certificates in the database.


The foregoing examples of the related art and limitations related therewith are intended to be illustrative and not exclusive. Other limitations of the related art will become apparent to those of skill in the art upon a reading of the specification and a study of the drawings.


SUMMARY

The following embodiments and aspects thereof are described and illustrated in conjunction with systems, tools, and methods that are meant to be exemplary and illustrative, not limiting in scope. In various embodiments, one or more of the above-described problems have been reduced or eliminated, while other embodiments are directed to other improvements.


An improved secure programming technique involves reducing the number of bits programmed into on-chip secret non-volatile memory while still enabling the typical secure applications supported by secure devices. Another improved secure programming technique involves simplifying the process of manufacturing the system. In an embodiment, programming the secrets is isolated to on-chip programming and, specifically, is isolated from the processes of system integration and infrastructure setup.


A technique for secure programming involves de-coupling chip manufacture from the later process of connecting to ticket servers to obtain tickets. A method according to the technique may involve sending a (manufacturing) server-signed certificate from the device prior to any communication to receive tickets. The method may further include populating a database to facilitate performing ticket services later, for example, just when the ticket services are needed.


A device according to the technique may include chip-internal non-volatile memory that stores the certificate along with the private key, programmed during the manufacturing process. The private key may or may not be an elliptic curve based private key. An advantage of an elliptic curve cryptography based key is that it is smaller than many other types of keys of comparable cryptographic strength. Further, it is possible, using elliptic curve algorithms, to store a random private key and compute the public key by a run-time computation.
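
As an illustration of the run-time computation mentioned above, the following sketch derives the public key on demand from a stored private scalar instead of storing the public key itself. It is a hedged example, not the patented implementation: it assumes the pyca/cryptography library and the NIST P-256 curve, and the scalar value shown is a placeholder.

```python
# Hypothetical sketch: derive the elliptic curve public key at run time
# from a stored private scalar, so only the scalar occupies NV memory.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Placeholder for the 256-bit value programmed into on-chip NV memory.
stored_private_scalar = 0x1F2E3D4C5B6A79880102030405060708090A0B0C0D0E0F101112131415161718

private_key = ec.derive_private_key(stored_private_scalar, ec.SECP256R1())
public_key = private_key.public_key()  # run-time computation of the public key

# The compressed point is 33 bytes for P-256 and never needs to be stored.
compressed = public_key.public_bytes(
    serialization.Encoding.X962,
    serialization.PublicFormat.CompressedPoint,
)
print(len(compressed))  # 33
```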


Advantageously, especially considering the value of on-chip real estate, a compressed certificate can be provided in the non-volatile memory. Using a smaller data set than would be required to store a full device certificate, the device dynamically generates a certificate to provide to a requesting application. The device certificate may or may not be generated multiple times. For example, the device certificate could be generated once and stored in system-external storage for further use. This is not particularly insecure because the certificate is public data.


A device constructed according to the technique may have applicability in other areas. For example, the device could be authenticated to a peer, or to any application that requires a first device certificate. In another alternative, the non-volatile memory may include a secure random number generator for the device, using the secure manufacturing process to program the non-volatile memory.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are illustrated in the figures. However, the embodiments and figures are illustrative rather than limiting; they provide examples of the invention.



FIG. 1 depicts an example of a system for validating a client at a server.



FIG. 2 depicts a flowchart of an example of a method for power up and power down of a device appropriate for use in the system.



FIG. 3 depicts a flowchart of an example of a method for generating a device certificate only once.



FIG. 4 depicts a computer system suitable for implementation of the techniques described above with reference to FIGS. 1-3.



FIG. 5 depicts an example of a secure system suitable for implementation of the techniques described above with reference to FIGS. 1-3.



FIG. 6 depicts a flowchart of an example of a method for manufacturing a secure device.



FIG. 7 depicts a flowchart of an example of a method for construction of a secure certificate.





DETAILED DESCRIPTION

In the following description, several specific details are presented to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or in combination with other components, etc. In other instances, well-known implementations or operations are not shown or described in detail to avoid obscuring aspects of various embodiments of the invention.



FIG. 1 depicts an example of a system 100 for validating a client at a server. In the example of FIG. 1, the system 100 includes a server 102, a network 104, and a client 106. The server 102 includes a certificate request module 110, a certificate verification module 112, a Cert database 114, a pseudo-random number (PRN) generator 116, and an interface 118. The client 106 includes a certificate generation module 120, non-volatile (NV) memory 122, and an interface 124.


The server 102 may be any applicable known or convenient computer. The network 104 may be any communications network including, by way of example but not limitation, the Internet. The client 106 may be any applicable known or convenient computer that has secure storage. The NV memory 122 may include a secure key store and, in an embodiment, the NV memory 122 is on-chip memory.


In the example of FIG. 1, in operation, a protocol for registration or activation is initiated by the server 102. (The client 106 may, in an alternative, initiate the registration or activation.) In an embodiment, the protocol serves to register a device identity and certificate into the cert database 114. To do so, the PRN generator 116 generates a PRN, R, and the certificate request module 110 of the server 102 generates a request for a device certificate. R and the request for a device certificate are sent via the interface 118 to the network 104.


R and the request for a device certificate are received at the interface 124 of the client 106. The certificate generation module 120 of the client 106 generates a certificate Cert. An example of the algorithm used to generate Cert is described with reference to FIG. 7, below. The certificate generation module 120 computes a signature Sig, over random number R, using a device private key. Operands are stored in the NV memory 122, which may reside in, for example, a secure kernel (see e.g., FIG. 5). In an alternative, the computation could include a device ID, serial number, region code, or some other value. The interface 124 of the client 106 returns R, any optional data, Cert, and Sig to the network 104.


The interface 118 of the server 102 receives R, any optional data, Cert, and Sig. The certificate verification module 112 at the server 102 validates Cert using a trusted certificate chain, validates Sig using Cert, and verifies that R is the same as the value, R, that was originally sent by the server 102 to the client 106. If successfully validated and verified, the server 102 imports Cert into the Cert database 114. At this point, the client 106 is presumably authorized to obtain digital licenses for rights-managed content, and to perform other operations, from the server 102 or from some other location that can use the certificate to authorize the client 106.
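
A minimal sketch of this registration exchange follows. It is an illustration under stated assumptions rather than the patented protocol: it uses the pyca/cryptography library, models Cert simply as the device public key signed by a manufacturing issuer key, and omits the optional data and the full certificate-chain check.

```python
# Hypothetical sketch of the FIG. 1 flow: the server sends a random R,
# the client returns R, Cert, and Sig, and the server validates Cert,
# validates Sig using Cert, and verifies that R matches what it sent.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Manufacturing-time material (placeholders).
issuer_key = ec.generate_private_key(ec.SECP256R1())   # manufacturing server key
device_key = ec.generate_private_key(ec.SECP256R1())   # programmed device private key
device_pub = device_key.public_key().public_bytes(
    serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint)
cert_body = b"device-id=0001|issuer=mfg|" + device_pub
cert = (cert_body, issuer_key.sign(cert_body, ec.ECDSA(hashes.SHA256())))

# Server: generate R and send it with the certificate request.
R = os.urandom(16)

# Client: sign R with the device private key and return R, Cert, Sig.
sig = device_key.sign(R, ec.ECDSA(hashes.SHA256()))
response = (R, cert, sig)

# Server: validate Cert, validate Sig using Cert, and verify R.
r2, (body, body_sig), sig2 = response
issuer_key.public_key().verify(body_sig, body, ec.ECDSA(hashes.SHA256()))
client_pub = ec.EllipticCurvePublicKey.from_encoded_point(
    ec.SECP256R1(), body.split(b"|", 2)[2])
client_pub.verify(sig2, r2, ec.ECDSA(hashes.SHA256()))
assert r2 == R
print("Cert may now be imported into the cert database")
```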


In another embodiment, the device could generate a new key pair {pvt1, pub1} using an RNG, and a certificate could be created for the new public key pub1, using the device's programmed private key as the signer. This new key pvt1 could then be used to sign the message containing the random number R.


It should be noted that secure networking protocols such as SSL, and other services that require ephemeral secret keys, typically make use of a source of random numbers. A secure manufacturing process, such as is described by way of example but not limitation with reference to FIG. 6, below, can be used to seed a secret random number S in a device. A PRN-generating algorithm using cryptographic primitives such as the functions in AES or SHA can be used to generate PRNs. The sequence should not repeat after a power cycle of the device. Using a state-saving mechanism involving the chip's non-volatile memory ensures a high level of security. The device uses a part of re-writeable non-volatile memory to store a sequence number.



FIG. 2 depicts a flowchart 200 of an example of a method for power up and power down of a device appropriate for use in the system 100. In the example of FIG. 2, the flowchart 200 starts at module 202 where a device is powered on. In the example of FIG. 2, the flowchart 200 continues to module 204 where runtime state is initialized to 1. Since the runtime state is incremented over time, the runtime state should be stored in writable memory, such as on-chip writable memory.


In the example of FIG. 2, the flowchart 200 continues to module 206 where the device increments the sequence number and computes key=fn(S, sequence number), where S is a programmed secret seed random number. Since S is programmed, it can be stored in on-chip NV read-only memory (ROM). At this point, the device is presumed to be “up and running.”


In the example of FIG. 2, the flowchart 200 continues to module 208 where, in response to a request for a random number, the device generates random=fn(key, state) and increments state: state++. In the example of FIG. 2, the flowchart 200 continues to decision point 210 where it is determined whether another random number request is received. If it is determined that another random number request has been received (210-Y), then the flowchart 200 returns to module 208. In this way, module 208 may be repeated multiple times for multiple random number requests.


When it is determined there are no other random number requests (210-N), the flowchart 200 continues to module 212 where the device is powered off, and the state is lost. Thus, the flowchart 200 illustrates the state of the device from power on to power off. If the device is powered on again, a new key must be computed, and state initialized again.
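
The following sketch models the FIG. 2 flow in software. It is a hedged illustration rather than device firmware: HMAC-SHA-256 stands in for the unspecified function fn, and the on-chip non-volatile sequence number is represented as an ordinary object attribute.

```python
# Hypothetical model of FIG. 2: a programmed secret seed S plus a
# monotonically increasing NV sequence number yield a fresh per-boot key,
# and a volatile state counter yields a random value per request.
import hmac
import hashlib

class SecurePRNG:
    def __init__(self, seed_S: bytes, nv_sequence: int = 0):
        self.seed_S = seed_S            # programmed secret seed (on-chip ROM)
        self.nv_sequence = nv_sequence  # stored in re-writeable NV memory

    def power_on(self):
        self.state = 1                                    # runtime state := 1
        self.nv_sequence += 1                             # increment sequence number
        self.key = hmac.new(self.seed_S,                  # key = fn(S, sequence number)
                            self.nv_sequence.to_bytes(8, "big"),
                            hashlib.sha256).digest()

    def random(self) -> bytes:
        out = hmac.new(self.key,                          # random = fn(key, state)
                       self.state.to_bytes(8, "big"),
                       hashlib.sha256).digest()
        self.state += 1                                   # state++
        return out

prng = SecurePRNG(seed_S=b"\x00" * 32)  # placeholder seed
prng.power_on()
first = prng.random()
prng.power_on()                         # simulated power cycle: sequence advances,
assert prng.random() != first           # so the output stream does not repeat
```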



FIG. 3 depicts a flowchart 300 of an example of a method for generating a device certificate only once. In the example of FIG. 3, the flowchart 300 starts at module 302 where a device certificate is generated at a secure device. The flowchart 300 continues to module 304 where the device certificate is stored in system-external storage. This variation is notable because, although the device is secure, the device certificate is public data. Accordingly, security is preserved even though the certificate is stored externally rather than regenerated each time.



FIG. 4 depicts a computer system 400 suitable for implementation of the techniques described above with reference to FIGS. 1-3. The computer system 400 includes a computer 402, I/O devices 404, and a display device 406. The computer 402 includes a processor 408, a communications interface 410, memory 412, display controller 414, non-volatile storage 416, and I/O controller 418. The computer 402 may be coupled to or include the I/O devices 404 and display device 406.


The computer 402 interfaces to external systems through the communications interface 410, which may include a modem or network interface. The communications interface 410 can be considered to be part of the computer system 400 or a part of the computer 402. The communications interface 410 can be an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g. “direct PC”), or other interfaces for coupling a computer system to other computer systems. Although conventional computers typically include a communications interface of some type, it is possible to create a computer that does not include one, thereby making the communications interface 410 optional in the strictest sense of the word.


The processor 408 may include, by way of example but not limitation, a conventional microprocessor such as an Intel Pentium microprocessor or a Motorola PowerPC microprocessor. While the processor 408 is a critical component of all conventional computers, any applicable known or convenient processor could be used for the purposes of implementing the techniques described herein. The memory 412 is coupled to the processor 408 by a bus 420. The memory 412, which may be referred to as “primary memory,” can include Dynamic Random Access Memory (DRAM) and can also include Static RAM (SRAM). The bus 420 couples the processor 408 to the memory 412, and also to the non-volatile storage 416, to the display controller 414, and to the I/O controller 418.


The I/O devices 404 can include a keyboard, disk drives, printers, a scanner, and other input and output devices, including a mouse or other pointing device. For illustrative purposes, at least one of the I/O devices is assumed to be a block-based media device, such as a DVD player. The display controller 414 may control, in a known or convenient manner, a display on the display device 406, which can be, for example, a cathode ray tube (CRT) or liquid crystal display (LCD).


The display controller 414 and I/O controller 418 may include device drivers. A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically, a device driver constitutes an interface for communicating with the device through the bus or communications subsystem to which the hardware is connected, issuing commands to and/or receiving data from the device and, at the other end, presenting the requisite interfaces to the OS and software applications.


The device driver may include a hardware-dependent computer program that is also OS-specific. The computer program enables another program, typically an OS or applications software package or computer program running under the OS kernel, to interact transparently with a hardware device, and usually provides the interrupt handling necessary for any asynchronous time-dependent hardware interfacing needs.


The non-volatile storage 416, which may be referred to as “secondary memory,” is often a magnetic hard disk, an optical disk, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory 412 during execution of software in the computer 402. The non-volatile storage 416 may include a block-based media device. The terms “machine-readable medium” or “computer-readable medium” include any known or convenient storage device that is accessible by the processor 408 and also encompasses a carrier wave that encodes a data signal.


The computer system 400 is one example of many possible computer systems which have different architectures. For example, personal computers based on an Intel microprocessor often have multiple buses, one of which can be an I/O bus for the peripherals and one that directly connects the processor 408 and the memory 412 (often referred to as a memory bus). The buses are connected together through bridge components that perform any necessary translation due to differing bus protocols.


Network computers are another type of computer system that can be used in conjunction with the teachings provided herein. Network computers do not usually include a hard disk or other mass storage, and the executable programs are loaded from a network connection into the memory 412 for execution by the processor 408. A Web TV system, which is known in the art, is also considered to be a computer system, but it may lack some of the features shown in FIG. 4, such as certain input or output devices. A typical computer system will usually include at least a processor, memory, and a bus coupling the memory to the processor.


The computer system 400 may be controlled by an operating system (OS). An OS is a software program—used on most, but not all, computer systems—that manages the hardware and software resources of a computer. Typically, the OS performs basic tasks such as controlling and allocating memory, prioritizing system requests, controlling input and output devices, facilitating networking, and managing files. Examples of operating systems for personal computers include Microsoft Windows®, Linux, and Mac OS®. Delineating between the OS and application software is sometimes rather difficult. Fortunately, delineation is not necessary to understand the techniques described herein, since any reasonable delineation should suffice.


The lowest level of an OS may be its kernel. The kernel is typically the first layer of software loaded into memory when a system boots or starts up. The kernel provides access to various common core services to other system and application programs.


As used herein, algorithmic descriptions and symbolic representations of operations on data bits within a computer memory are believed to most effectively convey the techniques to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.


It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.


An apparatus for performing techniques described herein may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, by way of example but not limitation, read-only memories (ROMs), RAMs, EPROMs, EEPROMs, magnetic or optical cards, any type of disk including floppy disks, optical disks, CD-ROMs, DVDs, and magnetic-optical disks, or any known or convenient type of media suitable for storing electronic instructions.


The algorithms and displays presented herein are not inherently related to any particular computer architecture. The techniques may be implemented using any known or convenient programming language, whether high level (e.g., C/C++) or low level (e.g., assembly language), and whether interpreted (e.g., Perl), compiled (e.g., C/C++), or Just-In-Time (JIT) compiled from bytecode (e.g., Java). Any known or convenient computer, regardless of architecture, should be capable of executing machine code compiled or otherwise assembled from any language into machine code that is compatible with the computer's architecture.



FIG. 5 depicts an example of a secure system 500 suitable for implementation of the techniques described above with reference to FIGS. 1-3. A typical secure system 500 may be a game console, a media player, an embedded secure device, a “conventional” PC with a secure processor, or some other computer system that includes a secure processor.


In the example of FIG. 5, the secure system 500 includes a secure processor 502, an OS 504, ticket services 506, a calling application 508, and protected memory 510. In the example of FIG. 5, the OS 504 includes a security kernel 514, which in turn includes a key store 516, an encryption/decryption engine 517, and a security API 518. It should be noted that one or more of the described components, or portions thereof, may reside in the protected memory 510, or in unprotected memory (not shown).


It should further be noted that the security kernel 514 is depicted as residing inside the OS 504 by convention only. It may or may not actually be part of the OS 504, and could exist outside of an OS or on a system that does not include an OS. For the purposes of illustrative simplicity, it is assumed that the OS 504 is capable of authentication. In an embodiment, the ticket services 506 may also be part of the OS 504. This may be desirable because loading the ticket services 506 with authentication can improve security. Thus, in such an embodiment, the OS 504 is loaded with authentication and includes the ticket services 506.


For illustrative simplicity, protected memory is represented as a single memory. However, protected memory may include protected primary memory, protected secondary memory, and/or secret memory. It is assumed that known or convenient mechanisms are in place to ensure that memory is protected. The interplay between primary and secondary memory and/or volatile and non-volatile storage is known, so a distinction between the various types of memory and storage is not drawn with reference to FIG. 5.


The ticket services 506 may be thought of as, for example, “digital license validation services” and, in a non-limiting embodiment, may include known or convenient procedures associated with license validation. For example, the ticket services 506 may include procedures for validating digital licenses, PKI validation procedures, etc. In the example of FIG. 5, the ticket services 506 can validate a ticket from the calling application 508. In operation, the ticket services 506 obtains the ticket from the calling application 508 and proceeds to validate the ticket.


It is possible that the ticket is personalized. In that case, the device private key (programmed as discussed above) could be used to compute a secret shared encryption key with which the ticket is decrypted. The ticket may or may not be obtained using an Internet download mechanism and stored on re-writable flash memory.
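
One conventional way to realize such a personalized ticket, offered here only as an assumption since the text does not spell out the derivation, is an ECDH-style key agreement between the device's programmed private key and the ticket issuer's public key, with the shared secret run through a KDF to produce the ticket encryption key. A sketch using the pyca/cryptography library:

```python
# Hypothetical personalized-ticket sketch: issuer and device derive the
# same shared encryption key (ECDH + HKDF), and the device decrypts the
# ticket with it. Key names and the ticket payload are placeholders.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

device_key = ec.generate_private_key(ec.SECP256R1())  # stands in for the NV private key
issuer_key = ec.generate_private_key(ec.SECP256R1())  # ticket server's key pair

def ticket_key(shared_secret: bytes) -> bytes:
    return HKDF(hashes.SHA256(), 16, salt=None, info=b"ticket-key").derive(shared_secret)

# Issuer side: encrypt the ticket payload under the shared key.
k_issuer = ticket_key(issuer_key.exchange(ec.ECDH(), device_key.public_key()))
nonce = os.urandom(12)
ticket = nonce + AESGCM(k_issuer).encrypt(nonce, b"rights: play title 42", None)

# Device side: recompute the same key from its private key and decrypt.
k_device = ticket_key(device_key.exchange(ec.ECDH(), issuer_key.public_key()))
print(AESGCM(k_device).decrypt(ticket[:12], ticket[12:], None))
```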


In an embodiment, the security kernel 514 may be loaded at start-up. In another embodiment, a portion of the security kernel may be loaded at start-up, and the remainder loaded later. An example of this technique is described in application Ser. No. 10/360,827 entitled “Secure and Backward-Compatible Processor and Secure Software Execution Thereon,” which was filed on Feb. 7, 2003, by Srinivasan et al., and which is incorporated by reference. Any known or convenient technique may be used to load the security kernel 514 in a secure manner.


The key store 516 is a set of storage locations for keys. The key store 516 may be thought of as an array of keys, though the data structure used to store the keys is not critical. Any applicable known or convenient structure may be used to store the keys. In a non-limiting embodiment, the key store 516 is initialized with static keys, but variable keys are not initialized (or are initialized to a value that is not secure). For example, some of the key store locations are pre-filled with trusted values (e.g., a trusted root key) as part of the authenticated loading of the security kernel 514. The private key in the non-volatile memory could be retrieved and stored in the key store for future use.


The encryption/decryption engine 517 is, in an embodiment, capable of both encryption and decryption. For example, in operation, an application may request of the security API 518 a key handle that the application can use for encryption. The encryption/decryption engine 517 may be used to encrypt data using the key handle. Advantageously, although the security API 518 provides the key handle in the clear, the key itself never leaves the security kernel 514.


The security API 518 is capable of performing operations using the keys in the key store 516 without bringing the keys out into the clear (i.e., the keys do not leave the security kernel 514, or they leave the security kernel 514 only when encrypted). The security API 518 may include services to create, populate, and use keys (and potentially other security material) in the key store 516. In an embodiment, the security API 518 also provides access to internal secrets and non-volatile data, including secret keys and the device private key. For example, the device private key might be stored in the key store and used by the security API. One API call could be used to return a device certificate (using an algorithm discussed herein to generate the certificate). Another API call could be constructed to use the private key to compute a shared key for decryption, or to use the private key to sign a message or certificate. Depending upon the implementation, the security API 518 may support AES and SHA operations using hardware acceleration.
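
A toy sketch of such a handle-based interface is shown below. It illustrates only the design principle, that callers hold opaque handles while key bytes never cross the API boundary, and is not the security API 518 itself; all names are hypothetical, and the pyca/cryptography AES-GCM primitive stands in for hardware-accelerated AES.

```python
# Hypothetical key-store / security-API sketch: the application receives an
# opaque handle; the key material stays inside the "kernel" object.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class SecurityKernel:
    def __init__(self):
        self._key_store = {}                       # handle -> key bytes (internal only)

    def create_key(self) -> int:
        handle = len(self._key_store) + 1          # the handle is returned in the clear
        self._key_store[handle] = AESGCM.generate_key(bit_length=128)
        return handle

    def encrypt(self, handle: int, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)
        ct = AESGCM(self._key_store[handle]).encrypt(nonce, plaintext, None)
        return nonce + ct                          # the key itself never crosses the API

kernel = SecurityKernel()
h = kernel.create_key()                            # the application holds only the handle
blob = kernel.encrypt(h, b"content key material")
print(h, blob.hex())
```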


In the example of FIG. 5, the ticket services 506 and the security API 518 may execute in a separate execution space for system security. In order to validate data blocks, the ticket services 506 may validate the ticket using data in the header. The ticket may include an encrypted key. The ticket services 506 decrypts the key using services in the security kernel 514 (e.g., the encryption/decryption engine 517).


In an embodiment, the encryption/decryption engine 517 uses secret common keys from the key store 516 to perform this decryption. In another embodiment, the ticket services 506 could use a device-personalized ticket obtained from flash or from the network (not shown), validate some rights to content, and then return the key. In any case, this process returns the key. The personalized ticket could be encrypted by a key that is a function of the device private key programmed in the non-volatile memory.


An example of data flow in the system 500 is provided for illustrative purposes as arrows 520-528. Receiving the certificate request at the ticket services 506 is represented by a certificate request arrow 520 from the calling application 508 to the ticket services 506.


Forwarding the certificate request from the ticket services 506 to the security API 518 is represented by a certificate request arrow 522. Within the security kernel 514, the public key and device certificate are constructed using the private key and signature data accessed from the key store 516. The access is represented by the private key/signature access arrow 524. The security API 518 returns a device certificate to the ticket services 506, as represented by the device certificate arrow 526, and the certificate is forwarded to the calling application 508, as represented by the device certificate arrow 528.



FIG. 6 depicts a flowchart 600 of an example of a method for manufacturing a secure device. This method and other methods are depicted as serially arranged modules. However, modules of the methods may be reordered, or arranged for parallel execution as appropriate. In the example of FIG. 6, the flowchart 600 begins at module 602 where a device ID is obtained. The device ID may be a serial number or some other unique identifier for the device.


In the example of FIG. 6, the flowchart 600 continues to module 604 where a pseudo-random number is provided for use as a small-signature private key for the device. Truly random numbers cannot be generated by a computer alone; a pseudo-random number generator or an external secured hardware true random number generator could serve the intended purpose. A small-signature private key may be, by way of example but not limitation, an elliptic curve private key, or some other private key with a relatively small footprint.


In the example of FIG. 6, the flowchart 600 continues to module 606 where a public key is computed from the private key using common parameters. For example, the public key may be computed as a scalar multiple of a base point, where the scalar is the private key.


In the example of FIG. 6, the flowchart 600 continues to module 608 where a fixed certificate structure is used to construct a certificate. In an embodiment, the fixed certificate structure may include at least the device ID, issuer name, and device public key. The certificate is signed using a small-signature algorithm to minimize the size of the signature; by way of example but not limitation, an elliptic curve signature algorithm such as elliptic curve DSA may be used.


In the example of FIG. 6, the flowchart 600 continues to module 610 where {device ID, private key, issuer ID, signature} is programmed into the non-volatile memory of the device. This set includes these four items because they provide sufficient security for most purposes, and the set has a relatively small footprint due to the relatively small size of the private key and signature. (The device ID and issuer ID also, presumably, have relatively small footprints.) In an embodiment, any other data that is needed to construct the device certificate, such as the public key, may be generated programmatically on demand. However, more items could be programmed into the non-volatile memory, or fewer, as appropriate for a given embodiment or implementation.


In the example of FIG. 6, the flowchart 600 continues to module 612 where a secret random number is programmed into the ROM of the device. The secret random number may be pseudo-randomly generated or arbitrarily assigned. This secret random number can be used to support secure pseudo-random number generation. In an alternative, the ROM may be replaced with some other known or convenient NV storage.
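
The sketch below walks through the FIG. 6 modules in software terms. It is a hedged illustration under stated assumptions: the pyca/cryptography library, the P-256 curve and ECDSA as the small-signature algorithm, and an ad hoc byte layout for the fixed certificate structure; an actual manufacturing flow would use its own formats and programming equipment.

```python
# Hypothetical FIG. 6 manufacturing sketch: generate an elliptic curve
# private key, compute the public key, sign a fixed certificate structure
# with the issuer key, and assemble the small record
# {device ID, private key, issuer ID, signature} plus a secret seed
# for programming into NV memory.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

issuer_id = b"MFG-ISSUER-01"                              # placeholder issuer name
issuer_key = ec.generate_private_key(ec.SECP256R1())      # manufacturing signing key

def manufacture(device_id: bytes) -> dict:
    device_key = ec.generate_private_key(ec.SECP256R1())  # module 604: private key
    pub_bytes = device_key.public_key().public_bytes(     # module 606: public key
        serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint)
    cert_struct = device_id + b"|" + issuer_id + b"|" + pub_bytes   # module 608
    signature = issuer_key.sign(cert_struct, ec.ECDSA(hashes.SHA256()))
    return {                                               # module 610: NV memory image
        "device_id": device_id,
        "private_key": device_key.private_numbers().private_value,
        "issuer_id": issuer_id,
        "signature": signature,
        "secret_seed": os.urandom(32),                     # module 612: secret random number
    }

nv_image = manufacture(b"DEV-0001")
print(sorted(nv_image))
```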



FIG. 7 depicts a flowchart 700 of an example of a method for construction of a secure certificate. Advantageously, the method enables the device having the non-volatile programmed key and required software to construct a full device certificate that can be used to validate the device. In the example of FIG. 7, the flowchart 700 starts at module 702 where a request for a device certificate is received from a calling application.


In the example of FIG. 7, the flowchart 700 continues to module 704 where {device ID, private key, issuer ID, signature} is read from non-volatile memory. In an embodiment, a security kernel module accesses and reads the non-volatile memory. An example of a security kernel module that is appropriate for this purpose is described in U.S. patent application Ser. No. 10/360,827 entitled “Secure and Backward-Compatible Processor and Secure Software Execution Thereon,” which was filed on Feb. 7, 2003, by Srinivasan et al., and/or in U.S. patent application Ser. No. 11/586,446 entitled “Secure Device Authentication System and Method,” which was filed on Oct. 24, 2006, by Srinivasan et al., both of which are incorporated by reference. However, any applicable known or convenient security kernel module could be used.


In the example of FIG. 7, the flowchart 700 continues to module 706 where the public key is computed from the private key and common parameters, if any. In an embodiment, the computation makes use of the same algorithm that was used in a manufacturing process, such as the method described with reference to FIG. 6, above. The public key may be computed in a security kernel.


In the example of FIG. 7, the flowchart 700 continues to module 708 where a device certificate is constructed from device ID, issuer ID, public key, signature, and common parameters. In an embodiment, a security kernel module is aware of the structure of the device certificate, as is used in a manufacturing process, such as the method described with reference to FIG. 6, above. Advantageously, the device certificate can be constructed on demand.


In the example of FIG. 7, the flowchart 700 continues to module 710 where the device certificate is provided to the calling application. The flowchart 700 ends when the device certificate is provided to the calling application. The method could be started again by another calling application (or by the same calling application if, for some reason, the device certificate were needed again).
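
Continuing the hypothetical FIG. 6 sketch above, the on-demand reconstruction of FIG. 7 could look as follows, under the same assumptions (pyca/cryptography, P-256, and the ad hoc certificate layout used there):

```python
# Hypothetical FIG. 7 sketch: rebuild the device certificate from the NV
# record programmed at manufacture time, computing the public key from
# the stored private scalar on demand.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

def build_device_certificate(nv_image: dict) -> bytes:
    # Module 704: read {device ID, private key, issuer ID, signature} from NV memory.
    private_key = ec.derive_private_key(nv_image["private_key"], ec.SECP256R1())
    # Module 706: compute the public key from the private key and common parameters.
    pub_bytes = private_key.public_key().public_bytes(
        serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint)
    # Module 708: construct the certificate from device ID, issuer ID, public key,
    # and the stored signature.
    cert_struct = nv_image["device_id"] + b"|" + nv_image["issuer_id"] + b"|" + pub_bytes
    # Module 710: return the device certificate to the calling application.
    return cert_struct + b"|" + nv_image["signature"]

device_cert = build_device_certificate(nv_image)  # nv_image from the FIG. 6 sketch
print(len(device_cert))
```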


As used herein, the term “content” is intended to broadly include any data that can be stored in memory.


As used herein, the term “embodiment” means an embodiment that serves to illustrate by way of example but not limitation.


It will be appreciated by those skilled in the art that the preceding examples and embodiments are exemplary and not limiting to the scope of the present invention. It is intended that all permutations, enhancements, equivalents, and improvements thereto that are apparent to those skilled in the art upon a reading of the specification and a study of the drawings are included within the true spirit and scope of the present invention. It is therefore intended that the following appended claims include all such modifications, permutations, and equivalents as fall within the true spirit and scope of the present invention.

Claims
  • 1. A server comprising: a number generator; a certificate request module; a certificate verification module; an interface, coupled to the number generator, the certificate request module, and the certificate verification module, wherein, in operation: the number generator generates a first number; the certificate request module generates a request for a device certificate; the first number and the request for a device certificate are sent via the interface; a response that includes a second number, a second signature that is generated using the second number, and a device certificate computed as a function of a device identifier (ID), an issuer ID, the second signature, and a public key are received at the interface; and the certificate verification module validates the device certificate and the second signature, and verifies that the first number and the second number match.
  • 2. The server of claim 1, wherein the public key is computed from a private key.
  • 3. The server of claim 2, wherein the private key is an elliptic curve private key.
  • 4. The server of claim 1, wherein the interface receives the signature and the certificate validation module validates the signature using the device certificate.
  • 5. The server of claim 1, wherein the number generator is a pseudo-random number generator.
  • 6. The server of claim 1, wherein the number generator is a true random number generator.
  • 7. The server of claim 1, wherein, in operation, the certificate verification module validates the device certificate using a trusted certificate chain.
  • 8. The server of claim 1, further comprising a certificate database, wherein the device certificate is imported to the certificate database if validated by the certificate verification module.
  • 9. A server comprising: a means for generating a first number; a means for generating a request for a device certificate; a means for sending the first number and the request for a device certificate; a means for receiving a response that includes a second number, a second signature that is generated using the second number, and a device certificate computed as a function of a device identifier (ID), an issuer ID, the second signature, and a public key; a means for validating the device certificate and the second signature; a means for verifying that the first number and the second number match.
  • 10. The server of claim 9, further comprising: a means for receiving the signature; a means for validating the signature using the device certificate.
  • 11. The server of claim 9, further comprising generating a pseudo-random number as the first number.
  • 12. The server of claim 9, further comprising generating a true random number as the first number.
  • 13. The server of claim 9, further comprising validating the device certificate using a trusted certificate chain.
  • 14. The server of claim 9, further comprising importing the device certificate to a certificate database after the device certificate is validated.
  • 15. A computer program product including memory storing instructions and a processor for executing the instructions in memory: a processor; memory storing modules having instructions, coupled to the processor, including: a number generation module; a certificate request module; a certificate verification module; wherein, in operation, the processor executes the instructions such that: the number generation module generates a first number; the certificate request module generates a request for a device certificate and sends the first number and the request for a device certificate; the certificate verification module: receives a response that includes a second number, a second signature that is generated using the second number, and a device certificate computed as a function of a device identifier (ID), an issuer ID, the second signature, and a public key; validates the device certificate and the second signature; and verifies that the first number and the second number match.
  • 16. The computer program product of claim 15, wherein the certificate verification module receives the signature and validates the signature using the device certificate.
  • 17. The computer program product of claim 15, wherein the number generation module is a pseudo-random number generation module.
  • 18. The computer program product of claim 15, wherein the number generation module is a true random number generation module.
  • 19. The computer program product of claim 15, wherein the certificate verification module validates the device certificate using a trusted certificate chain.
  • 20. The computer program product of claim 15, further comprising a certificate database, wherein the device certificate is imported to the certificate database if validated by the certificate verification module.
CROSS-REFERENCE TO RELATED APPLICATION

This Divisional Application claims priority to U.S. patent application Ser. No. 11/601,323, filed Nov. 16, 2006, entitled METHOD FOR PROGRAMMING ON-CHIP NON-VOLATILE MEMORY IN A SECURE PROCESSOR, AND A DEVICE SO PROGRAMMED, which claims priority to U.S. Provisional Patent Application No. 60/857,840, filed Nov. 9, 2006, entitled METHOD FOR PROGRAMMING ON-CHIP NON-VOLATILE MEMORY IN A SECURE PROCESSOR, AND A DEVICE SO PROGRAMMED, and each of the aforementioned applications is incorporated by reference in its entirety.

Non-Patent Literature Citations (130)
Entry
Search Report and Written Opinion mailed Oct. 28, 2008 from International Serial No. PCT/US2007/020074 filed Sep. 13, 2007.
Menezes, Alfred J. et al., “Handbook of Applied Cryptography,” ISBN 0849385237, pp. 397-402, Oct. 1996.
Office Action mailed Mar. 27, 2012 from U.S. Appl. No. 12/576,356, filed Oct. 9, 2009.
Schneier, Bruce, “Applied Cryptography: Protocols, Algorithms and Source Code in C,” 2nd edition, ISBN 0471128457, p. 175 (1996).
Arbaugh, William A., et al., “A Secure and Reliable Bootstrap Architecture,” University of Pennsylvania (1996).
Aziz, Ashar, et al., “Privacy and Authentication for Wireless Local Area Networks,” Sun Microsystems, Inc., (1993).
Bharadvaj et al., Proceedings of the 17th IEEE Symposium on Reliable Distributed Systems, pp. 118-123 (1998).
Davida, George I., et al., “Defending Systems Against Viruses through Cryptographic Authentication,” IEEE pp. 312-318 (1989).
Diffie, Whitfield, “The First Ten Years of Public-Key Cryptography,” Proceedings of the IEEE, vol. 96, No. 5, pp. 560-577 (May 1988).
Diffie, Whitfield, et al., “New Directions in Cryptography,” (1976).
Dodson, David A, “Gain Some Perspective With Innovation's GBA to TV Converter” Jun. 6, 2002, http://www.viewonline.com/page/articles/innovationsGBATV.htm>, Accessed Mar. 29, 2008.
Dyer, Joan G., et al., “Building the IBM 4758 Secure Coprocessor,” Computer, pp. 2-12 (Oct. 2001).
Frantzen, Mike, et al., “StackGhost: Hardware Facilitated Stack Protection,” Proceedings of the 10th USENIX Security Symposium (2001).
Fujimura, Ko., et al., “Digital-Ticket-Controlled Digital Ticket Circulation,” Proceedings of the 8th USENIX Security Symposium (1999).
Game Boy, <http://en.wikipedia.org/wiki/Game—Boy—Advanced> Accessed Mar. 30, 2008.
Game Boy Advance, <http://en.wikipedia.org/wiki/Game—Boy—Advanced> Accessed Mar. 30, 2008.
Game Cube, <http://en.wikipedia.org/wiki/Game—Cube> Accessed Mar. 28, 2008.
Gligor, Virgil D., “20 Years of Operating Systems Security,” University of Maryland, 1999.
Gutmann, Peter, “The Design of a Cryptographic Security Architecture,” Proceedings of the 8th USENIX Security Symposium (1999).
Hori et al., Computer Networks, 33(1-6):197-211 (2000).
Itoi, Naomaru, “SC-CFS: Smartcard Secured Cryptographic File System,” Proceedings of the 10th USENIX Security Symposium (2001).
Jaeger, Trent, et al., “Building Systems that Flexibly Control Downloaded Executable Context,” Proceedings of the 6th USENIX UNIX Security Symposium (1996).
Karger, Paul A., “New Methods for Immediate Revocation,” IEEE (1989).
Kent, Stephen Thomas, “Protecting Externally Supplied Software in Small Computers,” Massachusetts Institute of Technology (1980).
Kogan, Noam, et al., “A Practical Revocation Scheme for Broadcast Encryption Using Smart Cards,” Proceedings of the 2003 IEEE Symposium on Security and Privacy (2003).
Lampson, Butler, et al., “Authentication in Distributed Systems Theory and Practice,” Digital Equipment Corporation (1992).
Lotspiech, Jeffrey, et al., “Anonymous Trust: Digital Rights Management Using Broadcast Encryption,” Proceedings of the IEEE, vol. 92, No. 6, pp. 898-909 (Jun. 2004).
Lotspiech, Jeffrey, et al., “Broadcast Encryption's Bright Future,” Computer, pp. 57-63 (Aug. 2002).
Monrose, et al., “Toward Speech-Generated Cryptographic Keys on Resource Constrained Devices,” Proceedings of the 11th USENIX Security Symposium (2002).
Neboyskey, “A leap Forward: Why States Should Ratify the Uniform Computer Information Transaction Act”, May 2000, Federal Communications Law Journal, v52n3, pp. 793-820.
Neumann, P.G., et al., “A Provably Secure Operating System,” Stanford Research Institute (1975).
Nonnenmacher, Jorg et al., "Asynchronous Multicast Push: AMP," 13th International Conference on Computer Communication, Nov. 18-21, 1997, pp. 419-430, 13, Proceedings of International Conference on Computer Communication, Cannes.
Palmer, Elaine R., “An Introduction to Citadel—A Secure Crypto Coprocessor for Workstations,” IBM Research Division (1992).
Peterson, David S., et al., “A Flexible Containment Mechanism for Executing Untrusted Code,” Proceedings of the 11th USENIX Security Symposium (2002).
Rodriguez, Pablo et al. Improving the WWW: Caching or Multicast? Computer Networks and ISDN Systems, Nov. 25, 1998, 30(22-23):2223-2243.
Rubin, Aviel D., “Trusted Distribution of Software Over the Internet,” Internet Society 1995 Symposium on Network and Distributed System Security.
Smith, Sean W., “Secure Coprocessing Applications and Research Issues,” Los Alamos Unclassified Release LA-UR-96-2805 (1996).
Smith, Sean W., et al., “Building a High-Performance, Programmable Secure Coprocessor,” Secure Systems and Smart Cards, IBM T.J. Watson Research Center, NY (1998).
Smith, Sean W., et al., “Using a High-Performance, Programmable Secure Coprocessor,” Proceedings of the Second International Conference on Financial Cryptography, 1997.
Smith, Sean, et al., “Validating a High-Performance, Programmable Secure Coprocessor,” Secure Systems and Smart Cards, IBM T.J. Watson Research Center, NY, Oct. 1999.
Stefik, Mark, “Trusted Systems,” Scientific American, pp. 78-81 (Mar. 1997).
Traylor, Scott, “Graphic Resolution and File Sizes”, http://www.traylormm.com/harvard/53graphicresolution/, no date provided.
Tygar, J.D. et al., “Dyad: A System for Using Physically Secure Coprocessors,” School of Computer Science, Carnegie Mellon University (1991).
Tygar, J.D., et al., “Strongbox: A System for Self-Securing Programs,” pp. 163-197, 1991.
Van Doorn, Leendert, “A Secure Java™ Virtual Machine,” Proceedings of the 9th USENIX Security Symposium (2000).
Wang, Zheng et al., "Prefetching in World Wide Web," Global Telecommunications Conference, Nov. 18-22, 1996, pp. 28-32, London.
White, et al., “ABYSS: An Architecture for Software Protection,” IEEE Transactions on Software Engineering, vol. 16, No. 6, pp. 619-629(1990).
White, Steve R., et al., “Introduction to the Citadel Architecture: Security in Physically Exposed Environments,” IBM Research Division (1991).
Wobber, Edward, et al., “Authentication in the Taso Operating System,” Digital Systems Research Center (1993).
Yee, B., “Using Secure Coprocessors,” PhD Thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA (1994).
Yee, B., et al., “Secure Coprocessors in Electronic Commerce Applications,” Proceedings of the First USENIX Workshop on Electronic Commerce (1995).
International Search Report of PCT Application No. PCT/US04/03413, Jun. 22, 2007, 2 pages.
Written Opinion of PCT Application No. PCT/US04/03413, Jun. 22, 2007, 3 pages.
International Search Report of PCT Application No. PCT/US04/37050, Jun. 14, 2005, 1 page.
Written Opinion of PCT Application No. PCT/US04/37050, Jun. 14, 2005, 3 pages.
International Search Report of PCT Application No. PCT/US2004/040486, May 8, 2007, 1 page.
Written Opinion of PCT Application No. PCT/US2004/040486, May 8, 2007, 8 pages.
International Search Report of PCT Application No. PCT/US2007/010797, Aug. 5, 2008, 1 page.
Written Opinion of PCT Application No. PCT/US2007/010797, Aug. 5, 2008, 3 pages.
International Search Report of PCT Application No. PCT/US2007/010601, Apr. 24, 2008, 1 page.
Written Opinion of PCT Application No. PCT/US2007/010601, Apr. 24, 2008, 4 pages.
International Search Report of PCT Application No. PCT/US07/19862, May 28, 2008, 1 page.
Written Opinion of PCT Application No. PCT/US07/19862, May 28, 2008, 6 pages.
International Search Report of PCT Application No. PCT/US2007/020074, Oct. 8, 2008, 3 pages.
Written Opinion of PCT Application No. PCT/US07/19862, Oct. 8, 2008, 6 pages.
Co-pending U.S. Appl. No. 10/360,827, filed Feb. 7, 2003.
Co-pending U.S. Appl. No. 11/048,515, filed Jan. 31, 2005.
Co-pending U.S. Appl. No. 10/463,224, filed Jun. 16, 2003.
Co-pending U.S. Appl. No. 10/703,149, filed Nov. 5, 2003.
Co-pending U.S. Appl. No. 11/203,357, filed Aug. 12, 2005.
Co-pending U.S. Appl. No. 11/203,358, filed Aug. 12, 2005.
Co-pending U.S. Appl. No. 12/330,487, filed Dec. 8, 2008.
Co-pending U.S. Appl. No. 10/727,332, filed Dec. 2, 2003.
Co-pending U.S. Appl. No. 11/416,361, filed May 1, 2006.
Co-pending U.S. Appl. No. 12/281,977, filed Jul. 13, 2009.
Co-pending U.S. Appl. No. 11/586,446, filed Oct. 24, 2006.
Co-pending U.S. Appl. No. 12/576,243, filed Oct. 9, 2009.
Co-pending U.S. Appl. No. 11/601,323, filed Nov. 16, 2006.
Co-pending U.S. Appl. No. 12/576,356, filed Oct. 9, 2009.
Co-pending U.S. Appl. No. 12/576,904, filed Oct. 9, 2009.
Co-pending U.S. Appl. No. 12/507,050, filed Jul. 21, 2009.
Notice of Allowance Mailed Aug. 28, 2007 in Co-Pending U.S. Appl. No. 10/360,827, filed Feb. 7, 2003.
Final Office Action Mailed Mar. 8, 2007 in Co-Pending U.S. Appl. No. 10/360,827, filed Feb. 7, 2003.
Non-Final Office Action Mailed Sep. 7, 2006 in Co-Pending U.S. Appl. No. 10/360,827, filed Feb. 7, 2003.
Notice of Allowance Mailed Dec. 20, 2007 in Co-Pending U.S. Appl. No. 11/048,515, filed Jan. 31, 2005.
Non-Final Office Action Mailed Sep. 7, 2007 in Co-Pending U.S. Appl. No. 11/048,515, filed Jan. 31, 2005.
Final Office Action Mailed Mar. 8, 2007 in Co-Pending U.S. Appl. No. 11/048,515, filed Jan. 31, 2005.
Non-Final Office Action Mailed Sep. 7, 2006 in Co-Pending U.S. Appl. No. 11/048,515, filed Jan. 31, 2005.
Final Office Action Mailed Apr. 10, 2009 in Co-pending U.S. Appl. No. 10/463,224, filed Jun. 16, 2003.
Non-Final Office Action Mailed Oct. 3, 2008 in Co-pending U.S. Appl. No. 10/463,224, filed Jun. 16, 2003.
Final Office Action Mailed Apr. 28, 2008 in Co-pending U.S. Appl. No. 10/463,224, filed Jun. 16, 2003.
Non-Final Office Action Mailed Apr. 18, 2007 in Co-pending U.S. Appl. No. 10/463,224, filed Jun. 16, 2003.
Advisory Action Mailed Jan. 18, 2008 in Co-pending U.S. Appl. No. 10/703,149, filed Nov. 5, 2003.
Final Office Action Mailed Sep. 10, 2007 in Co-pending U.S. Appl. No. 10/703,149, filed Nov. 5, 2003.
Non-Final Office Action Mailed Mar. 22, 2007 in Co-pending U.S. Appl. No. 10/703,149, filed Nov. 5, 2003.
Final Office Action Mailed May 4, 2006 in Co-pending U.S. Appl. No. 10/703,149, filed Nov. 5, 2003.
Non-Final Office Action Mailed Nov. 2, 2005 in Co-pending U.S. Appl. No. 10/703,149, filed Nov. 5, 2003.
Notice of Allowance Mailed Oct. 3, 2008 in Co-pending U.S. Appl. No. 11/203,357, filed Aug. 12, 2005.
Final Office Action Mailed Oct. 31, 2007 in Co-pending U.S. Appl. No. 11/203,357, filed Aug. 12, 2005.
Non-Final Office Action Mailed May 8, 2007 in Co-pending U.S. Appl. No. 11/203,357, filed Aug. 12, 2005.
Final Office Action Mailed Dec. 11, 2006 in Co-pending U.S. Appl. No. 11/203,357, filed Aug. 12, 2005.
Non-Final Office Action Mailed Jun. 14, 2006 in Co-pending U.S. Appl. No. 11/203,357, filed Aug. 12, 2005.
Final Office Action Mailed Jun. 18, 2007 in Co-pending U.S. Appl. No. 11/203,358, filed Aug. 12, 2005.
Non-Final Office Action Mailed Dec. 14, 2006 in Co-pending U.S. Appl. No. 11/203,358, filed Aug. 12, 2005.
Non-Final Office Action Mailed May 18, 2006 in Co-pending U.S. Appl. No. 11/203,358, filed Aug. 12, 2005.
Non-Final Office Action Mailed Apr. 1, 2010 in Co-pending U.S. Appl. No. 12/330,487, filed Dec. 8, 2008.
Notice of Allowance Mailed Apr. 6, 2010 in Co-pending U.S. Appl. No. 10/727,332, filed Dec. 2, 2003.
Advisory Action Mailed May 11, 2009 in Co-pending U.S. Appl. No. 10/727,332, filed Dec. 2, 2003.
Final Office Action Mailed Feb. 3, 2009 in Co-pending U.S. Appl. No. 10/727,332, filed Dec. 2, 2003.
Non-Final Office Action Mailed Jul. 9, 2008 in Co-pending U.S. Appl. No. 10/727,332, filed Dec. 2, 2003.
Final Office Action Mailed Nov. 26, 2007 in Co-pending U.S. Appl. No. 10/727,332, filed Dec. 2, 2003.
Non-Final Office Action Mailed May 9, 2007 in Co-pending U.S. Appl. No. 10/727,332, filed Dec. 2, 2003.
Final Office Action Mailed Nov. 9, 2006 in Co-pending U.S. Appl. No. 10/727,332, filed Dec. 2, 2003.
Non-Final Office Action Mailed Mar. 29, 2006 in Co-pending U.S. Appl. No. 10/727,332, filed Dec. 2, 2003.
Non-Final Office Action Mailed Mar. 13, 2009 in Co-pending U.S. Appl. No. 11/416,361, filed May 1, 2006.
Notice of Allowance Mailed Jul. 24, 2009 in Co-pending U.S. Appl. No. 11/586,446, filed Oct. 24, 2006.
Final Office Action Mailed Jan. 12, 2009 in Co-pending U.S. Appl. No. 11/586,446, filed Oct. 24, 2006.
Non-Final Office Action Mailed May 21, 2008 in Co-pending U.S. Appl. No. 11/586,446, filed Oct. 24, 2006.
Notice of Allowance Mailed Aug. 13, 2009 in Co-pending U.S. Appl. No. 11/601,323, filed Nov. 16, 2006.
Final Office Action Mailed Apr. 30, 2009 in Co-pending U.S. Appl. No. 11/601,323, filed Nov. 16, 2006.
Non-Final Office Action Mailed Sep. 12, 2008 in Co-pending U.S. Appl. No. 11/601,323, filed Nov. 16, 2006.
Related Publications (1)
Number Date Country
20100095125 A1 Apr 2010 US
Provisional Applications (1)
Number Date Country
60857840 Nov 2006 US
Divisions (1)
Number Date Country
Parent 11601323 Nov 2006 US
Child 12576344 US