Protected computing environment

Information

  • Patent Grant
  • Patent Number
    9,189,605
  • Date Filed
    Monday, February 23, 2009
  • Date Issued
    Tuesday, November 17, 2015
Abstract
A method of establishing a protected environment within a computing device including validating a kernel component loaded into a kernel of the computing device, establishing a security state for the kernel based on the validation, creating a secure process and loading a software component into the secure process, periodically checking the security state of the kernel, and notifying the secure process when the security state of the kernel has changed.
Description
DESCRIPTION OF THE DRAWINGS

These and other features and advantages of the present example will be better understood from the following detailed description read in light of the accompanying drawings, wherein:



FIG. 1 is a block diagram showing a conventional media application processing media content operating in a conventional computing environment with an indication of an attack against the system.



FIG. 2 is a block diagram showing a trusted application processing media content and utilizing a protected environment that tends to be resistant to attacks.



FIG. 3 is a block diagram showing exemplary components of a trusted application that may be included in the protected environment.



FIG. 4 is a block diagram showing a system for downloading digital media content from a service provider that utilizes an exemplary trusted application utilizing a protected environment.



FIG. 5 is a block diagram showing exemplary attack vectors that may be exploited by a user or mechanism attempting to access media content and other data typically present in a computing environment in an unauthorized manner.



FIG. 6 is a flow diagram showing the process for creating and maintaining a protected environment that tends to limit unauthorized access to media content and other data.



FIG. 7 is a block diagram showing exemplary kernel components and other components utilized for creating an exemplary secure computing environment.



FIG. 8 and FIG. 9 are flow diagrams showing an exemplary process for loading kernel components to create an exemplary secure computing environment.



FIG. 10 is a block diagram showing a secure computing environment loading an application into an exemplary protected environment to form a trusted application that is typically resistant to attacks.



FIG. 11 is a flow diagram showing an exemplary process for creating a protected environment and loading an application into the protected environment.



FIG. 12 is a block diagram showing an exemplary trusted application utilizing an exemplary protected environment periodically checking the security state of the secure computing environment.



FIG. 13 is a flow diagram showing an exemplary process for periodically checking the security state of the secure computing environment.



FIG. 14 is a block diagram showing an exemplary computing environment in which the processes, systems and methods for establishing a secure computing environment including a protected environment may be implemented.







Like reference numerals are used to designate like elements in the accompanying drawings.


DETAILED DESCRIPTION

The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present examples may be constructed or utilized. The description sets forth the functions of the examples and the sequence of steps for constructing and operating the examples in connection with the examples illustrated. However, the same or equivalent functions and sequences may be accomplished by different examples.


Although the present examples are described and illustrated herein as being implemented in a computer operating system, the system described is provided as an example and not a limitation. As those skilled in the art will appreciate, the present examples are suitable for application in a variety of different types of computer systems.


Introduction



FIG. 1 is a block diagram showing a conventional media application 105 processing media content 106 operating in a conventional computing environment 100 with an indication of an attack 107 against the system 101. A conventional computing environment 100 may be provided by a personal computer (“PC”) or consumer electronics (“CE”) device 101 that may include operating system (“OS”) 102. Typical operating systems often partition their operation into a user mode 103, and a kernel mode 104. User mode 103 and kernel mode 104 may be used by one or more application programs 105. An application program 105 may be used to process media content 106 that may be transferred to the device 101 via some mechanism, such as a CD ROM drive, Internet connection or the like. An example of content 106 would be media files that may be used to reproduce audio and video information.


The computing environment 100 may typically include an operating system (“OS”) 102 that facilitates operation of the application 105, in conjunction with the one or more central processing units (“CPU”). Many operating systems 102 may allow multiple users to have access to the operation of the CPU. Multiple users may have ranges of access privileges typically ranging from those of a typical user to those of an administrator. Administrators typically have a range of access privileges to applications 105 running on the system, the user mode 103 and the kernel 104. Such a computing environment 100 may be susceptible to various types of attacks 107. Attacks may include not only outsiders seeking to gain access to the device 101 and the content 106 on it, but also attackers having administrative rights to the device 101 or other types of users having whatever access rights granted them.



FIG. 2 is a block diagram showing a trusted application 202 processing media content 106 and utilizing a protected environment 203 that tends to be resistant to attack 205. The term “trusted application”, as used here, may be defined as an application that utilizes processes operating in a protected environment such that they tend to be resistant to attack 205 and limit unauthorized access to any media content 106 or other data being processed. Thus, components or elements of an application operating in a protected environment are typically considered “trusted” as they tend to limit unauthorized access and tend to be resistant to attack. Such an application 202 may be considered a trusted application itself or it may utilize another trusted application to protect a portion of its processes and/or data.


For example, a trusted media player 202 may be designed to play media content 106 that is typically licensed only for use such that the media content 106 cannot be accessed in an unauthorized manner. Such a trusted application 202 may not operate and/or process the media content 106 unless the computing environment 200 can provide the required level of security, such as by providing a protected environment 203 resistant to attack 205.


As used herein, the term “process” can be defined as an instance of a program (including executable code, machine instructions, variables, data, state information, etc.) residing and/or operating in a kernel space, user space and/or any other space of an operating system and/or computing environment.


A digital rights management system 204 or the like may be utilized with the protected environment 203. The use of a digital rights management system 204 is merely provided as an example and may not be utilized with a protected environment or a secure computing environment. Typically a digital rights management system utilizes tamper-resistant software (“TRS”) which tends to be expensive to produce and may negatively impact computing performance. Utilizing a trusted application 202 may minimize the amount of TRS functionality required to provide enhanced protection.


Various mechanisms known to those skilled in this technology area may be utilized in place of, in addition to, or in conjunction with a typical digital rights management system. These mechanisms may include, but are not limited to, encryption/decryption, key exchanges, passwords, licenses, and the like. Thus, digital right management as used herein may be a mechanism as simple as decrypting an encrypted media, utilizing a password to access data, or other tamper-resistant mechanisms. The mechanisms to perform these tasks may be very simple and entirely contained within the trusted application 202 or may be accessed via interfaces that communicate with complex systems otherwise distinct from the trusted application 202.



FIG. 3 is a block diagram showing exemplary components of a trusted application 202 that may be included in the protected environment 203. A trusted application 202 will typically utilize a protected environment 203 for at least a portion of its subcomponents 302-304. Other components 301 of the trusted application may not utilize a protected environment. Components 302-304 involved in the processing of media content or data that may call for an enhanced level of protection from attack or unauthorized access may operate within a protected environment 203. A protected environment 203 may be utilized by a single trusted application 202 or, possibly, by a plurality of trusted applications. Alternatively, a trusted application 202 may utilize a plurality of protected environments. A trusted application 202 may also couple to and/or utilize a digital rights management system 204.


In the example shown, source 302 and sink 303 are shown as part of a media pipeline 304 operating in the protected environment 203. A protected environment 203 tends to ensure that, once protected and/or encrypted content 309 has been received and decrypted, the trusted application 202 and its components prevent unauthorized access to the content 309.


Digital rights management 204 may provide a further avenue of protection for the trusted application 202 and the content 309 it processes. Through a system of licenses 308, device certificates 311, and other security mechanisms a content provider is typically able to have confidence that encrypted content 309 has been delivered to the properly authorized device and that the content 309 is used as intended.



FIG. 4 is a block diagram showing a system for downloading digital media content 410 from a service provider 407 to an exemplary trusted application 202 utilizing a protected environment 203. In the example shown the trusted application 202 is shown being employed in two places 401, 403. The trusted application 202 may be used in a CE device 401 or a PC 403. Digital media 410 may be downloaded via a service provider 407 and the Internet 405 for use by the trusted application 202. Alternatively, digital media may be made available to the trusted application via other mechanisms such as a network, a CD or DVD disk, or other storage media. Further, the digital media 410 may be provided in an encrypted form 309 requiring a system of decryption keys, licenses, certificates and/or the like which may take the form of a digital rights management system 204. The data or media content 410 provided to the trusted application may or may not be protected, i.e., encrypted or the like.


In one example, a trusted application 202 may utilize a digital rights management (“DRM”) system 204 or the like along with a protected environment 203. In this case, the trusted application 202 is typically designed to acknowledge, and adhere to, the content's usage policies by limiting usage of the content to that authorized by the content provider via the policies. Implementing this may involve executing code which typically interrogates content licenses and subsequently makes decisions about whether or not a requested action can be taken on a piece of content. This functionality may be provided, at least in part, by a digital rights management system 204. An example of a Digital Rights Management system is provided in U.S. patent application Ser. No. 09/290,363, filed Apr. 12, 1999, and U.S. patent application Ser. Nos. 10/185,527, 10/185,278, and 10/185,511, each filed on Jun. 28, 2002, which are hereby incorporated by reference in their entirety.
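The kind of license interrogation described above can be sketched as follows. This is a minimal illustration only; the license layout, field names, and action names are assumptions for the example and do not describe the DRM system referenced in the incorporated applications.

```python
# Hypothetical sketch of license interrogation: a trusted application asks
# whether a requested action on a piece of content is authorized by the
# content's usage policy before taking that action.

def is_action_permitted(license_policy, requested_action):
    """Return True only if the license authorizes the requested action."""
    if license_policy.get("expired", False):
        return False                      # an expired license authorizes nothing
    allowed = license_policy.get("allowed_actions", set())
    return requested_action in allowed

# Example license granting playback but not copying.
license_policy = {"allowed_actions": {"play", "pause"}, "expired": False}
```

A trusted application would consult such a check before each sensitive operation, refusing, for instance, a "copy" request that the license does not grant.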


Building a trusted application 202 that may be utilized in the CE device 401 or the PC 403 may include making sure the trusted application 202, which decrypts and processes the content 309, is “secure” from malicious attacks. Thus, a protected environment 203 typically refers to an environment that may not be easy to attack.


As shown, the trusted applications 202 operate in a consumer electronics device 401, which may be periodically synced to a PC 403 that also provides a trusted application. The PC 403 is in turn coupled 404 to the Internet 405. The Internet connection allows digital media 410 to be provided by a service provider 407. The service provider 407 may transmit licenses and encrypted media 406 over the Internet 405 to the trusted application 202. Once encrypted media is delivered and decrypted it may be susceptible to various forms of attack.


Protected Environments and Potential Attacks


A protected computing environment tends to provide an environment that limits hackers from gaining access to unauthorized content. A hacker may include hackers acting as a systems administrator. A systems administrator typically has full control of virtually all of the processes being executed on a computer, but this access may not be desirable. For example, if a system user has been granted a license to use a media file, it should not be acceptable for a system administrator different from the user to be able to access the media file. A protected environment tends to contribute to the creation of a process in which code that decrypts and processes content can operate without giving hackers access to the decrypted content. A protected environment may also limit unauthorized access to users of privilege, such as administrators, and/or any other user, who may otherwise gain unauthorized access to protected content. Protection may include securing typical user mode processes (FIG. 1, 103) and kernel mode processes (FIG. 1, 104) and any data they may be processing.


Processes operating in the kernel may be susceptible to attack. For example, in the kernel of a typical operating system objects are created, including processes, that may allow unlimited access by an administrator. Thus, an administrator, typically with full access privileges, may access virtually all processes.


Protected content may include policy or similar information indicating the authorized use of the content. Such policy may be enforced via a DRM system or other security mechanism. Typically, access to protected content is granted through the DRM system or other mechanism, which may enforce policy. However, a system administrator, with full access to the system, may alter the state of the DRM system or mechanism to disregard the content policy.


A protected environment tends to provide a protected space that restricts unauthorized access to media content being processed therein, even for high-privilege users such as an administrator. When a protected environment is used in conjunction with a system of digital rights management or the like, a trusted application may be created in which a content provider may feel that adequate security is provided to protect digital media from unauthorized access and may also protect the content's policy from being tampered with, along with any other data, keys or protection mechanisms that may be associated with the media content.


Attack Vectors


Current operating system (“OS”) architectures typically present numerous possible attack vectors that could compromise a media application and any digital media content being processed. For purposes of this example, attacks that may occur in an OS are grouped into two types: kernel mode attacks and user mode attacks.


The first type of attack is the kernel mode attack. Kernel mode is typically considered to be the trusted base of the operating system. The core of the operating system and most system and peripheral drivers may operate in kernel mode. Typically any piece of code running in the kernel is susceptible to intrusion by any other piece of code running in the kernel, which tends not to be the case for user mode. Also, code running in kernel mode typically has access to substantially all user mode processes. A CPU may also provide privilege levels for various code types. Kernel mode code is typically assigned the highest level of privilege by such a CPU, typically giving it full access to the system.


The second type of attack is the user mode attack. Code that runs in user mode may or may not be considered trusted code by the system depending on the level of privilege it has been assigned. This level of privilege may be determined by the user context or account in which it is operating. User mode code running in the context of an administrator account may have full access to the other code running on the system. In addition, code that runs in user mode may be partitioned to prevent one user from accessing another's processes.


These attacks may be further broken down into specific attack vectors. The protected environment is typically designed to protect against unauthorized access that may otherwise be obtained via one or more of these attack vectors. The protected environment may protect against attack vectors that may include: process creation, malicious user mode applications, loading malicious code into a process, malicious kernel code, invalid trust authorities, and external attack vectors.


Process creation is a possible attack vector. An operating system typically includes a “create process” mechanism that allows a parent process to create a child process. A malicious parent process may, by modifying the create process code or by altering the data it creates, make unauthorized modifications to the child process being created. This could result in compromising digital media that may be processed by a child process created by a malicious parent process.


Malicious user mode applications are a possible attack vector. An operating system typically includes administrator level privileges. Processes running with administrator privileges may have unlimited access to many operating system mechanisms and to nearly all processes running on the computer. Thus, in Windows for example, a malicious user mode application running with administrator privileges may gain access to many other processes running on the computer and may thus compromise digital media. Similarly, processes operating in the context of any user may be attacked by any malicious process operating in the same context.


Loading malicious code into a secure process is a possible attack vector. It may be possible to append or add malicious code to a process. Such a compromised process cannot be trusted and may obtain unauthorized access to any media content or other data being processed by the modified process.


Malicious kernel mode code is a possible attack vector. An operating system typically includes a “system level” of privilege. In Windows, for example, all code running in kernel mode is typically running as system and therefore may have maximum privileges. The usual result is that drivers running in kernel mode may have maximum opportunity to attack any user mode application, for example. Such an attack by malicious kernel mode code may compromise digital media.


Invalid trust authorities (TAs) are a possible attack vector. TAs may participate in the validation of media licenses and may subsequently “unlock” the content of a digital media. TAs may be specific to a media type or format and may be implemented by media providers or their partners. As such, TAs may be pluggable and/or may be provided as dynamic link libraries (“DLL”) or the like. A DLL may be loaded by executable code, including malicious code. In order for a TA to ensure that the media is properly utilized it needs to be able to ensure that the process in which it is running is secure. Otherwise the digital media may be compromised.


External attacks are another possible attack vector. There is a set of attacks that does not require malicious code running in a system in order to attack it. For instance, attaching a debugger to a process or a kernel debugger to the machine, looking for sensitive data in a binary file on a disk, etc., are all possible mechanisms for finding and compromising digital media or the processes that can access digital media.



FIG. 5 is a block diagram showing exemplary attack vectors 507-510 that may be exploited by a user or mechanism attempting to access media content and other data 500 typically present in a computing environment 100 in an unauthorized manner. A protected environment may protect against these attack vectors such that unauthorized access to trusted applications and the data they process is limited and resistance to attack is provided. Such attacks may be waged by users of the system or mechanisms that may include executable code. The media application 105 is shown at the center of the diagram and the attack vectors 507-510 tend to focus on accessing sensitive data 500 being stored and/or processed by the application 105.


A possible attack vector 509 may be initiated via a malicious user mode application 502. In the exemplary operating system architecture both the parent of a process, and any process with administrative privileges, typically have unlimited access to other processes, such as one processing media content, and the data they process. Such access to media content may be unauthorized. Thus a protected environment may ensure that a trusted application and the media content it processes are resistant to attacks by other user mode applications.


A possible attack vector 508 is the loading of malicious code 503 into a process 501. Having a secure process that is resistant to attacks from the outside is typically only as secure as the code running on the inside forming the process. Given that DLLs and other code are typically loaded into processes for execution, a mechanism that may ensure that the code being loaded is trusted to run inside a process before loading it into the process may be provided in a protected environment.


A possible vector of attack 510 is through malicious kernel mode code 504. Code running in kernel mode 104 typically has maximum privileges. The result may be that drivers running in kernel mode may have a number of opportunities to attack other applications. For instance, a driver may be able to access memory directly in another process. The result of this is that a driver could, once running, get access to a process's memory which may contain decrypted “encrypted media content” (FIG. 3, 309). Kernel mode attacks may be prevented by ensuring that the code running in the kernel is non-malicious code, as provided by this example.


A possible attack vector 507 is by external attacks 506 to the system 100. This group represents the set of attacks that typically do not require malicious code to be running on the system 100. For instance, attaching a debugger to an application and/or a process on the system, searching a machine for sensitive data, etc. A protected environment may be created to resist these types of attacks.


Creating and Maintaining Protected Environments



FIG. 6 is a flow diagram showing the process 600 for creating and maintaining a protected environment that tends to limit unauthorized access to media content and other data. The sequence 600 begins when a computer system is started 602 and the kernel of the operating system is loaded and a kernel secure flag is set 604 to an initial value. The process continues through the time that a protected environment is typically created and an application is typically loaded into it 606. The process includes periodic checking 608 via the protected environment that seeks to ensure the system remains secure through the time the secure process is needed.


The term “kernel”, as used here, is defined as the central module of an operating system for a computing environment, system or device. The kernel module may be implemented in the form of computer-executable instructions and/or electronic logic circuits. Typically, the kernel is responsible for memory management, process and task management, and storage media management of a computing environment. The term “kernel component”, as used here, is defined to be a basic controlling mechanism, module, computer-executable instructions and/or electronic logic circuit that forms a portion of the kernel. For example, a kernel component may be a “loader”, which may be responsible for loading other kernel components in order to establish a fully operational kernel.


To summarize the process of creating and maintaining a protected environment:


1. Block 602 represents the start-up of a computer system. This typically begins what is commonly known as the boot process and includes loading of an operating system from disk or some other storage media.


2. Typically one of the first operations during the boot process is the loading of the kernel and its components. This example provides the validation of kernel components and, if all are successfully validated as secure, the setting of a flag indicating the kernel is secure. This is shown in block 604.


3. After the computer system is considered fully operational a user may start an application such as a trusted media player which may require a protected environment. This example provides a secure kernel with an application operating in a protected environment, as shown in block 606.


4. Once the protected environment has been created and one or more of the processes of the application have been loaded into it and are operating, the trusted environment may periodically check the kernel secure flag to ensure the kernel remains secure, as shown in block 608. That is, from the point in time that the trusted application begins operation, a check may be made periodically to determine whether any unauthorized kernel components have been loaded. Such unauthorized kernel components could attack the trusted application or the data it may be processing. Therefore, if any such components are loaded, the kernel secure flag may be set appropriately.
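The four-step flow above can be sketched in miniature. This is an illustrative model only, under the assumption of a simple dictionary-based kernel state; the names (`boot`, `validate_component`, `KERNEL`) are hypothetical and do not correspond to an actual operating system API.

```python
# Sketch of process 600: validate kernel components at boot, set the kernel
# secure flag, and let the protected environment poll that flag thereafter.

KERNEL = {"secure": False, "components": []}

def validate_component(component):
    # Stand-in for the signature/certificate validation of step 2.
    return component.get("signed", False)

def boot(components):
    # Step 2: the kernel secure flag is set only if every component validates.
    KERNEL["components"] = list(components)
    KERNEL["secure"] = all(validate_component(c) for c in components)

def load_component(component):
    # Step 4: loading an unauthorized component later clears the secure flag.
    KERNEL["components"].append(component)
    if not validate_component(component):
        KERNEL["secure"] = False

def periodic_check():
    # The trusted application polls this; a change to False indicates the
    # security state has changed and the secure process may be notified.
    return KERNEL["secure"]
```

In this model, a trusted application created at step 3 would call `periodic_check()` on a timer and tear down or restrict its protected environment if the flag has been cleared.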


Loading and Validating a Secure Kernel



FIG. 7 is a block diagram showing exemplary kernel components 720-730 and other components 710-714 utilized in creating an exemplary secure computing environment 200. This figure shows a computer system containing several components 710-730 typically stored on a disk or the like, several of which are used to form the kernel of an operating system when a computer is started. Arrow 604 indicates the process of loading the kernel components into memory forming the operational kernel of the system. The loaded kernel 750 is shown containing its various components 751-762 and a kernel secure flag 790 indicating whether or not the kernel is considered secure for a protected environment. The kernel secure flag 790 being described as a “flag” is not meant to be limiting; it may be implemented as a boolean variable or as a more complex data structure or mechanism.


Kernel components 720-730 are typically “signed” and may include a certificate data 738 that may allow the kernel to validate that they are the components they claim to be, that they have not been modified and/or are not malicious. A signature block and/or certificate data 738 may be present in each kernel component 720-730 and/or each loaded kernel component 760, 762. The signature and/or certificate data 738 may be unique to each component. The signature and/or certificate data 738 may be used in the creation and maintenance of protected environments as indicated below. Typically a component is “signed” by its provider in such a way as to securely identify the source of the component and/or indicate whether it may have been tampered with. A signature may be implemented as a hash of the component's header or by using other techniques. A conventional certificate or certificate chain may also be included with a component that may be used to determine if the component can be trusted. The signature and/or certificate data 738 are typically added to a component before it is distributed for public use. Those skilled in the art will be familiar with these technologies and their use.
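The signing scheme described above, a signature over a component's header that the loader can later check, can be sketched as follows. As a simplification, the sketch uses a keyed hash (HMAC) in place of a real public-key signature and certificate chain; the key and component layout are assumptions for the example.

```python
import hashlib
import hmac

# Minimal stand-in for component signing: the provider computes a signature
# over the component header before distribution, and the kernel loader
# recomputes and compares it at load time to detect tampering.

SIGNING_KEY = b"provider-secret"  # a real scheme would use asymmetric keys

def sign_component(header: bytes) -> bytes:
    """Provider side: produce signature data 738 for a component header."""
    return hmac.new(SIGNING_KEY, header, hashlib.sha256).digest()

def validate_signature(header: bytes, signature: bytes) -> bool:
    """Loader side: True only if the header matches its signature."""
    expected = sign_component(header)
    return hmac.compare_digest(expected, signature)
```

A modified component changes the header bytes, so its stored signature no longer validates and the loader can refuse it, or clear the kernel secure flag 790.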


When a typical computer system is started or “booted” the operating system's loading process or “kernel loader” 751 may typically load the components of the kernel from disk or the like into a portion of system memory to form the kernel of the operating system. Once all of the kernel components are loaded and operational the computer and operating system are considered “booted” and ready for normal operation.


Kernel component #1720 thru kernel component #n 730, in the computing environment, may be stored on a disk or other storage media, along with a revocation list 714, a kernel dump flag 712 and a debugger 710 along with a debug credential 711. Arrow 604 indicates the kernel loading process which reads the various components 714-730 from their storage location and loads them into system memory forming a functional operating system kernel 750. The kernel dump flag 712 being described as a “flag” is not meant to be limiting; it may be implemented as a boolean variable or as a more complex data structure or mechanism.


The kernel loader 751 along with the PE management portion of the kernel 752, the revocation list 754 and two of the kernel components 720 and 722 are shown loaded into the kernel, the latter as blocks 760 and 762, along with an indication of space for additional kernel components yet to be loaded into the kernel, 764 and 770. Finally, the kernel 750 includes a kernel secure flag 790 which may be used to indicate whether or not the kernel 750 is currently considered secure. This illustration is provided as an example and is not intended to be limiting or complete. The kernel loader 751, the PE management portion of the kernel 752 and/or the other components of the kernel are shown as distinct kernel components for clarity of explanation but, in actual practice, may or may not be distinguishable from other portions of the kernel.


Included in the computing environment 200 may be a revocation list 714 that may be used in conjunction with the signature and certificate data 738 associated with the kernel components 760 and 762. This object 714 may retain a list of signatures, certificates and/or certificate chains that are no longer considered valid as of the creation date of the list 714. The revocation list 714 is shown loaded into the kernel as object 754. Such lists are maintained because a validly-signed and certified component, for example components 760 and 762, may later be discovered to have some problem. The system may use such a list 754 to check kernel components 720-730 as they are loaded, which may be properly signed and/or have trusted certificate data 738, but that may have subsequently been deemed untrustworthy. Such a revocation list 754 will typically include version information 755 so that it can more easily be identified, managed and updated as required.
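The revocation check described above, rejecting even a validly signed component whose certificate has since been deemed untrustworthy, can be sketched as follows. The dictionary layout and field names are illustrative assumptions, not an actual revocation list format.

```python
# Sketch of the revocation list 754: a versioned set of certificates that are
# no longer considered valid. A component loads only if it is signed AND its
# certificate does not appear on the list.

revocation_list = {
    "version": 3,                         # version information 755
    "revoked_certs": {"cert-0042", "cert-0101"},
}

def component_loadable(component, rev_list):
    """True only for a signed component whose certificate is not revoked."""
    if not component.get("signed", False):
        return False
    return component.get("cert_id") not in rev_list["revoked_certs"]
```

Because the list carries a version, the system can compare versions when updating it and always check components against the most recent revocations available.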


Another component of the system that may impact kernel security is a debugger 710. Debuggers may not typically be considered a part of the kernel but may be present in a computing environment 200. Debuggers, including those known as kernel debuggers, system analyzers, and the like, may have broad access to the system and the processes running on the system along with any data. A debugger 710 may be able to access any data in a computing environment 200, including media content that should not be accessed in a manner other than that authorized. On the other hand, debugging is typically a part of developing new functionality and it typically is possible to debug within protected environments the code intended to process protected media content. A debugger 710 may thus include debug credentials 711 which may indicate that the presence of the debugger 710 on a system is authorized. Thus detection of the presence of a debugger 710 along with any accompanying credentials 711 may be a part of the creation and maintenance of protected environments (FIG. 6, 600).


The computing environment 200 may include a kernel dump flag 712. This flag 712 may be used to indicate how much of kernel memory is available for inspection in case of a catastrophic system failure. Such kernel dumps may be used for postmortem debugging after such a failure. If such a flag 712 indicates that substantially all memory is available for inspection upon a dump, then the kernel 750 may be considered insecure, as a hacker could run an application which exposes protected media in system memory and then force a catastrophic failure condition, which may result in the memory being available for inspection, including the memory containing the exposed media content. Thus a kernel dump flag 712 may be used in the creation and maintenance of protected environments (FIG. 6, 600).
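The dump-flag rule reduces to a single comparison: if a full memory dump is possible on failure, exposed media could be inspected post-mortem, so the kernel is treated as insecure. The constant and function names below are illustrative assumptions.

```python
# Minimal sketch of the kernel dump flag check (cf. flag 712).
# Flag values and the function name are hypothetical.

FULL_DUMP = "full"        # substantially all of kernel memory dumped on failure
MINIMAL_DUMP = "minimal"  # only a limited region dumped

def kernel_secure_given_dump_flag(dump_flag):
    # A hacker could expose protected media in memory, force a crash,
    # and then inspect the dump; so a full-dump setting disallows the
    # secure state.
    return dump_flag != FULL_DUMP

print(kernel_secure_given_dump_flag(FULL_DUMP))     # False
print(kernel_secure_given_dump_flag(MINIMAL_DUMP))  # True
```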



FIG. 8 and FIG. 9 are flow diagrams showing an exemplary process 604 for loading kernel components to create an exemplary secure computing environment. This process 604 begins after the kernel loader has been started and the PE management portion of the kernel has been loaded and made operational. Although not shown in these figures, the PE management portion of the kernel may validate the kernel loader itself and/or any other kernel elements that may have been previously loaded. Validation may be defined as determining whether or not a given component is considered secure and trustworthy, as illustrated in part 2 of this process 604.


The term “authorized for secure use” and the like as used below with respect to kernel components has the following specific meaning. A kernel containing any components that are not authorized for secure use does not provide a secure computing environment within which protected environments may operate. The converse does not necessarily hold, as it depends on other factors such as attack vectors.


1. Block 801 shows the start of the loading process 604 after the PE management portion of the kernel has been loaded and made operational. Any component loaded in the kernel prior to this may be validated as described above.


2. Block 802 shows that the kernel secure flag is initially set to TRUE unless any component loaded prior to the PE management portion of the kernel, or that component itself, is found to be insecure, at which point the kernel secure flag may be set to FALSE. In practice the indication of TRUE or FALSE may take various forms; the use of TRUE or FALSE here is only an example and is not meant to be limiting.


3. Block 804 indicates a check for the presence of a debugger in the computing environment. Alternatively, a debugger could reside remotely and be attached to the computing environment via a network or other communications media to a process in the computing environment. If no debugger is detected the loading process 604 continues at block 810. Otherwise it continues at block 806. Not shown in the diagram, this check may be performed periodically and the state of the kernel secure flag updated accordingly.


4. If a debugger is detected, block 806 shows a check for debug credentials which may indicate that debugging may be authorized on the system in the presence of a protected environment. If such credentials are not present, the kernel secure flag may be set to FALSE as shown in block 808. Otherwise the loading process 604 continues at block 810.


5. Block 810 shows a check of the kernel dump flag. If this flag indicates that a full kernel memory dump or the like may be possible then the kernel secure flag may be set to FALSE as shown in block 808. Otherwise the loading process 604 continues at block 812. Not shown in the diagram, this check may be performed periodically and the state of the kernel secure flag updated accordingly.


6. Block 812 shows the loading of the revocation list into the kernel. In cases where the revocation list may be used to check debug credentials, or other previously loaded credentials, signatures, certificate data, or the like, this step may take place earlier in the sequence (prior to the loading of credentials and the like to be checked) than shown. Not shown in the diagram is that, once this component is loaded, any and all previously loaded kernel components may be checked to see if their signature and/or certificate data has been revoked per the revocation list. If any have been revoked, the kernel secure flag may be set to FALSE and the loading process 604 continues at block 814. Note that a revocation list may or may not be loaded into the kernel to be used in the creation and maintenance of protected environments.


7. Block 814 shows the transition to part 2 of this diagram shown in FIG. 9 and continuing at block 901.


8. Block 902 shows a check for any additional kernel components to be loaded. If all components have been loaded then the load process 604 is usually complete and the kernel secure flag remains in whatever state it was last set to, either TRUE or FALSE. If there are additional kernel components to be loaded the load process 604 continues at block 906.


9. Block 906 shows a check for a valid signature of the next component to be loaded. If the signature is invalid then the kernel secure flag may be set to FALSE as shown in block 918. Otherwise the loading process 604 continues at block 908. If no component signature is available the component may be considered insecure and the kernel secure flag may be set to FALSE as shown in block 918. Signature validity may be determined by checking for a match on a list of valid signatures and/or by checking whether the signer's identity is a trusted identity. As familiar to those skilled in the security technology area, other methods could also be used to validate component signatures.


10. Block 908 shows a check of the component's certificate data. If the certificate data is invalid then the kernel secure flag may be set to FALSE as shown in block 918. Otherwise the loading process 604 continues at block 910. If no component certificate data is available the component may be considered insecure and the kernel secure flag may be set to FALSE as shown in block 918. Certificate data validity may be determined by checking the component's certificate data to see if the component is authorized for secure use. As familiar to those skilled in the art, other methods could also be used to validate component certificate data.


11. Block 910 shows a check of the component's signature against a revocation list loaded in the kernel. If the signature is present on the list, indicating that it has been revoked, then the kernel secure flag may be set to FALSE as shown in block 918. Otherwise the loading process 604 continues at block 912.


12. Block 912 shows a check of the component's certificate data against a revocation list. If the certificate data is present on the list, indicating that it has been revoked, then the kernel secure flag may be set to FALSE as shown in block 918. Otherwise the loading process 604 continues at block 914.


13. Block 914 shows a check of the component's signature to determine if it is OK for use. This check may be made by inspecting the component's leaf certificate data to see if the component is authorized for secure use. Certain attributes in the certificate data may indicate if the component is approved for protected environment usage. If not, the component may not be appropriately signed and the kernel secure flag may be set to FALSE as shown in block 918. Otherwise the loading process 604 continues at block 916.


14. Block 916 shows a check of the component's root certificate data. This check may be made by inspecting the component's root certificate data to see if it is listed on a list of trusted root certificates. If not, the component may be considered insecure and the kernel secure flag may be set to FALSE as shown in block 918. Otherwise the loading process 604 continues at block 920.


15. Block 920 shows the loading of the component into the kernel where it is now considered operational. Then the loading process 604 returns to block 902 to check for any further components to be loaded.
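The numbered steps above (blocks 801 through 920) can be condensed into a single sketch. All names and data shapes here are illustrative assumptions; in particular, the patent text leaves open whether a failing component is still loaded, so this sketch loads every component and merely marks the kernel insecure when a check fails.

```python
# Hedged sketch of loading process 604 (FIGS. 8 and 9). The real logic
# lives in the PE management portion of the kernel; names are assumptions.

def load_kernel_components(components, revocation_list, debugger=None,
                           full_dump_enabled=False, trusted_roots=frozenset()):
    kernel_secure = True                                    # block 802: start TRUE

    # Blocks 804-808: a debugger must carry valid debug credentials.
    if debugger is not None and not debugger.get("credentials"):
        kernel_secure = False
    # Block 810: a full kernel memory dump capability is insecure.
    if full_dump_enabled:
        kernel_secure = False

    loaded = []
    for c in components:                                    # blocks 902-920
        ok = (c.get("signature") is not None                # block 906: valid signature
              and c.get("certificate") is not None          # block 908: certificate data
              and c["signature"] not in revocation_list     # block 910: not revoked
              and c["certificate"] not in revocation_list   # block 912: not revoked
              and c.get("leaf_secure_use", False)           # block 914: leaf cert OK
              and c.get("root_cert") in trusted_roots)      # block 916: trusted root
        if not ok:
            kernel_secure = False                           # block 918: flag FALSE
        loaded.append(c["name"])                            # block 920: load component
    return kernel_secure, loaded

driver = {"name": "drv", "signature": "s1", "certificate": "c1",
          "leaf_secure_use": True, "root_cert": "r1"}
print(load_kernel_components([driver], revocation_list=set(),
                             trusted_roots=frozenset({"r1"})))
```

Note that the flag is "sticky": once any component fails a check, the kernel secure flag stays FALSE for the remainder of the load, matching the rule that a kernel containing any unauthorized component is not secure.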


Creating Protected Environments



FIG. 10 is a block diagram showing a secure computing environment 200 loading an application 105 into an exemplary protected environment 203 to form a trusted application that is typically resistant to attacks. In this example the kernel may be the same as that described in FIG. 7; it has already been loaded, and the system 200 is considered fully operational. At this point, as an example, a user starts media application 105. The media application 105 may call for the creation of a protected environment 203 for one or more of its processes and/or components to operate within. The protected environment creation process 606 creates the protected environment 203 and loads the application 105 and/or its components as described below.



FIG. 11 is a flow diagram showing an exemplary process 606 for creating a protected environment and loading an application into the protected environment. This process 606 includes the initial step of creating a secure process followed by validating the software component to be loaded into it and then loading the software component into the new secure process and making it operational. Upon success, the result may be a software component operating in a protected environment supported by a secure kernel. Such a software component, along with any digital media content or other data it processes, may be protected from various attacks, including those described above.


1. Block 1101 shows the start of the protected environment creation process 606. This point is usually reached when some application or code calls for a protected environment to operate.


2. Block 1102 shows the establishment of a protected environment. While not shown in the diagram, this may be accomplished by requesting the operating system to create a new secure process. Code later loaded and operating in this secure process may be considered to be operating in a protected environment. If the kernel secure flag is set to FALSE then the “create new secure process” request may fail. This may be because the system as a whole may be considered insecure and unsuitable for a protected environment and any application or data requiring a protected environment. Alternatively, the “create new secure process” request may succeed and the component loaded into the new process may be informed that the system is considered insecure so that it can modify its operations accordingly. Otherwise the process 606 continues at block 1106.


3. Block 1106 shows a check for a valid signature of the software component to be loaded into the new secure process or protected environment. If the signature is invalid then the process 606 may fail as shown in block 1118. Otherwise the process 606 continues at block 1108. Not shown in the process is that the program, or its equivalent, creating the new secure process may also be checked for a valid signature. Thus, for either the component itself and/or the program creating the new secure process, if no signature is available the component may be considered insecure and the process 606 may fail as shown in block 1118. Signature validity may be determined by checking for a match on a list of valid signatures and/or by checking whether the signer's identity is a trusted identity. As familiar to those skilled in the security technology area, other methods could also be used to validate component signatures.


4. Block 1108 shows a check of the software component's certificate data. If the certificate data is invalid then the process 606 may fail as shown in block 1118. Otherwise the process 606 continues at block 1110. If no component certificate data is available the component may be considered insecure and the process 606 may fail as shown in block 1118. Certificate data validity may be determined by checking the component's certificate data to see if the component is authorized for secure use. As familiar to those skilled in the art, other methods could also be used to validate component certificate data.


5. Block 1110 shows a check of the component's signature against a revocation list. If the signature is present on the list, indicating that it has been revoked, then the process 606 may fail as shown in block 1118. Otherwise the process 606 continues at block 1112.


6. Block 1112 shows a check of the component's certificate data against a revocation list. If the certificate data is present on the list, indicating that it has been revoked, then the process 606 may fail as shown in block 1118. Otherwise the process 606 continues at block 1114.


7. Block 1114 shows a check of the component's signature to determine if it is acceptable for use. This check may be made by inspecting the component's leaf certificate data to see if the component is authorized for secure use. Certain attributes in the certificate data may indicate if the component is approved for protected environment usage. If not, the component may be considered not appropriately signed and the process 606 may fail as shown in block 1118. Otherwise the process 606 continues at block 1116.


8. Block 1116 shows a check of the component's root certificate data. This check may be made by inspecting the component's root certificate data to see if it is listed on a list of trusted root certificates. If not, the component may be considered insecure and the process 606 may fail as shown in block 1118. Otherwise the process 606 continues at block 1120.


9. Block 1118 shows the failure of the software component to load followed by block 1130, the end of the protected environment creation process 606.


10. Block 1120 shows the software component being loaded into the protected environment, where it is considered operational, followed by block 1130, the end of the protected environment creation process 606.
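The creation process above differs from the kernel load process in one key respect: a validation failure here aborts the load of the component (block 1118) rather than merely flagging the kernel insecure. A hedged sketch, with all names assumed for illustration:

```python
# Hypothetical sketch of protected environment creation process 606
# (FIG. 11). Names and return shapes are illustrative assumptions.

def create_protected_environment(component, revocation_list, trusted_roots,
                                 kernel_secure_flag):
    # Block 1102: ask the OS for a new secure process. In this sketch the
    # request simply fails when the kernel secure flag is FALSE.
    if not kernel_secure_flag:
        raise RuntimeError("create new secure process failed: kernel insecure")
    checks = (component.get("signature") is not None               # block 1106
              and component.get("certificate") is not None         # block 1108
              and component["signature"] not in revocation_list    # block 1110
              and component["certificate"] not in revocation_list  # block 1112
              and component.get("leaf_secure_use", False)          # block 1114
              and component.get("root_cert") in trusted_roots)     # block 1116
    if not checks:
        return None                                       # block 1118: load fails
    return {"protected_environment": component["name"]}   # block 1120: operational

player = {"name": "player", "signature": "s", "certificate": "c",
          "leaf_secure_use": True, "root_cert": "r"}
print(create_protected_environment(player, set(), {"r"}, True))
```

The text also notes an alternative in which the secure-process request succeeds despite an insecure kernel and the loaded component is merely informed; that variant is not modeled here.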


Validating a Secure Kernel Over Time



FIG. 12 is a block diagram showing an exemplary trusted application utilizing an exemplary protected environment 202 periodically checking 608 the security state 790 of the secure computing environment 200. In this example, the computing environment 200 and the kernel 750 may be the same as those described in FIGS. 7 and 8. The kernel 750 has already been loaded and the computer 200 is considered fully operational. Further, a protected environment has been created and the appropriate components of the trusted application have been loaded into it and made operational, establishing a trusted application utilizing a protected environment 202, hereafter referred to simply as the “protected environment”.


The protected environment 202 may periodically check with the PE management portion of the kernel 752 to determine whether the kernel 750 remains secure over time. This periodic check may be performed because it is possible for a new component to be loaded into the kernel 750 at any time, including a component that may be considered insecure. If this were to occur, the state of the kernel secure flag 790 would change to FALSE, giving the code operating in the protected environment 202 the opportunity to respond appropriately.


For example, consider a media player application that was started on a PC 200 with a secure kernel 750 and a portion of the media player application operating in a protected environment 202 processing digital media content that is licensed only for secure use. In this example, if a new kernel component that is considered insecure is loaded while the media player application is processing the media content, then the check kernel secure state process 240 would note that the kernel secure flag 790 has changed to FALSE, indicating the kernel 750 may no longer be secure.


Alternatively, the revocation list 745 may be updated and a kernel component previously considered secure may no longer be considered secure, resulting in the kernel secure flag 790 being set to FALSE. At this point the application may receive notification that the system 200 is no longer considered secure and can terminate operation, or take other appropriate action to protect itself and/or the media content it is processing.



FIG. 13 is a flow diagram showing an exemplary process 608 for periodically checking the security state of the secure computing environment. This process 608 may be used by a protected environment 202 to determine if the kernel remains secure over time. The protected environment 202 may periodically use this process 608 to check the current security status of the kernel. The protected environment 202 and/or the software component operating within it may use the current security status information to modify its operation appropriately. Periodic activation of the process may be implemented using conventional techniques.


The diagram shows a sequence of communications 608, illustrated with exemplary pseudo code, between the protected environment 202 and the PE management portion of the kernel 752. This communication may include a check of the version of a revocation list which may give an application the ability to specify a revocation list of at least a certain version. This communications sequence may be cryptographically secured using conventional techniques.


1. The protected environment 202 makes an IsKernelSecure(MinRLVer) call 1320 to the PE management portion of the kernel to query the current security state of the kernel. Included in this call 1320 may be the minimum version (MinRLVer) of the revocation list expected to be utilized.


2. The PE management portion of the kernel checks to see if the protected environment, which is the calling process, is secure. If not, then it may provide a Return(SecureFlag=FALSE) indication 1322 to the protected environment and the communications sequence 608 is complete. This security check may be done by the PE management portion of the kernel checking the protected environment for a valid signature and/or certificate data as described above.


3. Otherwise, the PE management portion of the kernel checks the kernel secure flag in response to the call 1320. If the state of the flag is FALSE then it may provide a Return(SecureFlag=FALSE) indication 1324 to the protected environment and the communications sequence 608 is complete.


4. Otherwise, the PE management portion of the kernel checks the revocation list version information for the revocation list. If the revocation list has version information that is older than that requested in the IsKernelSecure(MinRLVer) call 1320 then several options are possible. First, as indicated in the diagram, the PE management portion of the kernel may provide a Return(SecureFlag=FALSE) indication 1326 to the protected environment and the communications sequence 608 is complete.


Alternatively, and not shown in the diagram, an appropriate version revocation list may be located and loaded into the kernel, all kernel components could be re-validated using this new or updated list, the kernel secure flag updated as appropriate and the previous step #3 of this communications sequence 608 repeated.


5. Otherwise, the PE management portion of the kernel may provide a Return(SecureFlag=TRUE) indication 1328 to the protected environment and the communications sequence 608 is complete.
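The kernel-side handling of this communications sequence can be sketched as one function. The call name IsKernelSecure(MinRLVer) comes from the text above; the function signature and parameter names are illustrative assumptions, and the alternative of fetching a newer revocation list (the unnumbered variant of step 4) is not modeled.

```python
# Hedged sketch of communications sequence 608 (FIG. 13), kernel side.
# Parameter names are assumptions; only IsKernelSecure(MinRLVer) is
# taken from the text.

def is_kernel_secure(min_rl_ver, caller_validated, kernel_secure_flag,
                     revocation_list_version):
    # Step 2: the calling protected environment itself must validate
    # (valid signature and/or certificate data).
    if not caller_validated:
        return False  # Return(SecureFlag=FALSE), indication 1322
    # Step 3: the kernel secure flag must currently be TRUE.
    if not kernel_secure_flag:
        return False  # indication 1324
    # Step 4: the loaded revocation list must be at least the version
    # the caller requires; an older list may miss recent revocations.
    if revocation_list_version < min_rl_ver:
        return False  # indication 1326
    return True       # Step 5: Return(SecureFlag=TRUE), indication 1328

print(is_kernel_secure(5, True, True, 7))  # True
print(is_kernel_secure(9, True, True, 7))  # False: revocation list too old
```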


Exemplary Computing Environment



FIG. 14 is a block diagram showing an exemplary computing environment 1400 in which the processes, systems and methods for establishing a secure computing environment including a protected environment 203 may be implemented. Exemplary personal computer 1400 is only one example of a computing system or device that may provide a secure computing environment and/or a protected environment and is not intended to limit the examples described in this application to this particular computing environment or device type.


A suitable computing environment can be implemented with numerous other general purpose or special purpose systems. Examples of well known systems may include, but are not limited to, personal computers (“PC”) 1400, hand-held or laptop devices, microprocessor-based systems, multiprocessor systems, set top boxes, programmable consumer electronics, gaming consoles, consumer electronic devices, cellular telephones, PDAs, and the like.


The PC 1400 includes a general-purpose computing system in the form of a computing device 1401 coupled to various peripheral devices 1403, 1404, 1415, 1416 and the like. The components of computing device 1401 may include one or more processors (including CPUs, GPUs, microprocessors and the like) 1407, a system memory 1409, and a system bus 1408 that couples the various system components. Processor 1407 processes various computer executable instructions to control the operation of computing device 1401 and to communicate with other electronic and/or computing devices (not shown) via various communications connections such as a network connection 1414 and the like. The system bus 1408 represents any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and/or a processor or local bus using any of a variety of bus architectures.


The system memory 1409 may include computer readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM). A basic input/output system (BIOS) may be stored in ROM. RAM typically contains data and/or program modules that are immediately accessible to and/or presently operated on by one or more of the processors 1407. By way of example, shown loaded in system memory for operation is a trusted application 202 utilizing a protected environment 203 and the media content being processed 106.


Mass storage devices 1404 and 1410 may be coupled to the computing device 1401 or incorporated into the computing device 1401 by coupling to the system bus. Such mass storage devices 1404 and 1410 may include a magnetic disk drive which reads from and writes to a removable, non-volatile magnetic disk (e.g., a “floppy disk”) 1405, and/or an optical disk drive that reads from and/or writes to a non-volatile optical disk such as a CD ROM, DVD ROM or the like 1406. Computer readable media 1405 and 1406 typically embody computer readable instructions, data structures, program modules and the like supplied on floppy disks, CDs, DVDs, portable memory sticks and the like.


Any number of programs or modules may be stored on the hard disk 1410, other mass storage devices 1404, and system memory 1409 (typically limited by available space) including, by way of example, an operating system(s), one or more application programs, other program modules, and/or program data. Each of such operating system, application program, other program modules and program data (or some combination thereof) may include an embodiment of the systems and methods described herein. Kernel components 720-730 may be stored on the disk 1410 along with other operating system code. Media application 105 and/or a digital rights management system 204 may be stored on the disk 1410 along with other application programs. These components 720-730 and applications 105, 204 may be loaded into system memory 1409 and made operational.


A display device 1416 may be coupled to the system bus 1408 via an interface, such as a video adapter 1411. A user can interface with computing device 1400 via any number of different input devices 1403 such as a keyboard, pointing device, joystick, game pad, serial port, and/or the like. These and other input devices may be coupled to the processors 1407 via input/output interfaces 1412 that may be coupled to the system bus 1408, and may be coupled by other interface and bus structures, such as a parallel port(s), game port(s), and/or a universal serial bus (USB) and the like.


Computing device 1400 may operate in a networked environment using communications connections to one or more remote computers and/or devices through one or more local area networks (LANs), wide area networks (WANs), the Internet, radio links, optical links and the like. The computing device 1400 may be coupled to a network via a network adapter 1413 or alternatively via a modem, DSL, ISDN interface or the like.


Communications connection 1414 is an example of communications media. Communications media typically embody computer readable instructions, data structures, program modules and/or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communications media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared, and other wireless media.


Those skilled in the art will realize that storage devices utilized to store computer-readable program instructions can be distributed across a network. For example a remote computer or device may store an example of the system described as software. A local or terminal computer or device may access the remote computer(s) or device(s) and download a part or all of the software to run a program(s). Alternatively the local computer may download pieces of the software as needed, or distributively process the software by executing some of the software instructions at the local terminal and some at remote computers and/or devices.


Those skilled in the art will also realize that, by utilizing conventional techniques, all or a portion of the software instructions may be carried out by a dedicated electronic circuit such as a digital signal processor (“DSP”), programmable logic array (“PLA”), discrete circuits, or the like. The term electronic apparatus as used herein includes computing devices and consumer electronic devices comprising any software and/or firmware and the like, and/or electronic devices or circuits comprising no software and/or firmware and the like.


The term computer readable medium may include system memory, hard disks, mass storage devices and their associated media, communications media, and the like.

Claims
  • 1. A method of loading a plurality of kernel components to create a secure computing environment within a computing device, the method comprising: loading, by a kernel loader of the plurality of kernel components of a kernel of an operating system of the computing device, a protected environment (“PE”) management component into the kernel, where the loaded PE management component is operational on the computing device, where the PE management component is one of the plurality of kernel components, and where the kernel loader is operational on the computing device; validating, by the loaded and operational PE management component, that the operational kernel loader is secure, the validating based on a valid signature of the operational kernel loader; determining that a debugger is coupled to the computing device; and determining that a debug credential that corresponds to the debugger indicates that debugging via the debugger is authorized on the computing device with the secure computing environment.
  • 2. The method of claim 1, further comprising setting, based on the debug credential, a kernel secure flag.
  • 3. The method of claim 1, further comprising: loading a revocation list into the kernel of the operating system; determining that there is another component of the plurality of kernel components to load into the kernel; validating a signature of the another component; verifying that a certificate of the another component is valid; determining that the signature is not on the loaded revocation list; and determining that the certificate is not on the loaded revocation list.
  • 4. The method of claim 3, further comprising: determining that the signature is acceptable for use; determining that the certificate is acceptable for use; and loading, in response to the determinings, the validating, and the verifying, the another component into the kernel.
  • 5. A computer comprising: a memory; an operating system including a kernel that includes a plurality of kernel components; a kernel loader configured for loading a protected environment (“PE”) management component into the kernel, where the kernel loader and the PE management component are each one of the plurality of kernel components; the protected environment (“PE”) management component configured for validating that the kernel loader is secure based on a valid signature of the operational kernel loader; and a kernel secure flag maintained in the kernel and configured to indicate, in response to the validating, that the computer is allowed to load a trusted application into the memory, and where the kernel secure flag is further configured to indicate, in response to the validating, that the computer is not allowed to load the trusted application into the memory.
  • 6. The computer of claim 5 further comprising a revocation list.
  • 7. At least one storage device that is not a signal per se, the at least one storage device storing computer-executable instructions that, when executed by a computer, cause the computer to perform a method of loading a plurality of kernel components to create a protected environment (“PE”), the method comprising: loading, by a kernel loader of the plurality of kernel components of a kernel of an operating system of the computer, a PE management component into the kernel, where the loaded PE management component is operational on the computer, where the PE management component is one of the plurality of kernel components, and where the kernel loader is operational on the computer; validating, by the loaded and operational PE management component, that the operational kernel loader is secure based on a valid signature of the operational kernel loader; determining that a debugger is coupled to the computer; and determining that a debug credential that corresponds to the debugger indicates that debugging via the debugger is authorized on the computer with the protected environment.
  • 8. The at least one storage device of claim 7, the method further comprising setting, based on the debug credential, the kernel secure flag.
  • 9. The at least one storage device of claim 7, the method further comprising: loading a revocation list into the kernel; determining that there is another component to load into the kernel; validating a signature of the another component of the plurality of kernel components; verifying that a certificate associated with the component is valid; determining that the signature is not on the loaded revocation list; and determining that the certificate is not on the loaded revocation list.
  • 10. The at least one storage device of claim 9, the method further comprising: determining that the signature is acceptable for use; determining that the certificate is acceptable for use; and loading, in response to the determinings, the validating, and the verifying, the loaded component into the kernel.
CROSS-REFERENCE TO RELATED APPLICATION

This application is a divisional of U.S. patent application Ser. No. 11/116,598 filed Apr. 27, 2005 and claims benefit to U.S. Provisional Patent Application No. 60/673,979 filed Apr. 22, 2005.

US Referenced Citations (612)
Number Name Date Kind
3718906 Lightner Feb 1973 A
4183085 Roberts Jan 1980 A
4323921 Guillou Apr 1982 A
4405829 Rivest Sep 1983 A
4528643 Freeny Jul 1985 A
4558176 Arnold et al. Dec 1985 A
4620150 Germer et al. Oct 1986 A
4658093 Hellman Apr 1987 A
4683553 Mollier Jul 1987 A
4750034 Lem Jun 1988 A
4817094 Lebizay et al. Mar 1989 A
4827508 Shear May 1989 A
4855730 Venners et al. Aug 1989 A
4855922 Huddleston et al. Aug 1989 A
4857999 Welsh Aug 1989 A
4910692 Outram Mar 1990 A
4916738 Chandra Apr 1990 A
4953209 Ryder Aug 1990 A
4959774 Davis Sep 1990 A
4967273 Greenberg Oct 1990 A
4977594 Shear Dec 1990 A
5001752 Fischer Mar 1991 A
5012514 Renton Apr 1991 A
5050213 Shear Sep 1991 A
5103392 Mori Apr 1992 A
5103476 Waite Apr 1992 A
5109413 Comerford Apr 1992 A
5117457 Comerford May 1992 A
5193573 Chronister Mar 1993 A
5222134 Waite Jun 1993 A
5249184 Woest et al. Sep 1993 A
5269019 Peterson et al. Dec 1993 A
5274368 Breeden et al. Dec 1993 A
5295266 Hinsley Mar 1994 A
5301268 Takeda Apr 1994 A
5303370 Brosh Apr 1994 A
5319705 Halter Jun 1994 A
5355161 Bird et al. Oct 1994 A
5369262 Dvorkis et al. Nov 1994 A
5373561 Haber Dec 1994 A
5406630 Piosenka et al. Apr 1995 A
5410598 Shear Apr 1995 A
5414861 Horning May 1995 A
5437040 Campbell Jul 1995 A
5442704 Holtey Aug 1995 A
5448045 Clark Sep 1995 A
5457699 Bode Oct 1995 A
5459867 Adams et al. Oct 1995 A
5473692 Davis Dec 1995 A
5490216 Richardson, III Feb 1996 A
5500897 Hartman, Jr. Mar 1996 A
5509070 Schull Apr 1996 A
5513319 Finch et al. Apr 1996 A
5522040 Hofsass et al. May 1996 A
5530846 Strong Jun 1996 A
5552776 Wade et al. Sep 1996 A
5563799 Brehmer et al. Oct 1996 A
5568552 Davis Oct 1996 A
5586291 Lasker et al. Dec 1996 A
5615268 Bisbee Mar 1997 A
5629980 Stefik May 1997 A
5634012 Stefik May 1997 A
5636292 Rhoads Jun 1997 A
5638443 Stefik Jun 1997 A
5638513 Ananda Jun 1997 A
5644364 Kurtze Jul 1997 A
5671412 Christiano Sep 1997 A
5673316 Auerbach Sep 1997 A
5710706 Markl et al. Jan 1998 A
5710887 Chelliah Jan 1998 A
5715403 Stefik Feb 1998 A
5717926 Browning Feb 1998 A
5721788 Powell Feb 1998 A
5724425 Chang et al. Mar 1998 A
5745879 Wyman Apr 1998 A
5754763 Bereiter May 1998 A
5758068 Brandt et al. May 1998 A
5763832 Anselm Jun 1998 A
5765152 Erickson Jun 1998 A
5768382 Schneier et al. Jun 1998 A
5771354 Crawford Jun 1998 A
5774870 Storey Jun 1998 A
5793839 Farris et al. Aug 1998 A
5799088 Raike Aug 1998 A
5802592 Chess Sep 1998 A
5809144 Sirbu Sep 1998 A
5809145 Slik Sep 1998 A
5812930 Zavrel Sep 1998 A
5825877 Dan Oct 1998 A
5825879 Davis Oct 1998 A
5825883 Archibald et al. Oct 1998 A
5841865 Sudia Nov 1998 A
5844986 Davis Dec 1998 A
5845065 Conte et al. Dec 1998 A
5845281 Benson Dec 1998 A
5875236 Jankowitz et al. Feb 1999 A
5883670 Sporer et al. Mar 1999 A
5883955 Ronning Mar 1999 A
5892900 Ginter Apr 1999 A
5892906 Chou et al. Apr 1999 A
5913038 Griffiths Jun 1999 A
5925127 Ahmad Jul 1999 A
5926624 Katz Jul 1999 A
5935248 Kuroda Aug 1999 A
5943248 Clapp Aug 1999 A
5948061 Merriman Sep 1999 A
5949879 Berson Sep 1999 A
5953502 Helbig et al. Sep 1999 A
5956408 Arnold Sep 1999 A
5983238 Becker et al. Nov 1999 A
5983350 Minear Nov 1999 A
5994710 Knee et al. Nov 1999 A
6021438 Duvvoori Feb 2000 A
6026293 Osborn Feb 2000 A
6049789 Frison et al. Apr 2000 A
6058476 Matsuzaki May 2000 A
6061451 Muratani May 2000 A
6061794 Angelo et al. May 2000 A
6069647 Sullivan May 2000 A
6078909 Knutson Jun 2000 A
6101606 Diersch et al. Aug 2000 A
6119229 Martinez et al. Sep 2000 A
6128740 Curry Oct 2000 A
6134659 Sprong Oct 2000 A
6147773 Taylor Nov 2000 A
6148417 Da Silva Nov 2000 A
6157721 Shear Dec 2000 A
6158011 Chen Dec 2000 A
6158657 Hall, III et al. Dec 2000 A
6170060 Mott Jan 2001 B1
6185678 Arbaugh et al. Feb 2001 B1
6188995 Garst et al. Feb 2001 B1
6189146 Misra et al. Feb 2001 B1
6192392 Ginter Feb 2001 B1
6209099 Saunders Mar 2001 B1
6219652 Carter et al. Apr 2001 B1
6219788 Flavin Apr 2001 B1
6223291 Puhl Apr 2001 B1
6226618 Downs May 2001 B1
6226747 Larsson et al. May 2001 B1
6230185 Salas et al. May 2001 B1
6230272 Lockhart May 2001 B1
6233600 Salas et al. May 2001 B1
6233685 Smith May 2001 B1
6243439 Arai et al. Jun 2001 B1
6253224 Brice, Jr. et al. Jun 2001 B1
6263431 Lovelace et al. Jul 2001 B1
6266420 Langford Jul 2001 B1
6266480 Ezaki Jul 2001 B1
6272469 Koritzinsky et al. Aug 2001 B1
6279111 Jensenworth et al. Aug 2001 B1
6279156 Amberg et al. Aug 2001 B1
6286051 Becker et al. Sep 2001 B1
6289319 Lockwood et al. Sep 2001 B1
6295577 Anderson et al. Sep 2001 B1
6298446 Schreiber Oct 2001 B1
6303924 Adan et al. Oct 2001 B1
6314408 Salas et al. Nov 2001 B1
6314409 Schneck et al. Nov 2001 B2
6321335 Chu Nov 2001 B1
6327652 England Dec 2001 B1
6330670 England et al. Dec 2001 B1
6334189 Granger Dec 2001 B1
6345294 O'Toole et al. Feb 2002 B1
6363488 Ginter Mar 2002 B1
6367017 Gray Apr 2002 B1
6373047 Adan et al. Apr 2002 B1
6374355 Patel Apr 2002 B1
6385727 Cassagnol et al. May 2002 B1
6405923 Seyson Jun 2002 B1
6408170 Schmidt et al. Jun 2002 B1
6409089 Eskicioglu Jun 2002 B1
6411941 Mullor et al. Jun 2002 B1
6424714 Wasilewski et al. Jul 2002 B1
6425081 Iwamura Jul 2002 B1
6441813 Ishibashi Aug 2002 B1
6442529 Krishan et al. Aug 2002 B1
6442690 Howard Aug 2002 B1
6449598 Green Sep 2002 B1
6460140 Schoch et al. Oct 2002 B1
6463534 Geiger et al. Oct 2002 B1
6496858 Frailong et al. Dec 2002 B1
6507909 Zurko Jan 2003 B1
6567793 Hicks et al. May 2003 B1
6571216 Garg et al. May 2003 B1
6581102 Amini Jun 2003 B1
6585158 Norskog Jul 2003 B2
6587684 Hsu et al. Jul 2003 B1
6609201 Folmsbee Aug 2003 B1
6625729 Angelo Sep 2003 B1
6631478 Wang et al. Oct 2003 B1
6646244 Aas et al. Nov 2003 B2
6664948 Crane et al. Dec 2003 B2
6671803 Pasieka Dec 2003 B1
6678828 Pham et al. Jan 2004 B1
6690556 Smola et al. Feb 2004 B2
6694000 Ung et al. Feb 2004 B2
6704873 Underwood Mar 2004 B1
6708176 Strunk et al. Mar 2004 B2
6711263 Nordenstam et al. Mar 2004 B1
6716652 Ortlieb Apr 2004 B1
6738810 Kramer et al. May 2004 B1
6763458 Watanabe Jul 2004 B1
6765470 Shinzaki Jul 2004 B2
6772340 Peinado Aug 2004 B1
6791157 Casto et al. Sep 2004 B1
6799270 Bull Sep 2004 B1
6816596 Peinado Nov 2004 B1
6816809 Circenis Nov 2004 B2
6816900 Vogel et al. Nov 2004 B1
6829708 Peinado Dec 2004 B1
6834352 Shin Dec 2004 B2
6839841 Medvinsky et al. Jan 2005 B1
6844871 Hinckley et al. Jan 2005 B1
6847942 Land et al. Jan 2005 B1
6850252 Hoffberg Feb 2005 B1
6851051 Bolle et al. Feb 2005 B1
6853380 Alcorn Feb 2005 B2
6868433 Philyaw Mar 2005 B1
6871283 Zurko et al. Mar 2005 B1
6895504 Zhang May 2005 B1
6920567 Doherty et al. Jul 2005 B1
6931545 Ta Aug 2005 B1
6934942 Chilimbi Aug 2005 B1
6954728 Kusumoto et al. Oct 2005 B1
6957186 Guheen et al. Oct 2005 B1
6976162 Ellison et al. Dec 2005 B1
6983050 Yacobi et al. Jan 2006 B1
6986042 Griffin Jan 2006 B2
6990174 Eskelinen Jan 2006 B2
6993648 Goodman et al. Jan 2006 B2
7000100 Lacombe et al. Feb 2006 B2
7000829 Harris et al. Feb 2006 B1
7013384 Challener et al. Mar 2006 B2
7016498 Peinado Mar 2006 B2
7028149 Grawrock Apr 2006 B2
7039801 Narin May 2006 B2
7043633 Fink May 2006 B1
7052530 Edlund et al. May 2006 B2
7054468 Yang May 2006 B2
7069442 Sutton, II Jun 2006 B2
7069595 Cognigni et al. Jun 2006 B2
7076652 Ginter et al. Jul 2006 B2
7096469 Kubala et al. Aug 2006 B1
7097357 Johnson et al. Aug 2006 B2
7103574 Peinado Sep 2006 B1
7113912 Stefik Sep 2006 B2
7117183 Blair et al. Oct 2006 B2
7120250 Candelore Oct 2006 B2
7121460 Parsons et al. Oct 2006 B1
7124938 Marsh Oct 2006 B1
7127579 Zimmer Oct 2006 B2
7130951 Christie et al. Oct 2006 B1
7131004 Lyle Oct 2006 B1
7143297 Buchheit et al. Nov 2006 B2
7162645 Iguchi et al. Jan 2007 B2
7171539 Mansell et al. Jan 2007 B2
7174457 England et al. Feb 2007 B1
7200760 Riebe Apr 2007 B2
7203310 England Apr 2007 B2
7207039 Komarla et al. Apr 2007 B2
7213266 Maher May 2007 B1
7222062 Goud May 2007 B2
7233666 Lee Jun 2007 B2
7233948 Shamoon Jun 2007 B1
7234144 Wilt et al. Jun 2007 B2
7236455 Proudler et al. Jun 2007 B1
7254836 Alkove Aug 2007 B2
7266569 Cutter et al. Sep 2007 B2
7278165 Molaro Oct 2007 B2
7290699 Reddy Nov 2007 B2
7296154 Evans Nov 2007 B2
7296296 Dunbar Nov 2007 B2
7299358 Chateau et al. Nov 2007 B2
7299504 Tiller Nov 2007 B1
7340055 Hori Mar 2008 B2
7343496 Hsiang Mar 2008 B1
7350228 Peled Mar 2008 B2
7353209 Peinado Apr 2008 B1
7353402 Bourne et al. Apr 2008 B2
7356709 Gunyakti et al. Apr 2008 B2
7359807 Frank et al. Apr 2008 B2
7360253 Frank et al. Apr 2008 B2
7376976 Fierstein May 2008 B2
7382879 Miller Jun 2008 B1
7392429 Westerinen et al. Jun 2008 B2
7395245 Okamoto et al. Jul 2008 B2
7395452 Nicholson et al. Jul 2008 B2
7406446 Frank et al. Jul 2008 B2
7406603 MacKay Jul 2008 B1
7421413 Frank et al. Sep 2008 B2
7426752 Agrawal Sep 2008 B2
7441121 Cutter Oct 2008 B2
7441246 Auerbach et al. Oct 2008 B2
7461249 Pearson et al. Dec 2008 B1
7464103 Siu Dec 2008 B2
7490356 Lieblich et al. Feb 2009 B2
7493487 Phillips et al. Feb 2009 B2
7494277 Setala Feb 2009 B2
7499545 Bagshaw Mar 2009 B1
7500267 McKune Mar 2009 B2
7519816 Phillips et al. Apr 2009 B2
7526649 Wiseman Apr 2009 B2
7539863 Phillips May 2009 B2
7540024 Phillips et al. May 2009 B2
7549060 Bourne et al. Jun 2009 B2
7552331 Evans Jun 2009 B2
7558463 Jain Jul 2009 B2
7562220 Frank et al. Jul 2009 B2
7565325 Lenard Jul 2009 B2
7568096 Evans et al. Jul 2009 B2
7574747 Oliveira Aug 2009 B2
7584502 Alkove Sep 2009 B2
7590841 Sherwani Sep 2009 B2
7596784 Abrams Sep 2009 B2
7609653 Amin Oct 2009 B2
7610631 Frank et al. Oct 2009 B2
7617401 Marsh Nov 2009 B2
7644239 Westerinen et al. Jan 2010 B2
7653943 Evans Jan 2010 B2
7665143 Havens Feb 2010 B2
7669056 Frank Feb 2010 B2
7694153 Ahdout Apr 2010 B2
7703141 Alkove Apr 2010 B2
7739505 Reneris Jun 2010 B2
7752674 Evans Jul 2010 B2
7770205 Frank Aug 2010 B2
7810163 Evans Oct 2010 B2
7814532 Cromer et al. Oct 2010 B2
7822863 Balfanz Oct 2010 B2
7860250 Russ Dec 2010 B2
7877607 Circenis Jan 2011 B2
7881315 Haveson Feb 2011 B2
7891007 Waxman et al. Feb 2011 B2
7900140 Mohammed Mar 2011 B2
7903117 Howell Mar 2011 B2
7958029 Bobich et al. Jun 2011 B1
7979721 Westerinen Jul 2011 B2
20010021252 Carter Sep 2001 A1
20010034711 Tashenberg Oct 2001 A1
20010044782 Hughes Nov 2001 A1
20010056413 Satoru et al. Dec 2001 A1
20010056539 Pavlin et al. Dec 2001 A1
20020002597 Morrell, Jr. Jan 2002 A1
20020002674 Grimes Jan 2002 A1
20020007310 Long Jan 2002 A1
20020010863 Mankefors Jan 2002 A1
20020012432 England Jan 2002 A1
20020023207 Olik Feb 2002 A1
20020023212 Proudler Feb 2002 A1
20020036991 Inoue Mar 2002 A1
20020046098 Maggio Apr 2002 A1
20020049679 Russell Apr 2002 A1
20020055906 Katz et al. May 2002 A1
20020057795 Spurgat May 2002 A1
20020091569 Kitaura et al. Jul 2002 A1
20020095603 Godwin Jul 2002 A1
20020097872 Maliszewski Jul 2002 A1
20020103880 Konetski Aug 2002 A1
20020104096 Cramer Aug 2002 A1
20020107701 Batty et al. Aug 2002 A1
20020111916 Coronna et al. Aug 2002 A1
20020112171 Ginter et al. Aug 2002 A1
20020116707 Morris Aug 2002 A1
20020123964 Kramer et al. Sep 2002 A1
20020124212 Nitschke et al. Sep 2002 A1
20020129359 Lichner Sep 2002 A1
20020138549 Urien Sep 2002 A1
20020141451 Gates et al. Oct 2002 A1
20020144131 Spacey Oct 2002 A1
20020147601 Fagan Oct 2002 A1
20020147782 Dimitrova et al. Oct 2002 A1
20020147912 Shmueli et al. Oct 2002 A1
20020164018 Wee Nov 2002 A1
20020178071 Walker et al. Nov 2002 A1
20020184482 Lacombe et al. Dec 2002 A1
20020184508 Bialick et al. Dec 2002 A1
20020193101 McAlinden Dec 2002 A1
20020194132 Pearson et al. Dec 2002 A1
20030004880 Banerjee Jan 2003 A1
20030005135 Inoue et al. Jan 2003 A1
20030005335 Watanabe Jan 2003 A1
20030014323 Scheer Jan 2003 A1
20030027549 Kiel et al. Feb 2003 A1
20030028454 Ooho et al. Feb 2003 A1
20030035409 Wang et al. Feb 2003 A1
20030037246 Goodman et al. Feb 2003 A1
20030040960 Eckmann Feb 2003 A1
20030046026 Levy et al. Mar 2003 A1
20030048473 Rosen Mar 2003 A1
20030055898 Yeager Mar 2003 A1
20030056107 Cammack et al. Mar 2003 A1
20030065918 Willey Apr 2003 A1
20030069981 Trovato Apr 2003 A1
20030084104 Salem et al. May 2003 A1
20030084278 Cromer et al. May 2003 A1
20030084285 Cromer et al. May 2003 A1
20030084306 Abburi May 2003 A1
20030084337 Simionescu et al. May 2003 A1
20030084352 Schwartz et al. May 2003 A1
20030088500 Shinohara et al. May 2003 A1
20030093694 Medvinsky et al. May 2003 A1
20030097596 Muratov et al. May 2003 A1
20030097655 Novak May 2003 A1
20030110388 Pavlin et al. Jun 2003 A1
20030115147 Feldman Jun 2003 A1
20030115458 Song Jun 2003 A1
20030120935 Teal Jun 2003 A1
20030126086 Safadi Jul 2003 A1
20030126519 Odorcic Jul 2003 A1
20030131252 Barton et al. Jul 2003 A1
20030133576 Grumiaux Jul 2003 A1
20030135380 Lehr et al. Jul 2003 A1
20030149671 Yamamoto et al. Aug 2003 A1
20030156572 Hui et al. Aug 2003 A1
20030156719 Cronce Aug 2003 A1
20030159037 Taki Aug 2003 A1
20030163383 Engelhart Aug 2003 A1
20030163712 LaMothe et al. Aug 2003 A1
20030165241 Fransdonk Sep 2003 A1
20030172376 Coffin, III et al. Sep 2003 A1
20030185395 Lee Oct 2003 A1
20030188165 Sutton et al. Oct 2003 A1
20030188179 Challener Oct 2003 A1
20030196102 McCarroll Oct 2003 A1
20030196106 Erfani et al. Oct 2003 A1
20030200336 Pal Oct 2003 A1
20030208338 Challener et al. Nov 2003 A1
20030208573 Harrison et al. Nov 2003 A1
20030219127 Russ Nov 2003 A1
20030221100 Russ Nov 2003 A1
20030229702 Hensbergen et al. Dec 2003 A1
20030236978 Evans et al. Dec 2003 A1
20040001088 Stancil et al. Jan 2004 A1
20040003190 Childs et al. Jan 2004 A1
20040003268 Bourne Jan 2004 A1
20040003269 Waxman Jan 2004 A1
20040003270 Bourne Jan 2004 A1
20040003288 Wiseman et al. Jan 2004 A1
20040010440 Lenard et al. Jan 2004 A1
20040010684 Douglas Jan 2004 A1
20040010717 Simec Jan 2004 A1
20040019456 Cirenis Jan 2004 A1
20040023636 Gurel et al. Feb 2004 A1
20040030912 Merkle, Jr. et al. Feb 2004 A1
20040034816 Richard Feb 2004 A1
20040039916 Aldis et al. Feb 2004 A1
20040039924 Baldwin et al. Feb 2004 A1
20040039960 Kassayan Feb 2004 A1
20040044629 Rhodes et al. Mar 2004 A1
20040054629 de Jong Mar 2004 A1
20040054907 Chateau et al. Mar 2004 A1
20040054908 Circenis et al. Mar 2004 A1
20040054909 Serkowski et al. Mar 2004 A1
20040059937 Takehiko Mar 2004 A1
20040064707 McCann et al. Apr 2004 A1
20040067746 Johnson Apr 2004 A1
20040073670 Chack et al. Apr 2004 A1
20040083289 Karger Apr 2004 A1
20040088548 Smetters et al. May 2004 A1
20040093371 Burrows et al. May 2004 A1
20040093508 Foerstner et al. May 2004 A1
20040098583 Weber May 2004 A1
20040107356 Shamoon Jun 2004 A1
20040107359 Kawano et al. Jun 2004 A1
20040107368 Colvin Jun 2004 A1
20040111609 Kaji Jun 2004 A1
20040123127 Teicher et al. Jun 2004 A1
20040125755 Roberts Jul 2004 A1
20040128251 Chris et al. Jul 2004 A1
20040133794 Kocher et al. Jul 2004 A1
20040139027 Molaro Jul 2004 A1
20040158742 Srinivasan et al. Aug 2004 A1
20040184605 Soliman Sep 2004 A1
20040187001 Bousis Sep 2004 A1
20040193919 Dabbish et al. Sep 2004 A1
20040199769 Proudler Oct 2004 A1
20040205028 Verosub Oct 2004 A1
20040205357 Kuo et al. Oct 2004 A1
20040210695 Weber Oct 2004 A1
20040220858 Maggio Nov 2004 A1
20040225894 Colvin Nov 2004 A1
20040249768 Kontio Dec 2004 A1
20040255000 Simionescu et al. Dec 2004 A1
20040268120 Mirtal et al. Dec 2004 A1
20050010766 Holden Jan 2005 A1
20050015343 Nagai et al. Jan 2005 A1
20050021859 Willian Jan 2005 A1
20050021944 Craft et al. Jan 2005 A1
20050021992 Aida Jan 2005 A1
20050028000 Bulusu et al. Feb 2005 A1
20050033747 Wittkotter Feb 2005 A1
20050039013 Bajikar et al. Feb 2005 A1
20050044197 Lai Feb 2005 A1
20050044391 Noguchi Feb 2005 A1
20050044397 Bjorkengren Feb 2005 A1
20050050355 Graunke Mar 2005 A1
20050060388 Tatsumi et al. Mar 2005 A1
20050060542 Risan Mar 2005 A1
20050065880 Amato et al. Mar 2005 A1
20050066353 Fransdonk Mar 2005 A1
20050071280 Irwin Mar 2005 A1
20050080701 Tunney et al. Apr 2005 A1
20050089164 Lang Apr 2005 A1
20050091104 Abraham Apr 2005 A1
20050091488 Dunbar Apr 2005 A1
20050091526 Alkove Apr 2005 A1
20050097204 Horowitz et al. May 2005 A1
20050102181 Scroggie et al. May 2005 A1
20050108547 Sakai May 2005 A1
20050108564 Freeman et al. May 2005 A1
20050120251 Fukumori Jun 2005 A1
20050123276 Sugaya Jun 2005 A1
20050125673 Cheng et al. Jun 2005 A1
20050129296 Setala Jun 2005 A1
20050131832 Fransdonk Jun 2005 A1
20050132150 Jewell et al. Jun 2005 A1
20050138370 Goud et al. Jun 2005 A1
20050138389 Catherman et al. Jun 2005 A1
20050138406 Cox Jun 2005 A1
20050138423 Ranganathan Jun 2005 A1
20050141717 Cromer et al. Jun 2005 A1
20050144099 Deb et al. Jun 2005 A1
20050149722 Wiseman Jul 2005 A1
20050149729 Zimmer Jul 2005 A1
20050166051 Buer Jul 2005 A1
20050172121 Risan Aug 2005 A1
20050182921 Duncan Aug 2005 A1
20050182940 Sutton Aug 2005 A1
20050188843 Edlund et al. Sep 2005 A1
20050203801 Morgenstern et al. Sep 2005 A1
20050204205 Ring Sep 2005 A1
20050210252 Freeman Sep 2005 A1
20050213761 Walmsley et al. Sep 2005 A1
20050216577 Durham et al. Sep 2005 A1
20050221766 Brizek et al. Oct 2005 A1
20050235141 Ibrahim et al. Oct 2005 A1
20050240533 Cutter et al. Oct 2005 A1
20050240985 Alkove Oct 2005 A1
20050246521 Bade et al. Nov 2005 A1
20050246525 Bade et al. Nov 2005 A1
20050246552 Bade et al. Nov 2005 A1
20050251803 Turner Nov 2005 A1
20050257073 Bade Nov 2005 A1
20050262022 Oliveira Nov 2005 A1
20050265549 Sugiyama Dec 2005 A1
20050268115 Barde Dec 2005 A1
20050268174 Kumagai Dec 2005 A1
20050275866 Corlett Dec 2005 A1
20050278519 Luebke et al. Dec 2005 A1
20050279827 Mascavage et al. Dec 2005 A1
20050286476 Crosswy et al. Dec 2005 A1
20050289177 Hohmann, II Dec 2005 A1
20050289343 Tahan Dec 2005 A1
20060010326 Bade et al. Jan 2006 A1
20060015717 Liu et al. Jan 2006 A1
20060015718 Liu et al. Jan 2006 A1
20060015732 Liu Jan 2006 A1
20060020784 Jonker et al. Jan 2006 A1
20060020821 Waltermann et al. Jan 2006 A1
20060020860 Tardif Jan 2006 A1
20060026418 Bade Feb 2006 A1
20060026419 Arndt et al. Feb 2006 A1
20060026422 Bade et al. Feb 2006 A1
20060041943 Singer Feb 2006 A1
20060045267 Moore Mar 2006 A1
20060055506 Nicolas Mar 2006 A1
20060072748 Buer Apr 2006 A1
20060072762 Buer Apr 2006 A1
20060074600 Sastry et al. Apr 2006 A1
20060075014 Tharappel et al. Apr 2006 A1
20060075223 Bade et al. Apr 2006 A1
20060085634 Jain et al. Apr 2006 A1
20060085637 Pinkas Apr 2006 A1
20060085844 Buer et al. Apr 2006 A1
20060089917 Strom et al. Apr 2006 A1
20060090084 Buer Apr 2006 A1
20060100010 Gatto et al. May 2006 A1
20060106845 Frank et al. May 2006 A1
20060106920 Steeb et al. May 2006 A1
20060107306 Thirumalai et al. May 2006 A1
20060107328 Frank et al. May 2006 A1
20060107335 Frank et al. May 2006 A1
20060112267 Zimmer et al. May 2006 A1
20060117177 Buer Jun 2006 A1
20060129496 Chow Jun 2006 A1
20060129824 Hoff et al. Jun 2006 A1
20060130130 Kablotsky Jun 2006 A1
20060143431 Rothman Jun 2006 A1
20060149966 Buskey Jul 2006 A1
20060156416 Huotari Jul 2006 A1
20060165005 Frank et al. Jul 2006 A1
20060168664 Frank et al. Jul 2006 A1
20060173787 Weber Aug 2006 A1
20060206618 Zimmer et al. Sep 2006 A1
20060212945 Donlin et al. Sep 2006 A1
20060213997 Frank et al. Sep 2006 A1
20060230042 Butler Oct 2006 A1
20060242406 Barde Oct 2006 A1
20060248594 Grigorovitch Nov 2006 A1
20060282319 Maggio Dec 2006 A1
20060282899 Raciborski Dec 2006 A1
20070033102 Frank et al. Feb 2007 A1
20070058807 Marsh Mar 2007 A1
20070280422 Setala Dec 2007 A1
20080040800 Park Feb 2008 A1
20080256647 Kim Oct 2008 A1
20090070454 McKinnon, III et al. Mar 2009 A1
20100146576 Costanzo Jun 2010 A1
20100177891 Keidar Jul 2010 A1
20100250927 Bradley Sep 2010 A1
20110128290 Howell Jun 2011 A1
Foreign Referenced Citations (135)
Number Date Country
1287665 Mar 2001 CN
1305159 Jul 2001 CN
1393783 Jan 2003 CN
1396568 Feb 2003 CN
1531673 Sep 2004 CN
1617152 May 2005 CN
0 387 599 Sep 1990 EP
0 409 397 Jan 1991 EP
0 613 073 Aug 1994 EP
0635790 Jan 1995 EP
0 679 978 Nov 1995 EP
0 715 246 Jun 1996 EP
0 715 247 Jun 1996 EP
0 735 719 Oct 1996 EP
0 778 512 Jun 1997 EP
0843449 May 1998 EP
0 994 475 Apr 2000 EP
1 045 388 Oct 2000 EP
1061465 Dec 2000 EP
1 083 480 Mar 2001 EP
1085396 Mar 2001 EP
1 128 342 Aug 2001 EP
1120967 Aug 2001 EP
1 130 492 Sep 2001 EP
1 191 422 Mar 2002 EP
1 338 992 Aug 2003 EP
1 376 302 Jan 2004 EP
1387237 Feb 2004 EP
1429224 Jun 2004 EP
1223722 Aug 2004 EP
1460514 Sep 2004 EP
1233337 Aug 2005 EP
1 582 962 Oct 2005 EP
2359969 Sep 2001 GB
2378780 Feb 2003 GB
H0535461 Feb 1993 JP
H0635718 Feb 1994 JP
H07036559 Feb 1995 JP
H07141153 Jun 1995 JP
H086729 Jan 1996 JP
2001526550 May 1997 JP
H09185504 Jul 1997 JP
H9251494 Sep 1997 JP
2000-242491 Sep 2000 JP
2000293369 Oct 2000 JP
2001051742 Feb 2001 JP
2001-075870 Mar 2001 JP
2003510684 Mar 2001 JP
2001101033 Apr 2001 JP
2003510713 Apr 2001 JP
2001-175606 Jun 2001 JP
2001184472 Jul 2001 JP
2001-290650 Oct 2001 JP
2001312325 Nov 2001 JP
2001331229 Nov 2001 JP
2001338233 Dec 2001 JP
2002108478 Apr 2002 JP
2002108870 Apr 2002 JP
2002374327 Dec 2002 JP
2003-058060 Feb 2003 JP
2003507785 Feb 2003 JP
2003-101526 Apr 2003 JP
2003-115017 Apr 2003 JP
2003-157334 May 2003 JP
2003140761 May 2003 JP
2003140762 May 2003 JP
2003157335 May 2003 JP
2003208314 Jul 2003 JP
2003248522 Sep 2003 JP
2003-284024 Oct 2003 JP
2003296487 Oct 2003 JP
2003-330560 Nov 2003 JP
2002182562 Jan 2004 JP
2004-062886 Feb 2004 JP
2004062561 Feb 2004 JP
2004118327 Apr 2004 JP
2004164491 Jun 2004 JP
2004295846 Oct 2004 JP
2004304755 Oct 2004 JP
2007525774 Sep 2007 JP
H08-054952 Feb 2011 JP
20010000805 Jan 2001 KR
20020037453 May 2002 KR
10-2004-0000323 Jan 2004 KR
1020040098627 Nov 2004 KR
20050008439 Jan 2005 KR
20050021782 Mar 2005 KR
10-0879907 Jan 2009 KR
2 207 618 Jun 2003 RU
WO 9301550 Jan 1993 WO
WO 9613013 May 1996 WO
WO 9624092 Aug 1996 WO
WO 9627155 Sep 1996 WO
WO-9721162 Jun 1997 WO
WO 9725798 Jul 1997 WO
WO 9743761 Nov 1997 WO
WO 9802793 Jan 1998 WO
WO 9809209 Mar 1998 WO
WO 9810381 Mar 1998 WO
WO-9811478 Mar 1998 WO
WO 9821679 May 1998 WO
WO 9821683 May 1998 WO
WO 9824037 Jun 1998 WO
WO 9833106 Jul 1998 WO
WO 9837481 Aug 1998 WO
WO 9858306 Dec 1998 WO
WO-0054126 Sep 2000 WO
WO 0058811 Oct 2000 WO
WO 0059150 Oct 2000 WO
WO-0135293 May 2001 WO
WO-0145012 Jun 2001 WO
WO 0152020 Jul 2001 WO
WO 0152021 Jul 2001 WO
WO-0163512 Aug 2001 WO
WO-0177795 Oct 2001 WO
WO-0193461 Dec 2001 WO
WO-0208969 Jan 2002 WO
WO 0219598 Mar 2002 WO
WO 0228006 Apr 2002 WO
WO 02057865 Jul 2002 WO
WO-02056155 Jul 2002 WO
WO 02088991 Nov 2002 WO
WO-02103495 Dec 2002 WO
WO-03009115 Jan 2003 WO
WO 03034313 Apr 2003 WO
WO-03030434 Apr 2003 WO
WO 03058508 Jul 2003 WO
WO03073688 Sep 2003 WO
WO-03107585 Dec 2003 WO
WO03107588 Dec 2003 WO
WO-2004092886 Oct 2004 WO
WO 2004097606 Nov 2004 WO
WO 2004102459 Nov 2004 WO
WO 2005010763 Feb 2005 WO
WO-2007032974 Mar 2007 WO
Non-Patent Literature Citations (278)
Entry
Patent Cooperation Treaty “PCT International Search Report” for International Application No. PCT/USO5/30490, Date of mailing of the international search report Sep. 18, 2007, Authorized Officer Jacqueline A. Whitfield.(The corresponding document was previously submitted in connection with parent U.S. Appl. No. 11/116,598 and is not being resubmitted herewith per 37 CFR 1.98(d).).
Oh, Kyung-Seok, “Acceleration technique for volume rendering using 2D texture based ray plane casting on GPU”, 2006 Intl. Conf. CIS, Nov. 3-6, 2006.
Slusallek, “Vision—An Architecture for Global Illumination Calculation”, IEEE Transactions on Visualization and Computer Graphics, vol. 1, No. 1; Mar. 1995; pp. 77-96.
Zhao, Hua, “A New Watermarking Scheme for CAD Engineering Drawings”, 9th Intl. Conf. Computer-Aided Industrial Design and Conceptual Design; CAID/CD 2008;Nov. 22-25, 2008.
Kuan-Ting Shen, “A New Digital Watermarking Technique for Video.” Proceedings Visual 2002, Hsin Chu, Taiwan, Mar. 11-13, 2002.
Lotspiech, “Broadcast Encryption's Bright Future,” IEEE Computer, Aug. 2002.
Memon, “Protecting Digital Media Content,” Communications of the ACM, Jul. 1998.
Ripley, “Content Protection in the Digital Home,” Intel Technology Journal, Nov. 2002.
Steinebach, “Digital Watermarking Basics—Applications—Limits,” NFD Information—Wissenschaft und Praxis, Jul. 2002.
DMOD WorkSpace OEM Unique Features; http://www.dmod.com/oem—features, downloaded Jan. 12, 2005.
Search Report Ref 313743.02, for Application No. PCT/US 06/10327, mailed Oct. 22, 2007.
Search Report Ref 313744.02, for Application No. PCT/US06/10664, mailed Oct. 23, 2007.
Preliminary Report on Patentability Ref 313744.02, for Application No. PCT/US2006/010664, mailed Nov. 22, 2007.
Arbaugh, “A Secure and Reliable Bootstrap Architecture,” IEEE Symposium on Security and Privacy, May 1997, pp. 65-71.
Search Report Ref 313746.02 WO, for Application No. PCT/US05/30489, mailed Aug. 2, 2007.
EP Partial Search Report, Ref. FB19620, for Application No. 06774630.5-1243 / 1902367 PCT/US2006026915, Mar. 29, 2012.
CN First Office Action for Appliction No. 200680013409.0, Jun. 26, 2009.
CN First Office Action for Appliction No. 200580049553.5, Aug. 8, 2008.
CN First Office Action for Appliction No. 200680013372.1, Dec. 18, 2009.
Bajikar, Trusted Platform Module (TPM) based Security on Notebook PCs—White Paper, Intel Corporation, Jun. 20, 2002.
Content Protection System Architecture, A Comprehensive Framework for Content Protection, Feb. 17, 2000.
Pruneda, Windows Media Technologies: Using Windows Media Rights Manager to Protect and Distribute Digital Media, Nov. 23, 2004.
Shi, A fast MPEG video encryption algorithm, 1998.
EP Communication for Application No. 04779544.8-2212/1678570 PCT/US2004024529, reference EP35527RK900kja, Mar. 9, 2010.
EP Communication for Application No. 04 779 544.8-2212, reference EP35527RK900kja, May 10, 2010.
EP Summons to attend oral proceedings for Application No. 04779544.8-2212/1678570, reference EP35527RK900kja, May 10, 2012.
Bovet, “An Overview of Unix Kernels” 2001, 0 Reilly, USA, XP-002569419.
JP Notice of Rejection for Application No. 2006-536592, Nov. 19, 2010.
CN First Office Action for Application No. 200480003262.8, Nov. 30, 2007.
CN Second Office Action for Application No. 200480003262.8, Jun. 13, 2008.
CA Office Action for Application No. 2,511,397, Mar. 22, 2012.
PCT international Search Report and Written Opinion for Application No. PCT/US04/24529, reference MSFT-4429, May 12, 2006.
JP Notice of Rejection for Application No. 2006-536586, Nov. 12, 2010.
EP Communication for Application No. 04 779 478.9-2212, reference EP35512RK900peu, May 21, 2010.
EP Communication for Application No. 04 779 4789-2212, reference EP35512RK900peti, Apr. 3, 2012.
AU Examiner's first report on patent application No. 2004287141, Dec. 8, 2008.
PCT International Search Report and Written Opinion for Application No. PCT/US04/24433, reference MSFT-4430, Nov. 29, 2005.
CN First Office Action for Application No. 200480003286.3, Nov. 27, 2009.
CA Office Action for Application No. 2,511,531, Mar. 22, 2012.
CN First Office Action for Application No. 200480012375.4, Sep. 4, 2009.
CN Second Office Action for Application No. 200480012375.4, Feb. 12, 2010.
AU Examiner's first report on patent application No. 2004288600, Jan. 18, 2010.
RU Office Action for Application No. 2005120671, reference 2412-132263RU/4102, Aug. 15, 2008.
PCT International Search Report and Written Opinion for Application No. PCT/US04/23606, Apr. 27, 2005.
PCT International Search Report and Written Opinion for Application No. PCT/US06/09904, reference 308715.02, Jul. 11, 2008.
CN First Office Action for Application No. 200680012462.9, Mar. 10, 2010.
JP Notice of Rejection for Application No. 2008-507668, Sep. 2, 2011.
EP Communication for Application No. 06738895.9-2202/1872479 PCT/US2006009904, reference FB19160, Sep. 16, 2011.
KR Office Action for Application No. 10-2007-7020527, reference 308715.08, Apr. 9, 2012.
JP Final Rejection for Application No. 2008-507668, May 18, 2012.
Kassier, “Generic QOS Aware Media Stream Transcoding and Adaptation,” Department at Distributed Systems, University of Ulm, Germany. Apr. 2003.
DRM Watch Staff, “Microsoft Extends Windows Media DRM to Non-Windows Devices,” May 7, 2004.
Lee, “Gamma: A Content-Adaptation Server for Wireless Multimedia Applications,” Bell Laboratories, Holmdel NJ, USA. Published in 2003.
Ihde, “Intermediary-based Transcoding Framework,” Jan. 2001.
LightSurf Technologies, “LightSurf Intelligent Media Optimization and Transcoding,” printed Apr. 18, 2005.
Digital 5, “Media Server,” printed Apr. 18, 2005.
“Transcode”, Nov. 29, 2002. XP-002293109.
“SoX—Sound eXchange”. Last Updated Mar. 26, 2003. XP-002293110.
Britton, “Transcoding: Extending e-buisness to new environments”, Accepted for publication Sep. 22, 2000. XP-002293153.
Britton, “Transcoding: Extending E-Business to New Environments”; IBM Systems Journal, vol. 40, No. 1, 2001.
Chandra, “Application-Level Differentiated Multimedia Web Services Using Quality Aware Transcoding”; IEEE Journal on Selected Areas of Communications, vol. 18, No. 12. Dec. 2000.
Chen, “An Adaptive Web Content Delivery System”. May 21, 2000. XP-002293303.
Chen, “iMobile EE—An Enterprise Mobile Service Platform”; AT&T Labs—Research, Wireless Networks, 2003.
Chi, “Pervasive Web Content Delivery with Efficient Data Reuse”, Aug. 1, 2002. XP-002293120.
Ripps, “The Multitasking Mindset Meets the Operating System”, Electrical Design News, Newton, MA. Oct. 1, 1990. XP 000162745.
Huang, “A Frame-Based MPEG Characteristics Extraction Tool and Its Application in video Transcoding”; IEEE Transaction on Consumer Electronics, vol. 48, No. 3. Aug. 2002.
Lee, “Data Synchronization Protocol in Mobile Computing Environment Using SyncML”; 5th IEEE International Conference on High Speed Networks and Multimedia Communications. Chungnam National University, Taejon, Korea. 2002.
Shaha, “Multimedia Content Adaptation for QoS Management over Heterogeneous Networks”. Rutgers University, Piscataway, NJ. May 11, 2001. XP-002293302.
Shen, “Caching Strategies in Transcoding-enabled Proxy Systems for Streaming Media Distribution Networks”. Dec. 10, 2003. XP-002293154.
Singh, “PTC: Proxies that Transcode and Cache in Heterogeneous Web Client Environments”; Proceedings of the Third International Conference on Web Information Systems, 2002.
Lei, “Context -based media Adaptation in Pervasive Computing”. University of Ottawa. Ottawa, Ontario, Canada. May 31, 2001. XP-002293137.
Hong, “On the construction of a powerful distributed authentication server without additional key management”, Computer Communications, Nov. 1, 2000.
Managing Digital Rights in Online Publishing, “How two publishing houses maintain control of copyright” information Management & Technology, Jul. 2001.
Jakobsson, “Proprietary Certificates”, 2002.
Kumik, “Digital Rights Management”, Computers and Law, E-commerce: Technology, Oct.-Nov. 2000.
Torrubia, “Cryptography Regulations for E-commerce and Digital Rights Management”, Computers & Security, 2001.
Zwollo, “Digital document delivery and digital rights management”, Information Services & Use, 2001.
Griswold, “A Method for Protecting Copyright on Networks”, IMA Intellectual Property Project Proceedings, 1994.
Kahn, “Deposit, Registration and Recordation in an Electronic Copyright Management System”, Coalition for Networked information, Last updated Jul. 3, 2002.
Evans, “DRM: Is the Road to Adoption Fraught with Potholes?”, 2001.
Fowler, “Technoiogy's Changing Role in Intellectual Property Rights”, IT Pro, Mar.-Apr. 2002.
Gable, “The Digital Rights Conundrum”, Transform Magazine—Information Lifecycle, Nov. 2001.
Gunter, “Models and Languages for Digital Rights”, Proceedings of the 34th Hawaii International Conference on System Sciences, Jan. 3-6, 2001.
Peinado, “Digital Rights Management in a Multimedia Environment”, SMPTE Journal, Apr. 2002.
Royan, “Content Creation and Rights Management: experiences of SCRAN (the Scottish Cultural Resources Access Network)”, 2000.
Valimaki, “Digital rights management on Open and Semi-open Networks”, Proceedings of the Second IEEE Workshop on Internet Applications, Jul. 23-24, 2001.
Yu, “Digital multimedia at home and content rights management”, Proceedings 2002 IEEE 4th International Workshop on Networked Appliances, Jan. 15-16, 2002.
Hwang, “Protection of Digital Contents on Distributed Multimedia Environment”, Proceedings of the IASTED International Conference, Internet and Multimedia Systems and Applications, Nov. 19-23, 2000.
Castro, “Secure routing for structured peer-to-peer overlay networks”, Proceedings of the Fifth Symposium on Operating Systems Design and Implementation, Dec. 9-11, 2002.
Friend, “Making the Gigabit IPsec VPN Architecture Secure”, Computer, Jun. 2004.
Hulicki, “Security Aspects in Content Delivery Networks”, The 6th World Multiconference on Systemics, Cybernetics and Informatics, Jul. 14-18, 2002.
McGarvey, “Arbortext: Enabler of Multichannel Publishing”, EContent, Apr. 2002.
Moffett, “Contributing and enabling technologies for knowledge management”, International Journal Information Technology and Management, Jul. 2003.
Aviv, “Aladdin Knowledge Systems Partners with Rights Exchange, Inc. To Develop a Comprehensive Solution for Electronic Software Distribution,” Aug. 3, 1998.
Amdur, “Metering Online Copyright,” Jan. 16, 1996.
Amdur, “InterTrust Challenges IBM Digital Content Metering; Funding, Name Change, Developer Kit Kick Off Aggressive Market Push”, Report on Electronic Commerce, Jul. 23, 1996.
Armati, “Tools and standards for protection, control and presentation of data,” Last updated Apr. 3, 1996.
Benjamin, “Electronic Markets and Virtual Value Chains on the Information Superhighway,” Sloan Management Review, Winter 1995.
Cassidy, “A Web developer's guide to content encapsulation technology; New tools offer clever ways to distribute your programs, stories, and get paid for it”, Apr. 1997.
Clark, “Software Secures Digital Content on Web”, Interactive Week, Sep. 25, 1995.
Cox, “Superdistribution”, Idées Fortes, Wired, Sep. 1994.
Cox, “What if there is a silver bullet”, J. Object Oriented Program, Jun. 1992.
Hauser, “Does Licensing Require New Access Control Techniques?” Aug. 12, 1993.
Hudgins-Bonafield, “Selling Knowledge on the Net; Container Consortium Hopes to Revolutionize Electronic Commerce,” Network Computing, Jun. 1, 1995.
“IBM spearheading intellectual property protection technology for information on the Internet,” May 1, 1997.
“Technological Solutions Rise to Complement Law's Small Stick Guarding Electronic Works; Vendors fight to establish beachheads in copy-protection field,” Information Law Alert, Jun. 16, 1995.
Kaplan, “IBM Cryptolopes, SuperDistribution and Digital Rights Management,” Dec. 30, 1996.
Kent, “Protecting Externally Supplied Software in Small Computers,” Sep. 1980.
Kohl, “Safeguarding Digital Library Contents and Users; Protecting Documents Rather Than Channels,” D-Lib Magazine, Sep. 1997.
Linn, “Copyright and Information Services in the Context of the National Research and Education Network,” IMA Intellectual Property Project Proceedings, Jan. 1994.
McNab, “Superdistribution works better in practical applications,” Mar. 2, 1998.
Moeller, “NetTrust lets cyberspace merchants take account,” PC Week, Nov. 20, 1995.
Moeller, “IBM takes charge of E-commerce; Plans client, server apps based on SET,” Apr. 29, 1996.
Pemberton, “An Online Interview with Jeff Crigler at IBM InfoMarket,” Jul. 1996.
“Licensit: kinder, gentler copyright? Copyright management system links content, authorship information,” Seybold Report on Desktop Publishing, Jul. 8, 1996.
Sibert, “The DigiBox: A Self-Protecting Container for Information Commerce,” First USENIX Workshop on Electronic Commerce, Jul. 11-12, 1995.
Sibert, “Securing the Content, Not the Wire, for Information Commerce,” Jul. 1995.
Smith, “A New Set of Rules for Information Commerce; Rights-protection technologies and personalized-information commerce will affect all knowledge workers” Electronic Commerce, Nov. 6, 1995.
Stefik, “Trusted Systems; Devices that enforce machine-readable rights to use the work of a musician or author may create secure ways to publish over the Internet,” Scientific American, Mar. 1997.
Stefik, “Technical Perspective; Shifting the Possible: How Trusted Systems and Digital Property Rights Challenge Us to Rethink Digital Publishing,” Berkeley Technology Law Journal, Spring 1997.
Tarter, “The Superdistribution Model,” Soft Letter: Trends & Strategies in Software Publishing, Nov. 15, 1996.
Secor, “Rights Management in the Digital Age: Trading in Bits, Not Atoms,” Spring 1997.
Weber, “Digital Right Management Technology,” A Report to the International Federation of Reproduction Rights Organisations, Oct. 1995.
White, “ABYSS: An Architecture for Software Protection,” IEEE Transactions on Software Engineering, Jun. 1990.
White, “ABYSS: A Trusted Architecture for Software Protection,” IEEE Symposium on Security and Privacy, Apr. 27-29, 1987.
“Boxing Up Bytes”. No publication date available. This reference was cited in U.S. Appl. No. 09/892,371 on Mar. 22, 2002.
Ramanujapuram, “Digital Content & Intellectual Property Rights: A specification language and tools for rights management,” Dr. Dobb's Journal, Dec. 1998.
CN Notice on First Office Action for Application No. 200510056328.6, Jul. 24, 2009.
EP Communication for Application No. 05 101 873.7-1247, reference EP34127TE900kja, Dec. 19, 2006.
JP Notice of Rejection for Application No. 2005-067120, Dec. 28, 2010.
Bellovin, “Defending Against Sequence Number Attacks”, AT&T Research, IETF Standard, Internet Engineering Task Force, May 1996.
Chung Lae Kim, “Development of WDM Integrated Optical Protection Socket Module,” Journal of Korean Institute of Telematics and Electronics, Mar. 1996.
Gardan, N+P (With and Without Priority) and Virtual Channel Protection: Comparison of Availability and Application to an Optical Transport Network, 7th International Conference on Reliability and Maintainability, Jun. 18, 1990.
Microsoft, “Digital Rights Management for Audio Drivers” Updated Dec. 4, 2001, XP002342580.
Microsoft, “Hardware Platform for the Next-Generation Secure Computing Base”, Windows Platform Design Notes, 2003, XP-002342581.
Microsoft, Security Model for the Next-Generation Secure Computing Base, Windows Platform Design Notes, 2003, XP002342582.
Choudhury, “Copyright Protection for Electronic Publishing Over Computer Networks”, Submitted to IEEE Network Magazine Jun. 1994.
CN Third Office Action for Application No. 03145223.X, Mar. 7, 2008.
EP Communication for Application No. 03 011 235.3-1247, Reference EP27518-034/gi, Apr. 22, 2010.
EP Communication for Application No. 03 011 235.3-1247, Reference EP27518-034/gi, Nov. 4, 2011.
JP Notice of Rejection for Application No. 2003-180214, Sep. 18, 2009.
RU Official Action for Application No. 2003118755/09(020028), reference 2412-127847RU/3152, Jul. 3, 2007.
“DirectShow System Overview,” Last updated Apr. 13, 2005.
“Features of the VMR,” accessed on Nov. 9, 2005.
“Introduction to DirectShow Application Programming,” accessed on Nov. 9, 2005.
“Overview of Data Flow in DirectShow,” accessed on Nov. 9, 2005.
“Plug-in Distributors,” accessed on Nov. 9, 2005.
“Using the Video Mixing Renderer,” accessed on Nov. 9, 2005.
“VMR Filter Components,” accessed on Nov. 9, 2005.
JP Notice of Rejection for Application No. 2009-288223, Jun. 29, 2012.
EP Communication for Application No. 11007532.2-1247/2492774, Reference EP27518ITEjan, Aug. 3, 2012.
Abbadi, “Digital Rights Management Using a Mobile Phone”, Aug. 19-22, 2007, ICEC '07 Proceedings of the Ninth International Conference on Electronic Commerce.
PCT International Search Report and Written Opinion for Application No. PCT/US06/26915, reference 313859.03, Oct. 17, 2007.
CN First Office Action for Application No. 200680025136.1, Apr. 24, 2009.
JP Notice of Rejection for Application No. 2008-521535, Jun. 10, 2011.
JP Notice of Rejection for Application No. 2008-521535, Sep. 27, 2011.
KR Preliminary Rejection for Application No. 10-2008-7000503, Sep. 27, 2012.
CN Notice on Reexamination for Application No. 200680025136.1, Jun. 17, 2013.
PCT International Search Report and Written Opinion for Application No. PCT/US06/27251, reference 311888.02, Jul. 3, 2007.
CN First Office Action for Application No. 200680026251.0, Oct. 8, 2010.
“International Search Report and Written Opinion mailed Jan. 16, 2007”, Application No. PCT/US2006/034622, 6 pages.
“International Search Report and Written Opinion mailed Nov. 30, 2006”, Application No. PCT/US05/40950, 8 pages.
Qiao, Daji et al., “MiSer: An Optimal Low-Energy Transmission Strategy for IEEE 802.11a/h”, obtained from ACM, (Sep. 2003), pp. 161-175.
“International Search Report and Written Opinion mailed Apr. 22, 2008”, Application No. PCT/US2007/087960, 7 pages.
Eren, H. et al., “Fringe-Effect Capacitive Proximity Sensors for Tamper Proof Enclosures”, Proceedings of 2005 Sensors for Industry Conference, (Feb. 2005), pp. 22-25.
“International Search Report and Written Opinion mailed Jul. 24, 2008”, Application No. PCT/US05/40966, 13 pages.
Schneier, B. “Applied Cryptography, Second Edition: Protocols, Algorithms, and Source Code in C (cloth)”, (Jan. 1, 1996), 13 pages.
Goering, Richard “Web Venture Offers Metered Access to EDA Packages—Startup Winds Clocks by the hour Tools (E*CAD Will Launch Web Site That Provides Pay-Per-Use and Pay-Per-Hour Access to Range of Chip Design Software)”, Electronic Engineering Times, (Nov. 6, 2003), 3 pages.
Zemao, Chen et al., “A Malicious Code Immune Model Based on Program Encryption”, IEEE—Wireless Communication, Networking and Mobile Computing, WICOM '08, 4th International Conference on Oct. 12-14, 2008, 5 pages.
Mufti, Dr. Muid et al., “Design and Implementation of a Secure Mobile IP Protocol”, Networking and Communication, INCC 2004, International Conference on Jun. 11-13, 2004, 5 pages.
Davida, George I., et al., “UNIX Guardians: Active User Intervention in Data Protection”, Aerospace Computer Security Applications Conference, Fourth Dec. 12-16, 1988, 6 pages.
Morales, Tatiana “Understanding Your Credit Score”, http://www.cbsnews.com/stories/2003/04/29/earlyshow/contributors/raymartin/main55152.shtml retrieved from the Internet on Apr. 23, 2009, (Apr. 30, 2003), 3 pages.
“Achieving Peak Performance: Insights from a Global Survey on Credit Risk and Collections Practices”, GCI Group Pamphlet, (2002, 2004), 12 pages.
“Equifax Business Solutions—Manage Your Customers”, Retrieved from the Internet from http://www.equifax.com/sitePages/biz/smallBiz/?sitePage=manage Customers on Oct. 14, 2005, 3 pages.
“Prequalification Using Credit Reports”, Retrieved from the Internet at http://www.credco.com/creditreports/prequalification.htm on Oct. 14, 2005, 2 pages.
Gao, Jerry et al., “Online Advertising—Taxonomy and Engineering Perspectives”, http://www.engr.sisu.edu/gaojerry/report/OnlineAdvertising%20.pdf, (2002), 33 pages.
Oshiba, Takashi et al., “Personalized Advertisement-Duration Control for Streaming Delivery”, ACM Multimedia, (2002), 8 pages.
Yue, Wei T., et al., “The Reward Based Online Shopping Community”, Routledge, vol. 10, No. 4, (Oct. 1, 2000), 2 pages.
“International Search Report and Written Opinion mailed Nov. 8, 2007”, Application No. PCT/US05/40967, 5 pages.
“International Search Report and Written Opinion”, Application Serial No. PCT/US05/40940, 9 pages, May 2, 2008.
“International Search Report and Written Opinion mailed Apr. 25, 2007”, Application No. PCT/US05/040965, 5 pages.
“International Search Report and Written Opinion mailed Sep. 25, 2006”, Application No. PCT/US05/40949, 7 pages.
“EP Office Action mailed Nov. 17, 2006”, Application No. 05110697.9, 6 pages.
“EP Office Action mailed Apr. 5, 2007”, Application No. 05110697.9, 5 pages.
“EP Summons to Attend Oral Proceedings mailed Sep. 27, 2007”, Application No. 05110697.9, 7 pages.
“Decision to Refuse a European Application mailed Feb. 15, 2008”, Application No. 05110697.9, 45 pages.
“International Search Report and Written Opinion mailed Sep. 8, 2006”, Application No. PCT/US05/040942, 20 pages.
“European Search Report mailed Dec. 6, 2010”, Application No. 05820177.3, 8 pages.
Lampson, Butler et al., “Authentication in Distributed Systems: Theory and Practice”, ACM Transactions on Computer Systems, v10, 265, (1992), 18 pages.
“Office Action mailed Jun. 29, 2009”, Mexican Application No. MX/a/2007/005657, 2 pages.
“Search Report Dated Jan. 11, 2008”, EP Application No. 05820090.8, 7 pages.
“Examination Report mailed Mar. 5, 2008”, EP Application No. 05820090.8, 1 pages.
“First Office Action mailed Apr. 11, 2008”, Chinese Application No. 200580038813.9, 11 pages.
“Office Action mailed Jun. 29, 2009”, Mexican Application No. MX/a/2007/005656, 6 pages.
“Office Action mailed Nov. 30, 2009”, Mexican Application No. MX/a/2007/005659, 6 pages.
“Notice of Allowance mailed Jul. 2, 2010”, Mexican Application No. MX/a/2007/005659, 2 pages.
“Extended European Search Report mailed Dec. 6, 2010”, EP Application No. 05820177.3, 8 pages.
“Second Office Action mailed Dec. 18, 2009”, Chinese Application No. 200580038812.4, 24 pages.
“Third Office Action mailed Apr. 1, 2010”, Chinese Application No. 200580038812.4, 9 pages.
“Notice on Grant of Patent Right for Invention mailed May 5, 2011”, Chinese Application No. 200580038812.4, 4 pages.
“Office Action mailed Jul. 7, 2009”, Mexican Application No. MX/a/2007/005660, 8 pages.
“Notice of Allowance mailed Feb. 18, 2010”, Mexican Application No. MX/a/2007/005660, 2 pages.
“Extended European Search Report mailed Aug. 13, 2010”, EP Application No. 05823253.9, 7 pages.
“Notice on the First Office Action mailed Sep. 27, 2010”, Chinese Application No. 200580038745.6, 6 pages.
“Office Action mailed Jul. 8, 2009”, Mexican Application No. MX/a/2007/005662, 7 pages.
“Notice of Allowance mailed Feb. 19, 2010”, Mexican Application No. MX/a/2007/005662, 2 pages.
“Partial Search Report mailed Jul. 23, 2010”, EP Application No. 05821183.0.
“Extended European Search Report mailed Jan. 7, 2011”, EP Application No. 05821183.0, 9 pages.
“Notice of Allowance mailed Dec. 25, 2009”, Chinese Application No. 200580038773.8, 4 pages.
“Office Action mailed Jun. 26, 2009”, Mexican Application No. MX/a/2007/005655, 5 pages.
“Office Action mailed Feb. 9, 2010”, Mexican Application No. MX/a/2007/005855, 6 pages.
“Office Action mailed Sep. 24, 2010”, Mexican Application No. MX/a/2007/005655, 3 pages.
“Extended European Search Report mailed Jan. 21, 2010”, EP Application No. 05819896.1, 8 pages.
“Office Action mailed Mar. 19, 2010”, EP Application No. 05819896.1, 1 page.
“Office Action mailed Feb. 10, 2010”, Mexican Application No. MX/a/2007/005656, 5 pages.
“Office Action mailed Oct. 18, 2010”, Mexican Application No. MX/a/2007/005656, 3 pages.
“Notice on the First Office Action mailed Jul. 30, 2010”, Chinese Application No. 200680033207.2, 7 pages.
“EP Search Report mailed Jan. 2, 2008”, EP Application No. 05109616.2, 7 pages.
“Flonix: USB Desktop OS Solutions Provider, http://www.flonix.com”, Retrieved from the Internet Jun. 1, 2005, (Copyright 2004), 2 pages.
“Migo by PowerHouse Technologies Group, http://www.4migo.com”, Retrieved from the Internet Jun. 1, 2005, (Copyright 2003), 3 pages.
“WebServUSB. http://www.webservusb.com”, Retrieved from the Internet Jun. 1, 2005, (Copyright 2004), 16 pages.
“Notice of Rejection mailed Jul. 8, 2011”, Japanese Application No. 2007-541363, 10 pages.
“Notice of Rejection mailed Aug. 5, 2011”, Japanese Patent Application No. 2007-552142, 8 pages.
“Forward Solutions Unveils Industry's Most Advanced Portable Personal Computing System on USB Flash Memory Device”, Proquest, PR Newswire, http://proquest.umi.com/pqdweb?index=20&did=408811931&SrchMode=1&sid=6&Fmt=3, Retrieved from the Internet Feb. 15, 2008, (Sep. 22, 2003), 3 pages.
“Office Action mailed May 26, 2008”, EP Application No. 05109616.2, 5 pages.
“Notice on Division of Application mailed Aug. 8, 2008”, CN Application No. 200510113398.0, 2 pages.
“Notice on First Office Action mailed Dec. 12, 2008”, CN Application No. 200510113398.0.
“The Second Office Action mailed Jul. 3, 2009”, CN Application No. 200510113398.0, 7 pages.
“Notice on Proceeding With the Registration Formalities mailed Oct. 23, 2009”, CN Application No. 200510113398.0, 4 pages.
“Examiner's First Report on Application mailed Jun. 4, 2010”, AU Application No. 2005222507, 2 pages.
“Notice of Acceptance mailed Oct. 14, 2010”, AU Application No. 2005222507, 3 pages.
“Decision on Grant of a Patent for Invention mailed Apr. 29, 2010”, Russian Application No. 2005131911, 31 pages.
“Notice of Allowance mailed Nov. 13, 2009”, MX Application No. PA/a/2005/011088, 2 pages.
“TCG Specification Architecture Overview”, Revision 1.2, (Apr. 28, 2004), 55 pages.
“International Search Report and Written Opinion mailed Jun. 19, 2007”, PCT Application No. PCT/US05/46091, 11 pages.
“Notice on Grant of Patent Right for Invention mailed Jan. 29, 2010”, CN Application No. 200580040764.2, 4 pages.
“International Search Report mailed Jan. 5, 2007”, Application No. PCT/US2006/032708, 3 pages.
“Cyotec—CyoLicence”, printed from www.cyotec.com/products/cyolcence on Sep. 7, 2005, (Copyright 2003-2005).
“Magic Desktop Automation Suite for the Small and Mid-Sized Business”, printed from www.remedy.com/soultions/magic—it—suite.htm on Sep. 7, 2005, (Copyright 2005), 4 pages.
“PACE Anti-Piracy Introduction”, printed from www.paceap.com/psintro.html on Sep. 7, 2005, (Copyright 2002), 4 pages.
“Office Action mailed Jul. 6, 2009”, MX Application No. MX/a/2007/005661, 6 pages.
“Office Action mailed Oct. 1, 2010”, MX Application No. MX/a/2007/005661, 3 pages.
“Office Action mailed Mar. 8, 2011”, MX Application No. MX/a/2007/005661, 8 pages.
“Notice on Second Office Action mailed Jun. 7, 2010”, CN Application No. 200680030846.3, 6 pages.
“Decision on Rejection mailed Sep. 13, 2010”, CN Application No. 200680030846.3, 5 pages.
Kwok, Sai H., “Digital Rights Management for the Online Music Business”, ACM SIGecom Exchanges, vol. 3, No. 3, (Aug. 2002), pp. 17-24.
“International Search Report and Written Opinion mailed Mar. 21, 2007”, Application No. PCT/US05/46223, 10 pages.
“the First Office Action mailed Oct. 9, 2009”, CN Application No. 200580043102.0, 20 pages.
“International Search Report and Written Opinion mailed Jul. 9, 2008”, Application No. PCT/US05/46539, 11 pages.
“Notice of the First Office Action mailed Dec. 29, 2010”, CN Application No. 200580044294.7, 9 pages.
“Office Action mailed Jul. 1, 2009”, MX Application No. 2007/a/2007/007441.
“European Search Report mailed Aug. 31, 2011”, EP Application No. 05855148.2, 6 pages.
“International Search Report and Written Opinion mailed Sep. 25, 2007”, Application No. PCT/US06/12811, 10 pages.
“Examiner's First Report mailed Sep. 15, 2009”, AU Application No. 2006220489, 2 pages.
“Notice of Acceptance mailed Jan. 25, 2010”, AU Application No. 2006220489, 2 pages.
“The First Office Action mailed Aug. 22, 2008”, CN Application No. 200680006199.2, 23 pages.
“The Second Office Action mailed Feb. 20, 2009”, CN Application No. 200680006199.2, 9 pages.
“The Fourth Office Action mailed Jan. 8, 2010”, CN Application No. 200680006199.2, 10 pages.
“The Fifth Office Action mailed Jul. 14, 2010”, CN Application No. 200680006199.2, 6 pages.
“Notice on Grant of Patent mailed Oct. 20, 2010”, CN Application No. 200680006199.2, 4 pages.
“First Office Action mailed Aug. 21, 2009”, CN Application No. 20068003046.3, 8 pages.
“Notice on the First Office Action mailed Dec. 11, 2009”, CN Application No. 200510127170.7, 16 pages.
“The Third Office Action mailed Jun. 5, 2009”, CN Application No. 200680006199.2, 7 pages.
“Notice of Rejection mailed Sep. 9, 2011”, JP Application No. 2007-548385, 9 pages.
“Notice of Rejection mailed Nov. 11, 2011”, Japanese Application No. 2005-301957, 21 pages.
“Extended European Search Report mailed Dec. 21, 2011”, EP Application No. 05854752.2, 7 pages.
“Final Rejection mailed Jan. 17, 2012”, Japan Application No. 2007-552142, 8 pages.
“EP Office Action mailed Mar. 8, 2012”, EP Application No. 05109616.2, 6 pages.
“Notice of Preliminary Rejection mailed May 30, 2012”, Korean Patent Application No. 10-2007-7011069, 1 page.
“Extended European Search Report mailed Jul. 5, 2012”, EP Application No. 05851550.3, 6 pages.
“Preliminary Rejection mailed Jul. 4, 2012”, Korean Application No. 10-2007-7012294, 2 pages.
“Office Action mailed Jun. 8, 2012”, JP Application No. 2005-301957, 8 pages.
KR Notice of Final Rejection for Application No. 10-2007-7024145, Reference No. 313361.12, Oct. 23, 2012.
KR Notice of Preliminary Rejection for Application No. 2007-7023842, Reference No. 313361.06, Oct. 24, 2012.
EP Communication for Application No. 04 778 899.7-2212, Reference EP35523RK900peu, Nov. 23, 2012.
Utagawa, “Making of card applications using IC Card OS MULTOS” Mar. 1, 2003.
Nakajima, Do You Really Know It? Basics of Windows 2000/XP, Jan. 2004.
N+1 Network Guide, “First Special Feature, Security Oriented Web Application Development, Part 3, Method for Realizing Secure Session Management”, Jan. 2004.
Related Publications (1)
Number Date Country
20090158036 A1 Jun 2009 US
Provisional Applications (1)
Number Date Country
60673979 Apr 2005 US
Divisions (1)
Number Date Country
Parent 11116598 Apr 2005 US
Child 12390505 US