Protected media pipeline

Information

  • Patent Grant
  • Patent Number
    9,363,481
  • Date Filed
    April 27, 2005
  • Date Issued
    June 7, 2016
Abstract
A system for processing a media content comprising an application space, a media control mechanism operating in the application space, the media control mechanism controlling the operation of the system, a user interface adapted to provide input to the media control mechanism, a protected space distinct from the application space, and a protected media pipeline operating in the protected space, the protected media pipeline coupled to the media control mechanism, the protected media pipeline adapted to access the media content, process the media content, and output the media content.
Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. Provisional Patent Application No. 60/673,979, filed on Apr. 22, 2005.


DESCRIPTION OF THE DRAWINGS

The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:



FIG. 1 is a block diagram showing an example of a typical prior art media player or application designed to operate on an exemplary personal computer.



FIG. 2 is a block diagram showing an example of a trusted media system comprising an application space and a distinct protected space.



FIG. 3 is a block diagram showing exemplary components comprising an end-to-end system for protecting media content and other data from initial input to final output of a computing environment.



FIG. 4 is a block diagram showing exemplary components comprising a protected media pipeline operating in a protected space as part of a trusted media system.



FIG. 5 is a block diagram showing an alternate example of a protected media pipeline having a proxied media source as part of a trusted media system.



FIG. 6 is a block diagram showing a further alternative example of a trusted media system.



FIG. 7 is a block diagram showing a plurality of protected media pipelines.



FIG. 8 is a block diagram showing an exemplary computing environment in which the software applications, systems and methods described in this application may be implemented.



FIG. 9 is a block diagram showing a conventional media application processing media content operating in a conventional computing environment with an indication of an attack against the system.



FIG. 10 is a block diagram showing a trusted application processing media content and utilizing a protected environment or protected space that tends to be resistant to attack.



FIG. 11 is a block diagram showing exemplary components of a trusted application that may be included in the protected environment.



FIG. 12 is a block diagram showing a system for downloading digital media content from a service provider to an exemplary trusted application utilizing a protected environment.



FIG. 13 is a block diagram showing exemplary attack vectors that may be exploited by a user or mechanism attempting to access media content or other data typically present in a computing environment in an unauthorized manner.



FIG. 14 is a flow diagram showing the process for creating and maintaining a protected environment that tends to limit unauthorized access to media content and other data.



FIG. 15 is a block diagram showing exemplary kernel components and other components utilized in creating an exemplary secure computing environment.



FIG. 16 and FIG. 17 are flow diagrams showing an exemplary process for loading kernel components to create an exemplary secure computing environment.



FIG. 18 is a block diagram showing a secure computing environment loading an application into an exemplary protected environment to form a trusted application that may be resistant to attack.



FIG. 19 is a flow diagram showing an exemplary process for creating a protected environment and loading an application into the protected environment.



FIG. 20 is a block diagram showing an exemplary trusted application utilizing an exemplary protected environment periodically checking the security state of the secure computing environment.



FIG. 21 is a flow diagram showing an exemplary process for periodically checking the security state of the secure computing environment.



FIG. 22 is a block diagram showing an exemplary computing environment including a representation of a protected environment, a trusted media system, and other related elements.







Like reference numerals are used to designate like elements in the accompanying drawings.


DETAILED DESCRIPTION

The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present examples may be constructed or utilized. The description sets forth the functions of the examples and the sequence of steps for constructing and operating the examples. However, the same or equivalent functions and sequences may be accomplished by different examples.


Although the present examples are described and illustrated herein as being implemented in a computer system, the system described is provided as an example and not a limitation. As those skilled in the art will appreciate, the present examples are suitable for application in a variety of different types of electronic systems.


Introduction


Digital media content is widely used in the form of CDs, DVDs and downloadable files. Various devices are able to process this media content including personal computers running various media player applications and the like, CD and DVD players, MP3 players and other general-purpose and/or dedicated electronic devices designed to process digital media content.


Because media content often comes in the form of for-sale consumer products and the like, producers and providers may be eager to protect their media content from unauthorized access, duplication, use, etc. Therefore, media content is often encrypted and/or otherwise secured. Some form of encryption key and/or other access mechanism may be provided for use with the media so that it can be accessed when and how appropriate. This key or mechanism may be used by a media application or the like to gain access to the protected media for processing, playing, rendering, etc.


Once the key or other mechanism has been used to decrypt or otherwise access media content within a system the media content may be vulnerable in its unprotected form. It may be possible to attack the system and/or media application so as to gain access to the unprotected media content. This may lead to the unauthorized access, use, duplication, distribution, etc. of the media content.


To avoid unauthorized access, a system that rightfully accesses the media content should be capable of protecting the media content. This protection should extend from the time the key or the like is obtained, used to access the media content, throughout any processing performed on the content, until the content is appropriately rendered in its authorized form. For example, a particular meeting may be recorded and encrypted using an access key with the intent of making the recording available to authorized personnel. Later, the recording is made available to an authorized individual via a media application on a PC. The media application uses the key to decrypt and access the media content, process it and play it for the listener. But if the media application itself has been compromised, or the application and/or content is attacked, the unencrypted media may no longer be protected.


One approach may be to construct a system for accessing, processing and rendering the media content within a protected environment that is designed to prevent unauthorized access to the media content. The example provided here describes a process and system for protecting media content from unauthorized access. Protection may be afforded by a protected media pipeline, among other mechanisms, which processes some, or all, of a media within a protected environment or protected space. A protected media pipeline may be composed of several elements.


A media source that may be part of the protected media pipeline accesses the media content, passes it through a set of transform functions or processes (decoders, effects, etc.) and then to a media sink which renders the processed media to one or more media outputs (a video rendering process, an audio rendering process, etc.). As an example, rendering may be as simple as sending audio signals to a set of headphones, or it may be sending protected content in a secure manner to yet another process, system or mechanism external to the protected media pipeline.


A protected media pipeline may be constructed as a set or chain of media processing mechanisms operating in a secure or protected environment. In a PC, a protected media pipeline can be thought of as a software process that operates in a secure environment which protects the media content from unauthorized access while the content is being accessed, played and/or otherwise processed by the media system. When media content is being processed by an electronic device, a protected media pipeline can be thought of as a set of media processing mechanisms operating within a secure environment such that the media being processed is resistant to unauthorized access. The mechanism for providing this resistance may be purely physical in nature, such as a sealed case or lack of access points to the media content.
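

For illustration, the following C++ sketch models a media pipeline as a chain of source, transform, and sink interfaces, assuming nothing beyond what is described above. The type names (MediaSource, Transform, MediaSink, MediaPipeline) and the trivial stand-in stages are hypothetical, and a real protected pipeline would run inside a protected space rather than an ordinary process as here.

```cpp
// Minimal sketch of a media pipeline: source -> transforms -> sink.
#include <cstdint>
#include <iostream>
#include <memory>
#include <utility>
#include <vector>

using Buffer = std::vector<uint8_t>;

// The three roles described above: a source that accesses content,
// transforms that process it, and a sink that renders the result.
struct MediaSource { virtual ~MediaSource() = default; virtual Buffer Read() = 0; };
struct Transform   { virtual ~Transform()   = default; virtual Buffer Process(Buffer in) = 0; };
struct MediaSink   { virtual ~MediaSink()   = default; virtual void Render(const Buffer& out) = 0; };

// Passes each buffer from the source through every transform in order,
// then hands it to the sink.
class MediaPipeline {
public:
    MediaPipeline(std::unique_ptr<MediaSource> src,
                  std::vector<std::unique_ptr<Transform>> transforms,
                  std::unique_ptr<MediaSink> sink)
        : src_(std::move(src)), transforms_(std::move(transforms)), sink_(std::move(sink)) {}

    void Run() {
        Buffer buf = src_->Read();
        for (auto& t : transforms_) buf = t->Process(std::move(buf));
        sink_->Render(buf);
    }

private:
    std::unique_ptr<MediaSource> src_;
    std::vector<std::unique_ptr<Transform>> transforms_;
    std::unique_ptr<MediaSink> sink_;
};

// Trivial stand-ins so the sketch runs end to end.
struct ByteSource : MediaSource {
    Buffer Read() override { return {1, 2, 3, 4}; }
};
struct Doubler : Transform {  // placeholder "effect"
    Buffer Process(Buffer in) override {
        for (auto& b : in) b = static_cast<uint8_t>(b * 2);
        return in;
    }
};
struct PrintSink : MediaSink {
    void Render(const Buffer& out) override {
        for (auto b : out) std::cout << int(b) << ' ';
        std::cout << '\n';
    }
};

int main() {
    std::vector<std::unique_ptr<Transform>> xforms;
    xforms.push_back(std::make_unique<Doubler>());
    MediaPipeline pipeline(std::make_unique<ByteSource>(), std::move(xforms),
                           std::make_unique<PrintSink>());
    pipeline.Run();  // prints: 2 4 6 8
}
```

The chain structure allows transforms to be added, removed, or reordered without altering the source or the sink.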


There may be two major aspects to constructing a trusted media system with a protected media pipeline. First, a trusted media system may be designed and constructed in such a way that it acknowledges and adheres to any access rules of the media content by ensuring that no actions are taken with the content above and beyond those allowed. Various mechanisms known to those skilled in this technology area may be used to address this first point. These mechanisms may include using encryption/decryption, key exchanges, passwords, licenses, interaction with a digital rights management system, and the like. Further, this may be as simple as storing the media content on/in a device such that it is resistant to physical, electronic or other methods of accessing and using the media content, except as intended.


Second, the trusted media system may be designed and constructed such that the media content being processed is secure from malicious attacks and/or unauthorized access and use. Processing the media content via a protected media pipeline operating in a protected environment or protected space addresses this second point. So in short, a protected media pipeline operating in a protected space refers to a media processing environment that resists unauthorized access to the media content being processed.



FIG. 1 is a block diagram showing an example of a typical prior art media player or application 100 designed to operate on an exemplary personal computer (FIG. 8, 800). Equivalently, media players may operate on other devices with similar processing capabilities such as consumer electronic devices and the like. Other media applications may include, but are not limited to, media processors, media manipulators, media analyzers, or media formatters. A media application may be a software application program that provides a way of playing media such as audio and video by a digital processor such as a CPU (FIG. 8, 807) or the like. A media application may include a user interface or graphic 101 that may indicate the media being played and provides various user controls. Controls may be accessed through activation with a computer pointing device such as a mouse or by conventional buttons or the like. Such a media application may be thought of as a software application program operating in an application space 102 that is provided by the PC's computing environment (FIG. 8, 801) or operating system.


Another example of a media player may be a hardware device comprising a memory capable of storing media content and various buttons, switches, displays and controls and the like to allow a user to control the device, select the media to be played, control volume, download media content, etc.


The media player 100 may comprise mechanisms 104, 106 and 108. These mechanisms may operate in the application space 102. For a software media player, an application space 102 may be a space created in system memory (FIG. 8, 809) on a PC (FIG. 8, 800) where various software components or processes can be loaded and executed. For a hardware media player, an application space 102 may be a printed circuit board and an electronic module containing the electronic elements that perform the processing and functions of the media player 100. The media player application 100 may include other spaces and mechanisms which may provide additional capabilities or features that may or may not be directly related to the processing of media. For example, a second media player playing a music selection may operate in a media application at the same time as a media player playing a newscast.


The application space 102 may include a user interface process 104 coupled to a media control process 106 which in turn is coupled to a media processing process 108. Typically these processes enable the media application 100 to couple to a source of media content 110, process the media content 110 and render it via media output 130. The media content 110 may or may not be encrypted or otherwise protected as part of an overall security and access control scheme.


For example, when activated the media application 100 may access audio content 112 and video content 114 typically available on a DVD ROM, an on-line source, or the like. The media content 110 may be played via media processing 108 which renders the content as audio output 132 and/or video output 134. Audio and video may typically be rendered on the speakers and/or display of a PC (FIG. 8, 800). This system is only one example of common media applications and environments that enable audio and video and the like to be processed, played and/or provided to other processes or systems. Another example of a media application would be a consumer electronic device such as an electronic juke box or the like. Yet another example would be a dedicated electronic device, with or without software and/or firmware.


Application space 102 may contain various processes and, in this example, includes the user interface process 104, the media control process 106, the media processing process 108, or their equivalents, used to coordinate and control the overall operation of the media application 100 and its processes. Typically, to prepare the media content 110, the user interface process 104 may provide an interface 101 for interaction between the user and the application. The media control process 106 or its equivalents may provide the overall management and control of the internal operations of the media application 100. The media processing process 108 may perform the processing of the media content 110 making it possible to render the media content via the media output 130, or perform whatever other media processing it may have been designed to perform.


The processes described above may not be secure against unauthorized access to the media content 110. Processing the media content 110 via such a system may expose it to unauthorized access. Such an unprotected application may enable users and/or attackers, with varying degrees of effort, to access and make use of the media content 110 in an unauthorized manner. For example, unauthorized access may enable the unauthorized sharing, copying, modifying, and/or distributing of media content 110.


Exemplary Trusted Media System



FIG. 2 is a block diagram showing an example of a trusted media system 200 comprising an application space 202 and a distinct protected space 230. In this exemplary embodiment of a media player the system comprises a protected media pipeline 232 operating within a protected space 230 in addition to user interface 204 and media control 206 mechanisms operating in the application space 202.


The protected space 230 typically provides a protected environment for media content 110 processing, the protected space 230 resisting unauthorized access to the media content 110 during processing. Media content 110 is typically protected by various built-in security schemes, such as encryption and the like, that deliver it un-tampered-with to a user. However, once the media content 110 is decrypted or the like for processing, additional mechanisms to protect it from unauthorized access are required. A protected media pipeline 232 operating in a protected space 230 provides such a mechanism.


Application space 202 may contain various mechanisms including, but not limited to, a user interface mechanism 204 and a media control mechanism 206, or their equivalents, which are coupled to the protected media pipeline 232 operating within the protected space 230. Typically the user interface process 204 may provide an interface 201 or set of controls for interaction between the user and the system. The media control process 206 may provide the overall management and control of the internal operations of the trusted media system 200. The protected media pipeline 232 operating in the protected space 230 may perform the processing of the media content 110 and render the content via the media output 130, or perform whatever other media processing the media system 200 is designed to perform.


One or more protected spaces 230 may be provided as an extension of a computing environment (FIG. 8, 801) and typically possess a heightened level of security and access control. A protected space 230 may also include mechanisms to ensure that any mechanism operating inside it, such as a protected media pipeline 232, along with any media content being processed within the protected space 230, are used and accessed appropriately. In some embodiments the access and use privileges may be indicated by a media content license and/or a digital rights management system. Alternatively, mechanisms such as password protection, encryption and the like may provide access control.



FIG. 3 is a block diagram showing exemplary components comprising an end-to-end system for protecting media content 110 and other data from initial input 302 to final output 308 of a computing environment 800. Such a system tends to protect media 110 or other data from the point of entry into a computing environment 800 to its final output 130, in addition to providing protection during processing within a protected media pipeline 232 and/or other processing components. Such end-to-end protection may be provided via three major components: protected input 302, a protected space 230 for processing, and protected output 308.


Protected input 302 may be implemented in hardware and/or software and may limit unauthorized access to media content 110 and/or other data as it is initially received onto the system 800 from some source such as a storage device, network connection, physical memory device and the like. The protected input 302 may be coupled to a protected media pipeline 232 via a secure connection 304. The secure connection 304 allows transfer of the media content 110 between the protected input 302 and the protected media pipeline 232 and/or other processing components and may be implemented using mechanisms such that it is tamper resistant.


Protected output 308 may be implemented in hardware and/or software and may limit unauthorized access to media content 110 as it is transferred from a protected media pipeline 232 or other processing to the output of the computing environment 800, which may be speakers, video displays, storage media, network connections and the like. The protected output 308 may be coupled to a protected media pipeline 232 via a secure connection 306. The secure connection 306 allows transfer of the media content 110, which may be in a processed form, between the protected media pipeline 232 and the protected output 308 and may be implemented using mechanisms such that it is tamper resistant.
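

The following sketch suggests, under stated assumptions, how a secure connection such as 304 or 306 might keep content out of the clear while it crosses between components. The SecureConnection type is hypothetical, and the toy keystream cipher stands in for real authenticated encryption (e.g., AES-GCM); this is a sketch of the idea, not a secure implementation.

```cpp
// Sketch of a tamper-resistant transfer between two pipeline components.
#include <cstdint>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

using Buffer = std::vector<uint8_t>;

// Toy keystream cipher standing in for real authenticated encryption;
// applying it twice with the same seed restores the plaintext.
static Buffer ApplyKeystream(Buffer data, uint8_t seed) {
    uint8_t k = seed;
    for (auto& b : data) {
        b ^= k;
        k = static_cast<uint8_t>(k * 37 + 11);  // simple keystream step
    }
    return data;
}

// Models a secure connection (304/306): the sender encrypts before a
// buffer leaves its component and the receiver decrypts on arrival, so
// the content is never in the clear while crossing the boundary.
struct SecureConnection {
    uint8_t sharedKey;  // assumes a prior key exchange between components
    Buffer Send(Buffer plain) const { return ApplyKeystream(std::move(plain), sharedKey); }
    Buffer Receive(Buffer wire) const { return ApplyKeystream(std::move(wire), sharedKey); }
};

int main() {
    SecureConnection link{0x5A};
    Buffer content = {'m', 'e', 'd', 'i', 'a'};
    Buffer onTheWire = link.Send(content);       // ciphertext between components
    Buffer delivered = link.Receive(onTheWire);  // plaintext only inside receiver
    std::cout << std::string(delivered.begin(), delivered.end()) << '\n';  // media
}
```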


Tamper resistance as used here includes limiting unauthorized access, resisting attack and otherwise protecting media content and/or other data from being compromised.


A protected space may also be referred to as a protected environment. Protected spaces or environments and their creation and maintenance are described beginning with the description of FIG. 9 below.


Protected Media Pipeline



FIG. 4 is a block diagram showing exemplary components comprising a protected media pipeline 232 operating in a protected space 230 as part of a trusted media system 200. The components 400, 421, 422, 425, and 480 form a protected media pipeline 232 operating in a protected space 230. Of these components, the transform mechanisms 420 process the media content to prepare it for output. The protected space 230 may also contain other protected elements 410 of the trusted media system 200.


The protected media pipeline 232 typically performs the function of accessing and processing protected media content 110 and producing a protected output in the format determined by the trusted media system 200. Unprotected media content may also be processed in a protected media pipeline 232. Further, unprotected media pipelines may be constructed and operate in the application space 202 or other spaces. However, an unprotected media pipeline operating in the application space 202 would not benefit from a protected environment 230 which limits unauthorized access to the media content. For processing some types of media content, such as unprotected or unencrypted media content, an unprotected pipeline may be acceptable. In some embodiments there may be a plurality of media content having different security levels (some protected and some unprotected), processed through one or more pipelines each adapted to provide the desired level of protection.


In the protected media pipeline 232 a media source 400 may be coupled to a series of transform functions or mechanisms 420. A first transform function F(a)1 421 may be coupled to a second transform function F(b)2 422 which in turn may be coupled to any number of additional transform functions represented by F(z)n 425. The output of the set of transform functions 420 may be coupled to a media sink 480. There are typically one or more transform functions in a protected media pipeline 232, the specific function of each transform depending on the media content 110 and the processing that the trusted media system 200 is designed to perform.


The example shown illustrates transform mechanisms that may be connected in series forming a transform chain. In alternative embodiments of a protected media pipeline 232, two or more of the transform mechanisms may be coupled in parallel and/or two or more media pipelines may be coupled at some point in each pipeline's transform chain forming a single pipeline from that point forward. Further, each transform may have a single input or a plurality of inputs and they may have a single output or a plurality of outputs.


The media source 400 may access media content 110 via hardware and/or appropriate driver software or the like. For example, using a PC for processing music stored on a CD, the media source 400 couples to CD ROM driver software which controls the CD ROM drive hardware (FIG. 8, 804) to read audio data from a CD ROM disk (FIG. 8, 806). The media source 400 is a mechanism used in the construction of a media pipeline to access and receive the media content 110 and make it available to the remaining mechanisms of the media pipeline. Alternatively, a media source 400 may couple with a semiconductor memory in a consumer electronic device to access music stored on the device. Equivalent media sources may provide access to one or more types of media content, including video, digital recordings, and the like.


The media transforms 420, represented by F(a)1, F(b)2 and F(z)n (421, 422 and 425 respectively), perform specific operations on the media content provided by the media source 400 and may each perform different operations. There is typically at least one media transform in a media pipeline. The media transforms 421, 422 and 425 prepare and/or process the media content 110 for rendering via the media output 130 and/or for further processing. The specific transformations performed may include operations such as encryption and/or decryption of media content, image enhancement of video content, silence detection in audio content, decompression, compression, volume normalization, and the like. Transforms may process media content 110 automatically or be controlled by a user via virtual or physical handles provided through a user interface 204. The specific transforms provided in a pipeline depend on the media content 110 to be processed and the function the trusted media system 200 has constructed the pipeline to perform. In a simple media system or application, the processing may be as minimal as decoding audio media and controlling the volume of media accessed from a semiconductor memory and played on a headset. In a more complex media system or application, a wide variety of processing and media manipulation are possible.
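

As an illustration of one such transform, the sketch below performs volume normalization on 16-bit PCM samples. The function name and target peak value are assumptions for the example, not taken from this description.

```cpp
// Sketch of a volume-normalization transform for 16-bit PCM samples.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <vector>

// Scales the samples so the loudest one reaches the target peak level.
std::vector<int16_t> NormalizeVolume(std::vector<int16_t> samples,
                                     int targetPeak = 30000) {
    int peak = 0;
    for (int16_t s : samples) peak = std::max(peak, std::abs(static_cast<int>(s)));
    if (peak == 0) return samples;  // silence: nothing to scale
    const double gain = static_cast<double>(targetPeak) / peak;
    for (auto& s : samples) s = static_cast<int16_t>(std::lround(s * gain));
    return samples;
}

int main() {
    for (int16_t s : NormalizeVolume({1000, -2000, 1500}))
        std::cout << s << ' ';  // 15000 -30000 22500
    std::cout << '\n';
}
```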


In a trusted media system 200 designed to process encrypted media content one of the transform mechanisms, typically the first transform F(a)1 421, may be a codec which decodes the media content such that it may be further processed. In alternative examples, decryption and/or decompression operations may be performed by distinct mechanisms and one or both operations may be eliminated depending on the format of media content being processed.


When operating on a PC, the media sink 480 may couple the processed or transformed media content 110 to the media output 130 via the media I/O hardware (FIG. 8, 812) controlled by appropriate driver programs. For example, in the case of audio data, the media sink 480 may couple to an available sound driver program which couples audio data that has been transformed to audio output hardware such as an amplifier and/or speakers (FIG. 2, 132). When operating on a consumer electronic device, the media sink 480 may be coupled, for example, to an audio amplifier which in turn couples to speakers or a headset through a connector on the device's case.


By constructing a pipeline that performs the sourcing, transform and sinking functions within a protected space 230, unauthorized access to the media content 110 may be restricted in a manner that conforms to the wishes of the media content provider/owner. Thus, this approach tends to provide a secure processing environment such that a media content provider may trust that their media content 110 will not be compromised while being processed.


The output of the protected media pipeline 232 may be coupled to the input of a media output 130. Alternatively the output of a protected media pipeline 232 may couple to the input of another protected media pipeline or some other process. This coupling may be implemented such that it is tamper resistant and restricts unauthorized access to any data or media content flowing from one pipeline to another or to some other process. The remainder of the elements illustrated in FIG. 4 operate as previously described for FIG. 2.



FIG. 5 is a block diagram showing an alternate example of a protected media pipeline 552 having a proxied media source 510 as part of a trusted media system 500. The proxied media source 510 includes a media source portion 518 and a stub portion 520 that may operate in an unprotected application space 502, and a proxy portion 540 that may operate in a protected space 550. The proxied media source 510 may allow media content 110 to be transferred from the application space 502 via the media source 518 and the stub 520 to the protected space 550 via the proxy 540 by using remote procedure calls or the like.
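

A minimal sketch of the stub/proxy split follows, simulating the remote procedure call with an in-memory byte buffer. The names SourceStub and SourceProxy and the length-prefixed wire format are hypothetical; a real system would marshal calls across a process boundary.

```cpp
// Sketch of a proxied media source: stub in the application space,
// proxy in the protected space, "RPC" simulated with a byte buffer.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

using Buffer = std::vector<uint8_t>;

// --- application space -------------------------------------------------
// The media source portion a third party would implement.
struct MediaSource {
    Buffer Read() { return {'f', 'r', 'a', 'm', 'e'}; }
};

// The stub marshals Read() results into wire messages (here a simple
// length-prefixed byte vector; a real system would use an RPC channel).
struct SourceStub {
    MediaSource& source;
    Buffer MarshalRead() {
        Buffer payload = source.Read();
        Buffer msg;
        msg.push_back(static_cast<uint8_t>(payload.size()));  // length prefix
        msg.insert(msg.end(), payload.begin(), payload.end());
        return msg;
    }
};

// --- protected space ---------------------------------------------------
// The proxy unmarshals the message and feeds the pipeline; the pipeline
// itself never executes in the application space.
struct SourceProxy {
    Buffer Unmarshal(const Buffer& msg) {
        if (msg.empty()) return {};
        const size_t n = msg[0];
        return Buffer(msg.begin() + 1, msg.begin() + 1 + n);
    }
};

int main() {
    MediaSource source;
    SourceStub stub{source};
    SourceProxy proxy;
    Buffer wire = stub.MarshalRead();   // crosses the space boundary
    Buffer frame = proxy.Unmarshal(wire);
    std::cout << std::string(frame.begin(), frame.end()) << '\n';  // frame
}
```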


When used in a PC environment (FIG. 8, 800), the proxied media source 510 architecture described here may simplify the creation of the media source modules by third-party software makers or content providers. Such a simplification may be provided by splitting the proxied media source 510 such that media application writers may only need to implement the media source portion 518. The stub portion 520 and proxy portion 540 may be provided as an element of the protected environment 550.


Further, the use of a proxied media source 510 may support mixing protected and unprotected media content 110. Protected media content may be directed from a media source 518 to a first stub operating as part of a protected media pipeline, while unprotected media content may be directed from the media source 518, via a second stub portion operating within the unprotected application space 502 or some other unprotected space, to processing modules also operating within the unprotected application space 502 or another unprotected space.


Similar to the proxied media source 510, the media sink 480 may also be proxied and split into stub and proxy portions. The stub portion may operate in the protected space 550 and may encrypt data prior to forwarding it to the proxy portion operating in an application space 502 or some other space. The remainder of the elements in FIG. 5 operate as previously described for FIG. 4.



FIG. 6 is a block diagram showing a further alternative example of a trusted media system 600. In this embodiment the trusted media system 600 includes a protected media source 610 constructed to include a media source portion 618 and a stub portion 620 which operate in a protected media space 609, and a proxy portion 640 which operates in a protected space 650. The two protected regions 609 and 650 are coupled by the protected media source 610, with data being passed from the media source portion 618 via the stub portion 620 operating in the protected media space 609 to the proxy portion 640 operating in the protected space 650. The protected media source 610 may allow media content 110 to be transferred from the protected media space 609 to the protected space 650 using remote procedure calls or the like. The protected media source 610 architecture described here may simplify the creation of the media source by third parties or content providers and result in more stable and secure trusted media systems 600. The remaining elements of FIG. 6 operate as previously described for FIG. 4 and FIG. 5.



FIG. 7 is a block diagram showing a plurality of protected media pipelines 751-759. The protected media pipelines 751, 752, 759 operate in a protected space 700. Alternatively each protected media pipeline may operate in its own protected space or various numbers of pipelines may be grouped into one or more protected spaces in any combination. A trusted media system may provide several such protected media pipelines.


An example of such a system may be a trusted media system playing a DVD with its audio content in Dolby Digital 5.1 format. In this example there may be six different audio pipelines, one for each of the audio channels, in addition to a video pipeline for the video portion of the DVD. All of the protected media pipelines may operate in the same protected space as shown or, alternatively, the protected media pipelines may be grouped in groups of one or more with each group operating in its own distinct protected space.
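

A short sketch of such a plurality of pipelines, with one hypothetical pipeline object per stream:

```cpp
// Sketch: one pipeline per stream for Dolby Digital 5.1 audio plus video.
#include <iostream>
#include <string>
#include <vector>

struct Pipeline {
    std::string stream;
    void Run() const { std::cout << "processing " << stream << '\n'; }
};

int main() {
    const std::vector<std::string> channels = {
        "front-left", "front-right", "center",
        "surround-left", "surround-right", "LFE"};
    std::vector<Pipeline> pipelines;
    for (const auto& c : channels) pipelines.push_back({"audio/" + c});
    pipelines.push_back({"video"});            // seventh pipeline for video
    for (const auto& p : pipelines) p.Run();   // all share one protected space here
}
```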


In alternative embodiments of a protected media pipeline 232, two or more of the sources, transform mechanisms and/or sinks may be coupled in parallel and/or two or more media pipelines may be coupled at some point in each pipeline forming a single pipeline from that point forward. Alternatively a single pipeline may split into two pipelines. Further, sources, transforms and/or sinks may have a single input or a plurality of inputs and/or they may have a single output or a plurality of outputs. The remaining elements of FIG. 7 operate as previously described for FIG. 4.



FIG. 8 is a block diagram showing an exemplary computing environment 800 in which the software applications, systems and methods described in this application may be implemented. Exemplary personal computer 800 is only one example of a computing system or device that may process media content (FIG. 4, 110) and is not intended to limit the examples described in this application to this particular computing environment or device type.


The computing environment can be implemented with numerous other general purpose or special purpose computing system configurations. Examples of well known computing systems may include, but are not limited to, personal computers 800, hand-held or laptop devices, microprocessor-based systems, multiprocessor systems, set top boxes, programmable consumer electronics, gaming consoles, consumer electronic devices, cellular telephones, PDAs, and the like.


The PC 800 includes a general-purpose computing system in the form of a computing device 801. The components of computing device 801 may include one or more processors (including CPUs, GPUs, microprocessors and the like) 807, a system memory 809, and a system bus 808 that couples the various system components. Processor 807 processes various computer executable instructions to control the operation of computing device 801 and to communicate with other electronic and computing devices (not shown) via various communications connections such as a network connection 814 and the like. The system bus 808 represents any number of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.


The system memory 809 includes computer readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM). A basic input/output system (BIOS) may be stored in ROM. RAM typically contains data and/or program modules that are immediately accessible to and/or presently operated on by one or more of the processors 807. A trusted media system 200 may be contained in system memory 809.


Mass storage devices 804 and 810 may be coupled to the computing device 801 or incorporated into the computing device by coupling to the system bus. Such mass storage devices 804 and 810 may include a magnetic disk drive which reads from and/or writes to a removable, non-volatile magnetic disk (e.g., a “floppy disk”) 805, or an optical disk drive that reads from and/or writes to a removable, non-volatile optical disk such as a CD ROM, DVD ROM or the like 806. Computer readable media 805 and 806 typically embody computer readable instructions, data structures, program modules and the like supplied on floppy disks, CDs, DVDs, portable memory sticks and the like.


Any number of program modules may be stored on the hard disk 810, other mass storage devices 804, and system memory 809 (limited by available space), including by way of example, an operating system(s), one or more application programs, other program modules, and program data. Each of such operating system, application program, other program modules and program data (or some combination thereof) may include an embodiment of the systems and methods described herein. For example, a trusted media system 200 may be stored on mass storage devices 804 and 810 and/or in system memory 809.


A display device 134 may be coupled to the system bus 808 via an interface, such as a video adapter 811. A user can interface with computing device 800 via any number of different input devices 803 such as a keyboard, pointing device, joystick, game pad, serial port, and/or the like. These and other input devices may be coupled to the processors 807 via input/output interfaces 812 that may be coupled to the system bus 808, and may be coupled by other interface and bus structures, such as a parallel port, game port, and/or a universal serial bus (USB).


Computing device 800 may operate in a networked environment using communications connections to one or more remote computers and/or devices through one or more local area networks (LANs), wide area networks (WANs), the Internet, optical links and/or the like. The computing device 800 may be coupled to one or more networks via network adapter 813 or alternatively by a modem, DSL, ISDN interface and/or the like.


Communications connection 814 is an example of communications media. Communications media typically embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communications media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared, and other wireless media.


Those skilled in the art will realize that storage devices utilized to store computer-readable program instructions can be distributed across a network. For example a remote computer or device may store an example of the system described as software. A local or terminal computer or device may access the remote computer or device and download a part or all of the software to run the program. Alternatively the local computer may download pieces of the software as needed, or distributively process the software by executing some of the software instructions at the local terminal and some at remote computers or devices.


Those skilled in the art will also realize that, by utilizing conventional techniques, all or a portion of the software instructions may be carried out by a dedicated electronic circuit such as a digital signal processor (“DSP”), programmable logic array (“PLA”), or the like. The term electronic apparatus as used herein includes computing devices, consumer electronic devices including any software and/or firmware and the like, and electronic devices or circuits containing no software and/or firmware and the like.


The term computer readable medium may include system memory, hard disks, mass storage devices and their associated media, communications media, and the like.


Protected Environment



FIG. 9 is a block diagram showing a conventional media application 100 processing media content 110 operating in a conventional computing environment 900 with an indication of an attack 907 against the system 901. A conventional computing environment 900 may be provided by a personal computer (“PC”) or consumer electronics (“CE”) device 901 that may include an operating system (“OS”) 902. Typical operating systems often partition their operation into a user mode 903 and a kernel mode 904. User mode 903 and kernel mode 904 may be used by one or more application programs 100. An application program 100 may be used to process media content 110 that may be transferred to the device 901 via some mechanism, such as a CD ROM drive, Internet connection or the like. An example of content 110 would be media files that may be used to reproduce audio and video information.


The computing environment 900 may typically include an operating system (“OS”) 902 that facilitates operation of the application 100, in conjunction with the one or more central processing units (“CPU”). Many operating systems 902 may allow multiple users to have access to the operation of the CPU. Multiple users may have ranges of access privileges typically ranging from those of a typical user to those of an administrator. Administrators typically have a range of access privileges to applications 100 running on the system, the user mode 903 and the kernel 904. Such a computing environment 900 may be susceptible to various types of attacks 907. Attacks may include not only outsiders seeking to gain access to the device 901 and the content 110 on it, but also attackers having administrative rights to the device 901 or other types of users having whatever access rights granted them.



FIG. 10 is a block diagram showing a trusted application 200 processing media content 110 and utilizing a protected environment or protected space 230 that tends to be resistant to attack 1005. The term “trusted application”, as used here, may be defined as an application that utilizes processes operating in a protected environment such that they tend to be resistant to attack 1005 and limit unauthorized access to any media content 110 or other data being processed. Thus, components or elements of an application operating in a protected environment are typically considered “trusted” as they tend to limit unauthorized access and tend to be resistant to attack. Such an application 200 may be considered a trusted application itself or it may utilize another trusted application to protect a portion of its processes and/or data.


For example, a trusted media player 200 may be designed to play media content 110 that is typically licensed only for use such that the media content 110 cannot be accessed in an unauthorized manner. Such a trusted application 200 may not operate and/or process the media content 110 unless the computing environment 1000 can provide the required level of security, such as by providing a protected environment 230 resistant to attack 1005.


As used herein, the term “process” may be defined as an instance of a program (including executable code, machine instructions, variables, data, state information, etc.), residing and/or operating in a kernel space, user space and/or any other space of an operating system and/or computing environment.


A digital rights management system 1004 or the like may be utilized with the protected environment 230. The use of a digital rights management system 1004 is merely provided as an example and may not be utilized with a protected environment or a secure computing environment. Typically a digital rights management system utilizes tamper-resistant software (“TRS”) which tends to be expensive to produce and may negatively impact computing performance. Utilizing a trusted application 200 may minimize the amount of TRS functionality required to provide enhanced protection.


Various mechanisms known to those skilled in this technology area may be utilized in place of, in addition to, or in conjunction with a typical digital rights management system. These mechanisms may include, but are not limited to, encryption/decryption, key exchanges, passwords, licenses, and the like. Thus, digital rights management as used herein may be a mechanism as simple as decrypting an encrypted media, utilizing a password to access data, or other tamper-resistant mechanisms. The mechanisms to perform these tasks may be very simple and entirely contained within the trusted application 200 or may be accessed via interfaces that communicate with complex systems otherwise distinct from the trusted application 200.



FIG. 11 is a block diagram showing exemplary components of a trusted application 200 that may be included in the protected environment 230. A trusted application 200 will typically utilize a protected environment 230 for at least a portion of its subcomponents 232, 400, 480. Other components 1101 of the trusted application may not utilize a protected environment. Components 232, 400 and 480 involved in the processing of media content or data that may call for an enhanced level of protection from attack or unauthorized access may operate within a protected environment 230. A protected environment 230 may be utilized by a single trusted application 200 or, possibly, by a plurality of trusted applications. Alternatively, a trusted application 200 may utilize a plurality of protected environments. A trusted application 200 may also couple to and/or utilize a digital rights management system 1004.


In the example shown, source 400 and sink 480 are shown as part of a media pipeline 232 operating in the protected environment 230. A protected environment 230 tends to ensure that, once protected and/or encrypted content 1109 has been received and decrypted, the trusted application 200 and its components prevent unauthorized access to the content 1109.


Digital rights management 1004 may provide a further avenue of protection for the trusted application 200 and the content 1109 it processes. Through a system of licenses 1108, device certificates 1111, and other security mechanisms a content provider is typically able to have confidence that encrypted content 1109 has been delivered to the properly authorized device and that the content 1109 is used as intended.



FIG. 12 is a block diagram showing a system for downloading digital media content 1210 from a service provider 1207 to an exemplary trusted application 200 utilizing a protected environment 230. In the example shown the trusted application 200 is shown being employed in two places 1201, 1203. The trusted application 200 may be used in a CE device 1201 or a PC 1203. Digital media 1210 may be downloaded via a service provider 1207 and the Internet 1205 for use by the trusted application 200. Alternatively, digital media may be made available to the trusted application via other mechanisms such as a network, a CD or DVD disk, or other storage media. Further, the digital media 1210 may be provided in an encrypted form 1109 requiring a system of decryption keys, licenses, certificates and/or the like which may take the form of a digital rights management system 1004. The data or media content 1210 provided to the trusted application may or may not be protected, i.e., encrypted or the like.


In one example, a trusted application 200 may utilize a digital rights management (“DRM”) system 1004 or the like along with a protected environment 230. In this case, the trusted application 200 is typically designed to acknowledge, and adhere to, the content's usage policies by limiting usage of the content to that authorized by the content provider via the policies. Implementing this may involve executing code which typically interrogates content licenses and subsequently makes decisions about whether or not a requested action can be taken on a piece of content. This functionality may be provided, at least in part, by a digital rights management system 1004. An example of a digital rights management system is provided in U.S. patent application Ser. No. 09/290,363, filed Apr. 12, 1999, and U.S. patent application Ser. Nos. 10/185,527, 10/185,278, and 10/185,511, each filed on Jun. 28, 2002, each of which is hereby incorporated by reference in its entirety.
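

A minimal sketch of this kind of license interrogation follows, assuming a license is simply a set of permitted actions; the License type and the action names are illustrative, not taken from the cited applications.

```cpp
// Sketch of interrogating a content license before acting on content.
#include <iostream>
#include <string>
#include <unordered_set>

// Assume a license is simply the set of actions the provider authorized.
struct License {
    std::unordered_set<std::string> allowedActions;
};

// The trusted application refuses any action the policy does not permit.
bool IsActionAllowed(const License& license, const std::string& action) {
    return license.allowedActions.count(action) > 0;
}

int main() {
    License license{{"play", "seek"}};
    for (const std::string action : {"play", "copy"})
        std::cout << action << ": "
                  << (IsActionAllowed(license, action) ? "allowed" : "denied") << '\n';
}
```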


Building a trusted application 200 that may be utilized in the CE device 1201 or the PC 1203 may include making sure the trusted application 200 which decrypts and processes the content 1109 may be “secure” from malicious attacks. Thus, a protected environment 230 typically refers to an environment that may not be easy to attack.


As shown, the trusted applications 200 operate in a consumer electronics device 1201, which can be periodically synced to a PC 1203 that also provides a trusted application. The PC 1203 is in turn coupled 1204 to the Internet 1205. The Internet connection allows digital media 1210 to be provided by a service provider 1207. The service provider 1207 may transmit licenses and encrypted media 1206 over the Internet 1205 to the trusted application 200. Once encrypted media is delivered and decrypted, it may be susceptible to various forms of attack.


A protected computing environment tends to provide an environment that limits hackers from gaining access to unauthorized content. Hackers may include those acting as a systems administrator. A systems administrator typically has full control of virtually all of the processes being executed on a computer, but this access may not be desirable. For example, if a system user has been granted a license to use a media file, it should not be acceptable for a system administrator different from the user to be able to access the media file. A protected environment tends to contribute to the creation of a process in which code that decrypts and processes content can operate without giving hackers access to the decrypted content. A protected environment may also limit unauthorized access by users of privilege, such as administrators, and/or any other user who may otherwise gain unauthorized access to protected content. Protection may include securing typical user mode (FIG. 9, 903) processes and kernel mode (FIG. 9, 904) processes and any data they may be processing.


Processes operating in the kernel may be susceptible to attack. For example, in the kernel of a typical operating system objects are created, including processes, which may allow unlimited access by an administrator. Thus, an administrator, typically with full access privileges, may access virtually all processes.


Protected content may include policy or similar information indicating the authorized use of the content. Such policy may be enforced via a DRM system or other mechanism. Typically, access to the protected content is granted through the DRM system or other security mechanism, which may enforce policy. However, a system administrator, with full access to the system, may alter the state of the DRM system or mechanism to disregard the content policy.


A protected environment tends to provide a protected space that restricts unauthorized access to media content being processed therein, even for high-privilege users such as an administrator. When a protected environment is used in conjunction with a system of digital rights management or the like, a trusted application may be created in which a content provider may feel that adequate security is provided to protect digital media from unauthorized access, and may also protect the content's policy from being tampered with along with any other data, keys or protection mechanisms that may be associated with the media content.


Current operating system (“OS”) architectures typically present numerous possible attack vectors that could compromise a media application and any digital media content being processed. For purposes of this example, attacks that may occur in an OS are grouped into two types: kernel mode attacks and user mode attacks.


The first type of attack is the kernel mode attack. Kernel mode is typically considered to be the trusted base of the operating system. The core of the operating system and most system and peripheral drivers operate in kernel mode. Typically any piece of code running in the kernel is susceptible to intrusion by any other piece of code running in the kernel, which tends not to be the case for user mode. Also, code running in kernel mode typically has access to substantially all user mode processes. A CPU may also provide privilege levels for various code types. Kernel mode code is typically assigned the highest level of privilege by such a CPU, typically giving it full access to the system.


The second type of attack is the user mode attack. Code that runs in user mode may or may not be considered trusted code by the system depending on the level of privilege it has been assigned. This level of privilege may be determined by the user context or account in which it is operating. User mode code running in the context of an administrator account may have full access to the other code running on the system. In addition, code that runs in user mode may be partitioned to prevent one user from accessing another's processes.


These attacks may be further broken down into specific attack vectors. The protected environment is typically designed to protect against unauthorized access that may otherwise be obtained via one or more of these attack vectors. The protected environment may protect against attack vectors that may include: process creation, malicious user mode applications, loading malicious code into a process, malicious kernel code, invalid trust authorities, and external attack vectors.


Process creation is a possible attack vector. An operating system typically includes a “create process” mechanism that allows a parent process to create a child process. A malicious parent process may, by modifying the create process code or by altering the data it creates, make unauthorized modifications to the child process. This could result in compromising digital media that may be processed by a child process created by a malicious parent process.


Malicious user mode applications are a possible attack vector. An operating system typically includes administrator level privileges. Processes running with administrator privileges may have unlimited access to many operating system mechanisms and to nearly all processes running on the computer. Thus, in Windows for example, a malicious user mode application running with administrator privileges may gain access to many other processes running on the computer and may thus compromise digital media. Similarly, processes operating in the context of any user may be attacked by any malicious process operating in the same context.


Loading malicious code into a secure process is a possible attack vector. It may be possible to append or add malicious code to a process. Such a compromised process cannot be trusted and may obtain unauthorized access to any media content or other data being processed by the modified process.


Malicious kernel mode code is a possible attack vector. An operating system typically includes a “system level” of privilege. In Windows, for example, all code running in kernel mode is typically running as system and therefore may have maximum privileges. The usual result is that all drivers running in kernel mode have maximum opportunity to attack any user mode application, for example. Such an attack by malicious kernel mode code may compromise digital media.


Invalid trust authorities (TAs) are a possible attack vector. TAs may participate in the validation of media licenses and may subsequently “unlock” the content of a digital media. TAs may be specific to a media type or format and may be implemented by media providers or their partners. As such, TAs may be pluggable and/or may be provided as dynamic link libraries (“DLL”). A DLL or the like may be loaded by executable code, including malicious code. In order for a TA to ensure that the media is properly utilized it needs to be able to ensure that the process in which it is running is secure. Otherwise the digital media may be compromised.
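

The following sketch illustrates the idea of a TA refusing to unlock content unless its host process is verified; the ProcessInfo and UnlockContent names are hypothetical, and real verification of the host process would be far more involved.

```cpp
// Sketch: a trust authority unlocks content only for a secure host process.
#include <iostream>
#include <string>

struct ProcessInfo {
    bool isProtectedEnvironment;  // would be established by real verification
};

bool UnlockContent(const ProcessInfo& host, const std::string& contentId) {
    if (!host.isProtectedEnvironment) {
        std::cout << "refusing to unlock " << contentId << ": host not secure\n";
        return false;
    }
    std::cout << "unlocked " << contentId << '\n';
    return true;
}

int main() {
    UnlockContent({false}, "song-42");  // refused
    UnlockContent({true}, "song-42");   // unlocked
}
```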


External attacks are another possible attack vector. There is a set of attacks that does not require malicious code running in a system in order to attack it. For instance, attaching a debugger to a process or a kernel debugger to the machine, looking for sensitive data in a binary file on a disk, etc., are all possible mechanisms for finding and compromising digital media or the processes that can access digital media.



FIG. 13 is a block diagram showing exemplary attack vectors 1307-1310 that may be exploited by a user or mechanism attempting to access media content or other data 1300 typically present in a computing environment 900 in an unauthorized manner. A protected environment may protect against these attack vectors such that unauthorized access to trusted applications and the data they process is limited and resistance to attack is provided. Such attacks may be made by users of the system or mechanisms that may include executable code. The media application 100 is shown at the center of the diagram and the attack vectors 1307-1310 tend to focus on accessing sensitive data 1300 being stored and/or processed by the application 100.


A possible attack vector 1309 may be initiated via a malicious user mode application 1302. In the exemplary operating system architecture both the parent of a process, and any process with administrative privileges, typically have unlimited access to other processes, such as one processing media content, and the data they process. Such access to media content may be unauthorized. Thus a protected environment may ensure that a trusted application and the media content it processes are resistant to attacks by other user mode applications and/or processes.


A possible attack vector 1308 is the loading of malicious code 1303 into a process 1301. A secure process that is resistant to attacks from the outside is typically only as secure as the code loaded and running inside it. Given that DLLs and other code are typically loaded into processes for execution, a protected environment may provide a mechanism that ensures the code being loaded is trusted to run inside a process before loading it into the process.
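

One way such a mechanism might work is sketched below: compute a digest of a code image and load it only if the digest appears on a list of known-trusted modules. The FNV-1a digest here is a stand-in for a real signed cryptographic hash verified against a certificate.

```cpp
// Sketch: admit code into a protected process only if its digest is trusted.
#include <cstdint>
#include <iostream>
#include <unordered_set>
#include <vector>

// FNV-1a hash as a stand-in for a real signed cryptographic digest.
static uint64_t Digest(const std::vector<uint8_t>& image) {
    uint64_t h = 14695981039346656037ull;
    for (uint8_t b : image) { h ^= b; h *= 1099511628211ull; }
    return h;
}

// Only code whose digest appears on the trusted list may be loaded.
bool MayLoadIntoProtectedProcess(const std::vector<uint8_t>& image,
                                 const std::unordered_set<uint64_t>& trusted) {
    return trusted.count(Digest(image)) > 0;
}

int main() {
    const std::vector<uint8_t> codec = {0x10, 0x20, 0x30};
    const std::vector<uint8_t> unknown = {0xBA, 0xD0, 0x01};
    const std::unordered_set<uint64_t> trusted = {Digest(codec)};
    std::cout << MayLoadIntoProtectedProcess(codec, trusted) << '\n';    // 1
    std::cout << MayLoadIntoProtectedProcess(unknown, trusted) << '\n';  // 0
}
```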


A possible vector of attack 1310 is through malicious kernel mode code 1304. Code running in kernel mode 904 typically has maximum privileges. The result may be that drivers running in kernel mode have numerous opportunities to attack other applications. For instance, a driver may be able to access memory directly in another process; once running, it could read a process's memory that may contain decrypted media content (FIG. 11, 1109). Kernel mode attacks may be prevented by ensuring that the code running in the kernel is non-malicious, as provided by this example.


A possible attack vector 1307 consists of external attacks 1306 on the system 900. This group represents the set of attacks that typically do not require malicious code to be running on the system 900, for instance attaching a debugger to an application and/or a process on the system, or searching a machine 900 for sensitive data. A protected environment may be created to resist these types of attacks.



FIG. 14 is a flow diagram showing the process 1400 for creating and maintaining a protected environment that tends to limit unauthorized access to media content and other data. The sequence 1400 begins when a computer system is started 1402; the kernel of the operating system is then loaded and a kernel secure flag is set 1404 to an initial value. The process continues through the time that a protected environment is typically created and an application is typically loaded into it 1406. The process includes periodic checking 1408 by the protected environment, which seeks to ensure that the system remains secure for as long as the secure process is needed.


The term “kernel”, as used here, is defined as the central module of an operating system for a computing environment, system or device. The kernel module may be implemented in the form of computer-executable instructions and/or electronic logic circuits. Typically, the kernel is responsible for memory management, process and task management, and storage media management of a computing environment. The term “kernel component”, as used here, is defined to be a basic controlling mechanism, module, computer-executable instructions and/or electronic logic circuit that forms a portion of the kernel. For example, a kernel component may be a “loader”, which may be responsible for loading other kernel components in order to establish a fully operational kernel.


To summarize the process of creating and maintaining a protected environment:


1. Block 1402 represents the start-up of a computer system. This typically begins what is commonly known as the boot process and includes loading an operating system from disk or some other storage media.


2. Typically one of the first operations during the boot process is the loading of the kernel and its components. This example provides the validation of kernel components and, if all are successfully validated as secure, the setting of a flag indicating the kernel is secure. This is shown in block 1404.


3. After the computer system is considered fully operational, a user may start an application, such as a trusted media player, which may call for a protected environment. This example provides a secure kernel with an application operating in a protected environment, as shown in block 1406.


4. Once the protected environment has been created and one or more of the processes of the application have been loaded into it and are operating, the protected environment may periodically check the kernel secure flag to ensure the kernel remains secure, as shown in block 1408. That is, from the point in time that the trusted application begins operation, a check may be made periodically to determine whether any unauthorized kernel components have been loaded. Such unauthorized kernel components could attack the trusted application or the data it may be processing. Therefore, if any such components are loaded, the kernel secure flag may be set appropriately. This lifecycle is illustrated in the sketch following this list.
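The following minimal C sketch illustrates this lifecycle under the simplifying assumption that the kernel secure flag is a plain boolean; as noted with FIG. 15 below, it may instead be a more complex data structure or mechanism. All function names are illustrative and do not appear in the original description.

    #include <stdbool.h>

    static bool kernel_secure_flag;                     /* FIG. 15, 1590 */

    /* Illustrative stand-ins for the steps of process 1400. */
    static bool validate_all_kernel_components(void) { return true; }
    static bool component_is_authorized(void)        { return true; }

    /* Block 1404: boot-time validation sets the flag's initial value. */
    void kernel_boot(void)
    {
        kernel_secure_flag = validate_all_kernel_components();
    }

    /* Block 1408 (kernel side): loading an unauthorized component at any
     * later time clears the flag. */
    void on_kernel_component_loaded(void)
    {
        if (!component_is_authorized())
            kernel_secure_flag = false;
    }

    /* Queried periodically by the protected environment (FIG. 21). */
    bool is_kernel_secure(void)
    {
        return kernel_secure_flag;
    }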



FIG. 15 is a block diagram showing exemplary kernel components 1520-1530 and other components 1510-1514 utilized in creating an exemplary secure computing environment 1000. This figure shows a computer system containing several components 1510-1530 typically stored on a disk or the like, several of which are used to form the kernel of an operating system when a computer is started. Arrow 1404 indicates the process of loading the kernel components into memory forming the operational kernel of the system. The loaded kernel 1550 is shown containing its various components 1551-1562 and a kernel secure flag 1590 indicating whether or not the kernel is considered secure for a protected environment. The kernel secure flag 1590 being described as a “flag” is not meant to be limiting; it may be implemented as a boolean variable or as a more complex data structure or mechanism.


Kernel components 1520-1530 are typically “signed” and may include certificate data 1538 that may enable the kernel to validate that they are the components they claim to be, that they have not been modified, and/or that they are not malicious. A signature block and/or certificate data 1538 may be present in each kernel component 1520-1530 and/or each loaded kernel component 1560, 1562. The signature and/or certificate data 1538 may be unique to each component. The signature and/or certificate data 1538 may be used in the creation and maintenance of protected environments as indicated below. Typically a component is “signed” by its provider in such a way as to securely identify the source of the component and/or indicate whether it may have been tampered with. A signature may be implemented as a hash of the component's header or by using other techniques. A conventional certificate or certificate chain may also be included with a component that may be used to determine if the component can be trusted. The signature and/or certificate data 1538 are typically added to a component before it is distributed for public use. Those skilled in the art will be familiar with these technologies and their use.
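As a rough illustration only, the signing data 1538 might be modeled in C as follows. The structures and routines are assumptions made for the sketch: the hash and verification functions are stand-ins for conventional cryptographic operations, not a specific API.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define HASH_LEN 32

    /* Sketch of the per-component signature and certificate data 1538;
     * the certificate chain is treated as an opaque blob here. */
    typedef struct {
        uint8_t signed_hash[HASH_LEN];   /* hash of the component's header,
                                            signed by the provider */
        const uint8_t *cert_chain;       /* conventional certificate chain */
        size_t cert_chain_len;
    } signature_block;

    typedef struct {
        const uint8_t *image;            /* component as stored on disk */
        size_t image_len;
        signature_block sig;             /* added before distribution */
    } kernel_component;

    /* Stand-in: a real implementation would compute a cryptographic
     * hash of the component's header. */
    static void hash_header(const kernel_component *c, uint8_t out[HASH_LEN])
    {
        (void)c;
        memset(out, 0, HASH_LEN);
    }

    /* Stand-in: a real implementation would verify the signature with
     * the signer's public key and walk the chain to a trusted root. */
    static bool signature_valid(const signature_block *s,
                                const uint8_t actual[HASH_LEN])
    {
        return memcmp(s->signed_hash, actual, HASH_LEN) == 0;
    }

    /* A component is accepted only if it is what it claims to be and
     * has not been modified. */
    bool component_untampered(const kernel_component *c)
    {
        uint8_t actual[HASH_LEN];
        hash_header(c, actual);
        return signature_valid(&c->sig, actual);
    }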


When a typical computer system is started or “booted” the operating system's loading process or “kernel loader” 1551 will typically load the components of the kernel from disk or the like into a portion of system memory to form the kernel of the operating system. Once all of the kernel components are loaded and operational the computer and operating system are considered “booted” and ready for normal operation.


Kernel component #1 1520 through kernel component #n 1530, in the computing environment, may be stored on a disk or other storage media, along with a revocation list 1514, a kernel dump flag 1512, and a debugger 1510 with a debug credential 1511. Arrow 1404 indicates the kernel loading process, which reads the various components 1514-1530 from their storage location and loads them into system memory, forming a functional operating system kernel 1550. The kernel dump flag 1512 being described as a “flag” is not meant to be limiting; it may be implemented as a boolean variable or as a more complex data structure or mechanism.


The kernel loader 1551, along with the PE management portion of the kernel 1552, the revocation list 1554 and two of the kernel components 1520 and 1522, are shown loaded into the kernel, the latter as blocks 1560 and 1562, along with an indication of space for additional kernel components yet to be loaded into the kernel, 1564 and 1570. Finally, the kernel 1550 includes a kernel secure flag 1590 which may be used to indicate whether or not the kernel 1550 is currently considered secure. This illustration is provided as an example and is not intended to be limiting or complete. The kernel loader 1551, the PE management portion of the kernel 1552 and/or the other components of the kernel are shown as distinct kernel components for clarity of explanation but, in actual practice, may or may not be distinguishable from other portions of the kernel.


Included in the computing environment 1000 may be a revocation list 1514 that may be used in conjunction with the signature and certificate data 1538 associated with the kernel components 1560 and 1562. This object 1514 may retain a list of signatures, certificates and/or certificate chains that are no longer considered valid as of the creation date of the list 1514. The revocation list 1514 is shown loaded into the kernel as object 1554. Such lists are maintained because a validly-signed and certified component, for example components 1560 and 1562, may later be discovered to have some problem. The system may use such a list 1554 to check kernel components 1520-1530 as they are loaded; a component may be properly signed and/or have trusted certificate data 1538 and yet subsequently have been deemed untrustworthy. Such a revocation list 1554 will typically include version information 1555 so that it can more easily be identified, managed and updated as required.
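Such a list might be modeled roughly as sketched below in C; the structure and names are assumptions made for illustration, with the version information 1555 carried alongside the revoked identifiers.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define HASH_LEN 32

    /* Sketch of revocation list 1514/1554: entries identify signatures
     * or certificates that are no longer considered valid, and version
     * information 1555 identifies the list itself. */
    typedef struct {
        uint32_t version;                    /* version information 1555 */
        size_t count;
        const uint8_t (*revoked)[HASH_LEN];  /* revoked signature/cert hashes */
    } revocation_list;

    /* A validly signed component may later be deemed untrustworthy; the
     * kernel may check each component against the list as it is loaded. */
    bool is_revoked(const revocation_list *rl, const uint8_t id[HASH_LEN])
    {
        for (size_t i = 0; i < rl->count; i++)
            if (memcmp(rl->revoked[i], id, HASH_LEN) == 0)
                return true;
        return false;
    }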


Another component of the system that may impact kernel security is a debugger 1510. Debuggers may not typically be considered a part of the kernel but may be present in a computing environment 1000. Debuggers, including those known as kernel debuggers, system analyzers, and the like, may have broad access to the system and the processes running on it, along with any data present. A debugger 1510 may be able to access any data in a computing environment 1000, including media content that should not be accessed other than as authorized. On the other hand, debugging is typically a part of developing new functionality, and it should be possible to debug, within protected environments, the code intended to process protected media content. A debugger 1510 may thus include debug credentials 1511 which may indicate that the presence of the debugger 1510 on a system is authorized. Thus detection of the presence of a debugger 1510, along with any accompanying credentials 1511, may be a part of the creation and maintenance of protected environments (FIG. 14, 1400).


The computing environment 1000 may include a kernel dump flag 1512. This flag 1512 may be used to indicate how much of kernel memory is available for inspection in case of a catastrophic system failure. Such kernel dumps may be used for postmortem debugging after such a failure. If the flag 1512 indicates that system memory is available for inspection upon a dump, then the kernel 1550 may be considered insecure, as a hacker could run an application which exposes protected media in system memory and then force a catastrophic failure condition, resulting in that system memory, including the exposed media content, becoming available for inspection. Thus a kernel dump flag 1512 may be used in the creation and maintenance of protected environments (FIG. 14, 1400).



FIG. 16 and FIG. 17 are flow diagrams showing an exemplary process 1404 for loading kernel components to create an exemplary secure computing environment. This process 1404 begins after the kernel loader has been started and the PE management portion of the kernel has been loaded and made operational. Not shown in these figures, the PE management portion of the kernel may validate the kernel loader itself and/or any other kernel elements that may have been previously loaded. Validation, as used here, means determining whether a given component is considered secure and trustworthy, as illustrated in part 2 of this process 1404. A consolidated sketch of these checks follows the numbered steps below.


The term “authorized for secure use” and the like, as used below with respect to kernel components, has the following specific meaning: a kernel containing any components that are not authorized for secure use does not provide a secure computing environment within which protected environments may operate. The converse does not necessarily hold, as it depends on other factors such as attack vectors.


1. Block 1601 shows the start of the loading process 1404 after the PE management portion of the kernel has been loaded and made operational. Any component loaded in the kernel prior to this may be validated as described above.


2. Block 1602 shows the kernel secure flag initially set to TRUE unless any component loaded prior to the PE management portion of the kernel, or that component itself, is found to be insecure at which point the kernel secure flag may be set to FALSE. In practice the indication of TRUE or FALSE may take various forms; the use of TRUE or FALSE here is only an example and is not meant to be limiting.


3. Block 1604 indicates a check for the presence of a debugger in the computing environment. Alternatively a debugger could reside remotely and be attached to the computing environment via a network or other communications media to a process in the computing environment. If no debugger is detected the loading process 1404 continues at block 1610. Otherwise it continues at block 1609. Not shown in the diagram, this check may be performed periodically and the state of the kernel secure flag updated accordingly.


4. If a debugger is detected, block 1606 shows a check for debug credentials which may indicate that debugging is authorized on the system in the presence of a protected environment. If such credentials are not present, the kernel secure flag may be set to FALSE as shown in block 1608. Otherwise the loading process 1404 continues at block 1610.


5. Block 1610 shows a check of the kernel dump flag. If this flag indicates that a full kernel memory dump or the like is possible then the kernel secure flag may be set to FALSE as shown in block 1608. Otherwise the loading process 1404 continues at block 1612. Not shown in the diagram, this check may be performed periodically and the state of the kernel secure flag updated accordingly.


6. Block 1612 shows the loading of the revocation list into the kernel. In cases where the revocation list may be used to check debug credentials, or other previously loaded credentials, signatures, certificate data, or the like, this step may take place earlier in the sequence (prior to the loading of the credentials and the like to be checked) than shown. Not shown in the diagram is that, once this component is loaded, any and all previously loaded kernel components may be checked to see if their signature and/or certificate data has been revoked per the revocation list. If any have been revoked, the kernel secure flag may be set to FALSE and the loading process 1404 continues at block 1614. Note that a revocation list may or may not be loaded into the kernel to be used in the creation and maintenance of protected environments.


7. Block 1614 shows the transition to part 2 of this diagram shown in FIG. 17 and continuing at block 1701.


8. Block 1702 shows a check for any additional kernel components to be loaded. If all components have been loaded then the load process 1404 is usually complete and the kernel secure flag remains in whatever state it was last set to, either TRUE or FALSE. If there are additional kernel components to be loaded the load process 1404 continues at block 1706.


9. Block 1706 shows a check for a valid signature of the next component to be loaded. If the signature is invalid then the kernel secure flag may be set to FALSE as shown in block 1718. Otherwise the loading process 1404 continues at block 1708. If no component signature is available the component may be considered insecure and the kernel secure flag may be set to FALSE as shown in block 1718. Signature validity may be determined by checking for a match on a list of valid signatures and/or by checking whether the signer's identity is a trusted identity. As familiar to those skilled in the security technology area, other methods could also be used to validate component signatures.


10. Block 1708 shows a check of the component's certificate data. If the certificate data is invalid then the kernel secure flag may be set to FALSE as shown in block 1718. Otherwise the loading process 1404 continues at block 1710. If no component certificate data is available the component may be considered insecure and the kernel secure flag may be set to FALSE as shown in block 1718. Certificate data validity may be determined by checking the component's certificate data to see if the component is authorized for secure use. As familiar to those skilled in the art, other methods could also be used to validate component certificate data.


11. Block 1710 shows a check of the component's signature against a revocation list. If the signature is present on the list, indicating that it has been revoked, then the kernel secure flag may be set to FALSE as shown in block 1718. Otherwise the loading process 1404 continues at block 1712.


12. Block 1712 shows a check of the component's certificate data against a revocation list. If the certificate data is present on the list, indicating that it has been revoked, then the kernel secure flag may be set to FALSE as shown in block 1718. Otherwise the loading process 1404 continues at block 1714.


13. Block 1714 shows a check of the component's signature to determine if it is acceptable for use. This check may be made by inspecting the component's leaf certificate data to see if the component is authorized for secure use. Certain attributes in the certificate data may indicate if the component is approved for protected environment usage. If not, the component may not be appropriately signed and the kernel secure flag may be set to FALSE as shown in block 1718. Otherwise the loading process 1404 continues at block 1716.


14. Block 1716 shows a check of the component's root certificate data. This check may be made by inspecting the component's root certificate data to see if it is listed on a list of trusted root certificates. If not the component may be considered insecure and the kernel secure flag may be set to FALSE as shown in block 1718. Otherwise the loading process 1404 continues at block 1720.


15. Block 1720 shows the loading of the component into the kernel where it is now considered operational. Then the loading process 1404 returns to block 1702 to check for any further components to be loaded.
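The following C sketch consolidates the checks described above into a single routine. It assumes that a failed check merely clears the kernel secure flag while loading continues, consistent with the system still booting but no longer being considered secure; every helper name is an illustrative stand-in for the corresponding block, not an actual API.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct kernel_component kernel_component;   /* as sketched earlier */
    typedef struct revocation_list revocation_list;

    /* Illustrative stand-ins for the individual checks of FIG. 16/17. */
    static bool debugger_present(void)        { return false; }  /* block 1604 */
    static bool debug_credentials_valid(void) { return false; }  /* block 1606 */
    static bool kernel_dump_possible(void)    { return false; }  /* block 1610 */
    static bool signature_valid(const kernel_component *c)        /* block 1706 */
    { (void)c; return true; }
    static bool certificate_valid(const kernel_component *c)      /* block 1708 */
    { (void)c; return true; }
    static bool signature_revoked(const kernel_component *c,      /* block 1710 */
                                  const revocation_list *rl)
    { (void)c; (void)rl; return false; }
    static bool certificate_revoked(const kernel_component *c,    /* block 1712 */
                                    const revocation_list *rl)
    { (void)c; (void)rl; return false; }
    static bool leaf_cert_allows_secure_use(const kernel_component *c) /* 1714 */
    { (void)c; return true; }
    static bool root_cert_trusted(const kernel_component *c)      /* block 1716 */
    { (void)c; return true; }
    static void load_into_kernel(const kernel_component *c)       /* block 1720 */
    { (void)c; }

    /* Sketch of loading process 1404, parts 1 and 2. */
    bool load_kernel_components(const kernel_component *comps, size_t n,
                                const revocation_list *rl)
    {
        bool secure = true;                               /* block 1602 */

        if (debugger_present() && !debug_credentials_valid())
            secure = false;                               /* blocks 1604-1608 */
        if (kernel_dump_possible())
            secure = false;                               /* block 1610 */

        /* Revocation list loaded here (block 1612); elided in this sketch. */

        for (size_t i = 0; i < n; i++) {                  /* blocks 1702-1720 */
            const kernel_component *c = &comps[i];
            if (!signature_valid(c) ||
                !certificate_valid(c) ||
                signature_revoked(c, rl) ||
                certificate_revoked(c, rl) ||
                !leaf_cert_allows_secure_use(c) ||
                !root_cert_trusted(c))
                secure = false;                           /* block 1718 */
            load_into_kernel(c);
        }
        return secure;    /* final state of the kernel secure flag */
    }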



FIG. 18 is a block diagram showing a secure computing environment 1000 loading an application 100 into an exemplary protected environment 230 to form a trusted application that may be resistant to attack. In this example the kernel may be the same as that described in FIG. 15; it has already been loaded, and the system 1000 is considered fully operational. At this point, as an example, a user starts media application 100. The media application 100 may call for the creation of a protected environment 230 for one or more of its processes and/or components to operate within. The protected environment creation process 1406 creates the protected environment 230 and loads the application 100 and/or its components as described below.



FIG. 19 is a flow diagram showing an exemplary process 1406 for creating a protected environment and loading an application into it. This process 1406 includes the initial step of creating a secure process, followed by validating the software component to be loaded into it, and then loading the software component into the new secure process and making it operational. Upon success, the result may be a software component operating in a protected environment supported by a secure kernel. Such a software component, along with any digital media content or other data it processes, may be protected from various attacks, including those described above. A minimal sketch of this sequence follows the numbered steps below.


1. Block 1901 shows the start of the protected environment creation process 1406. This point is usually reached when some application or code calls for a protected environment to operate.


2. Block 1902 shows the establishment of a protected environment. While not shown in the diagram, this may be accomplished by requesting the operating system to create a new secure process. Code later loaded and operating in this secure process may be considered to be operating in a protected environment. If the kernel secure flag is set to FALSE then the “create new secure process” request may fail. This may be because the system as a whole is considered insecure and unsuitable for a protected environment and any application or data requiring a protected environment. Alternatively, the “create new secure process” request may succeed and the component loaded into the new process may be informed that the system is considered insecure so that it can modify its operations accordingly. Otherwise the process 1406 continues at block 1906.


3. Block 1906 shows a check for a valid signature of the software component to be loaded into the new secure process or protected environment. If the signature is invalid then the process 1406 may fail as shown in block 1918. Otherwise the process 1406 continues at block 1908. Not shown in the process is that the program, or its equivalent, creating the new secure process may also be checked for a valid signature and the like. Thus, for either the component itself and/or the program creating the new secure process, if no signature is available the component may be considered insecure and the process 1406 may fail as shown in block 1918. Signature validity may be determined by checking for a match on a list of valid signatures and/or by checking whether the signer's identity is a trusted identity. As familiar to those skilled in the security technology area, other methods could also be used to validate component signatures.


4. Block 1908 shows a check of the software component's certificate data. If the certificate data is invalid then the process 1406 may fail as shown in block 1918. Otherwise the process 1406 continues at block 1910. If no component certificate data is available the component may be considered insecure and the process 1406 may fail as shown in block 1918. Certificate data validity may be determined by checking the component's certificate data to see if the component is authorized for secure use. As familiar to those skilled in the art, other methods could also be used to validate component certificate data.


5. Block 1910 shows a check of the component's signature against a revocation list. If the signature is present on the list, indicating that it has been revoked, then the process 1406 may fail as shown in block 1918. Otherwise the process 1406 continues at block 1912.


6. Block 1912 shows a check of the component's certificate data against the revocation list. If the certificate data is present on the list, indicating that it has been revoked, then the process 1406 may fail as shown in block 1918. Otherwise the process 1406 continues at block 1914.


7. Block 1914 shows a check of the component's signature to determine if it is acceptable for use. This check may be made by inspecting the component's leaf certificate data to see if the component is authorized for secure use. Certain attributes in the certificate data may indicate if the component is approved for protected environment usage. If not, the component may not be appropriately signed and the process 1406 may fail as shown in block 1918. Otherwise the process 1406 continues at block 1916.


8. Block 1916 shows a check of the component's root certificate data. This check may be made by inspecting the component's root certificate data to see if it is listed on a list of trusted root certificates. If not the component may be considered insecure and the process 1406 may fail as shown in block 1918. Otherwise the process 1406 continues at block 1920.


9. Block 1918 shows the failure of the software component to load followed by block 1930, the end of the protected environment creation process 1406.


10. Block 1920 shows the software component being loaded into the protected environment, where it is considered operational, followed by block 1930, the end of the protected environment creation process 1406.
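The following minimal C sketch illustrates this sequence. It takes the simpler of the two behaviors described for block 1902 (the request fails outright when the kernel secure flag is FALSE) and collapses blocks 1906-1916 into one stand-in check; all names are assumptions made for illustration.

    #include <stdbool.h>

    typedef struct software_component software_component;

    static bool kernel_secure_flag = true;  /* assumed set at boot (FIG. 14, 1404) */

    /* Stand-in for blocks 1906-1916: signature, certificate, revocation
     * and trust checks identical in structure to those of FIG. 17. */
    static bool component_validates(const software_component *c)
    {
        (void)c;
        return true;
    }

    typedef enum { PE_CREATED, PE_FAILED } pe_status;

    /* Sketch of process 1406: create a secure process, validate the
     * component, then load it. */
    pe_status create_protected_environment(const software_component *c)
    {
        if (!kernel_secure_flag)
            return PE_FAILED;    /* "create new secure process" may fail;
                                    alternatively it could succeed with the
                                    component told the system is insecure */
        if (!component_validates(c))
            return PE_FAILED;                         /* block 1918 */
        /* ... load the component into the new secure process ... */
        return PE_CREATED;                            /* block 1920 */
    }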



FIG. 20 is a block diagram showing an exemplary trusted application utilizing an exemplary protected environment 230 periodically checking 1408 the security state 1590 of the secure computing environment 1000. In this example, the computing environment 1000 and the kernel 1550 may be the same as those described in FIG. 15 and FIG. 16. The kernel 1550 has already been loaded and the computer 1000 is considered fully operational. Further, a protected environment has been created and the appropriate components of the trusted application have been loaded into it and made operational, establishing a trusted application utilizing a protected environment 230, hereafter referred to simply as the “protected environment”.


The protected environment 230 may periodically check with the PE management portion of the kernel 1552 to determine whether the kernel 1550 remains secure over time. This periodic check may be performed because it is possible for a new component to be loaded into the kernel 1550 at any time, including a component that may be considered insecure. If this were to occur, the state of the kernel secure flag 1590 may change to FALSE and the code operating in the protected environment 230 has the opportunity to respond appropriately.


For example, consider a media player application that was started on a PC 1000 with a secure kernel 1550, with a portion of the media player application operating in a protected environment 230 and processing digital media content that is licensed only for secure use. In this example, if a new kernel component that is considered insecure is loaded while the media player application is processing the media content, then the check kernel secure state process 1040 would note that the kernel secure flag 1590 has changed to FALSE, indicating that the kernel 1550 may no longer be secure.


Alternatively, the revocation list 1554 may be updated such that a kernel component previously considered secure is no longer considered secure, resulting in the kernel secure flag 1590 being set to FALSE. At this point the application may receive notification that the system 1000 is no longer considered secure and can terminate operation, or take other appropriate action to protect itself and/or the media content it is processing.



FIG. 21 is a flow diagram showing an exemplary process 1408 for periodically checking the security state of the secure computing environment. This process 1408 may be used by a protected environment 230 to determine if the kernel remains secure over time. The protected environment 230 may periodically use this process 1408 to check the current security status of the kernel. The protected environment 230 and/or the software component operating within it may use the current security status information to modify its operation appropriately. Periodic activation of the process may be implemented using conventional techniques.
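As one possible realization, the periodic activation might be a simple polling loop inside the protected environment, sketched below in C. The loop and its helpers are assumptions for illustration; only the IsKernelSecure(MinRLVer) call is taken from the sequence described next, and a kernel-side sketch of that call follows the sequence.

    #include <stdbool.h>
    #include <stdint.h>

    /* Provided by the PE management portion of the kernel; a kernel-side
     * sketch follows the communications sequence below. */
    bool IsKernelSecure(uint32_t MinRLVer);

    /* Illustrative reactions: e.g. terminate, or otherwise protect the
     * application and the media content it is processing. */
    static void protect_and_stop(void) { }
    static void wait_interval(void)    { }   /* conventional timer, elided */

    /* Sketch of periodic checking 1408 from inside the protected
     * environment 230. */
    void monitor_kernel_security(uint32_t min_rl_version)
    {
        for (;;) {
            if (!IsKernelSecure(min_rl_version)) {
                protect_and_stop();
                return;
            }
            wait_interval();
        }
    }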


The diagram in FIG. 21 shows a sequence of communications 1408, illustrated with exemplary pseudo code, between the protected environment 230 and the PE management portion of the kernel 1552. This communication may include a check of the version of the revocation list, which gives an application the ability to require a revocation list of at least a certain version. This communications sequence may be cryptographically secured using conventional techniques. A sketch of the kernel side of this call follows the sequence.


1. The protected environment 230 makes an IsKernelSecure(MinRLVer) call 2120 to the PE management portion of the kernel to query the current security state of the kernel. Included in this call 2120 may be the minimum version (MinRLVer) of the revocation list expected to be utilized.


2. The PE management portion of the kernel checks to see if the protected environment, which is the calling process, is secure. If not, then it may provide a Return(SecureFlag=FALSE) indication 2122 to the protected environment and the communications sequence 1408 is complete. This security check may be done by the PE management portion of the kernel checking the protected environment for a valid signature and/or certificate data as described above.


3. Otherwise, the PE management portion of the kernel checks the kernel secure flag in response to the call 2120. If the state of the flag is FALSE then it may provide a Return(SecureFlag=FALSE) indication 2124 to the protected environment and the communications sequence 1408 is complete.


4. Otherwise, the PE management portion of the kernel checks the revocation list version information for the revocation list. If the revocation list has version information that is older than that requested in the IsKernelSecure(MinRLVer) call 2120 then several options are possible. First, as indicated in the diagram, the PE management portion of the kernel may provide a Return(SecureFlag=FALSE) indication 2126 to the protected environment and the communications sequence 1408 is complete.


Alternatively, and not shown in the diagram, a revocation list of an appropriate version may be located and utilized, all kernel components may be re-validated using this new or updated list, the kernel secure flag updated as appropriate, and the previous step #3 of this communications sequence 1408 repeated.


5. Otherwise, the PE management portion of the kernel may provide a Return(SecureFlag=TRUE) indication 2128 to the protected environment and the communications sequence 1408 is complete.
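The kernel side of this sequence might be sketched in C as follows, with the three FALSE paths 2122, 2124 and 2126 and the TRUE path 2128 mapped to early returns. The caller-validation helper is an illustrative stand-in for the signature and certificate checks described above.

    #include <stdbool.h>
    #include <stdint.h>

    static bool kernel_secure_flag;              /* FIG. 15, 1590 */
    static uint32_t revocation_list_version;     /* FIG. 15, 1555 */

    /* Stand-in for the PE management portion of the kernel validating
     * the calling process's signature and/or certificate data. */
    static bool calling_process_is_secure(void) { return true; }

    /* Sketch of the kernel side of communications sequence 1408. */
    bool IsKernelSecure(uint32_t MinRLVer)
    {
        if (!calling_process_is_secure())
            return false;                        /* Return 2122 */
        if (!kernel_secure_flag)
            return false;                        /* Return 2124 */
        if (revocation_list_version < MinRLVer)
            return false;                        /* Return 2126; alternatively
                                                    an updated list could be
                                                    located and all components
                                                    re-validated */
        return true;                             /* Return 2128 */
    }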



FIG. 22 is a block diagram showing an exemplary computing environment 800 including a representation of a protected environment 230, a trusted media system 200, and other related elements. Exemplary personal computer 800 is similar to that shown in FIG. 8 with the addition of kernel components 1520-1530 that may be stored on the disk 810 along with the other operating system code and the like. Media application 100 and/or a digital rights management system 1004 may be stored on the disk 810 along with other application programs. These components 1520-1530 and applications 100, 1004 may be loaded into system memory 809 and considered operational. Shown loaded in system memory 809 is a trusted application 200 utilizing a protected environment 230 and media content 110.

Claims
  • 1. A system comprising a computing device and at least one software module that are together configured for processing media content, the system comprising: a media source having an input and an output, the media source configured for operating in a protected space provided within the computing device, the input of the media source coupled to a first secure connection over which the media content is received via the media source into the protected space; a plurality of transform mechanisms having an input and an output and configured for operating in the protected space provided within the computing device, the input of the plurality of transform mechanisms coupled to the output of the media source, where the plurality of transform mechanisms are configured for processing the media content; a media sink having an input and an output, the media sink configured for operating in the protected space provided within the computing device, the input of the media sink coupled to the output of the plurality of transform mechanisms, the output of the media sink coupled to a second secure connection over which the processed media content is transferred via the media sink out of the protected space, where the media source, the plurality of transform mechanisms, and the media sink are separate from each other and together form a protected media pipeline that includes an output and an input and that is configured for processing the media content within the protected space of the computing device.
  • 2. The system of claim 1, where one of the plurality of transform mechanisms is a decoder.
  • 3. The system of claim 1 further comprising a plurality of protected media pipelines.
  • 4. The system of claim 1, where two of the plurality of transform mechanisms are coupled in series.
  • 5. The system of claim 1, where two of the plurality of transform mechanisms are coupled in parallel.
  • 6. The system of claim 1, where the protected media pipeline processes digitized audio.
  • 7. The system of claim 1, where the protected media pipeline processes digitized video.
  • 8. The system of claim 1, where the protected media pipeline is configured for resisting unauthorized access to the media content.
  • 9. The system of claim 1 where the media source is configured for accessing the media content via hardware or via software.
  • 10. A system comprising a computing device and at least one software module that are together configured for processing media content, the system comprising: a stub portion of a protected media source, where the stub portion includes an input and an output and is configured for operating in a first space provided within the computing device, the input of the stub portion of the protected media source coupled to media content; and a proxy portion of the protected media source, where the proxy portion includes an input and an output and is configured for operating in a protected space provided within the computing device, the input of the proxy portion of the protected media source coupled to the output of the stub portion of the protected media source, the stub portion further configured for transferring at least a portion of the media content via remote procedure call to the proxy portion; a plurality of transform mechanisms having an input and an output and configured for operating in the protected space provided within the computing device, the input of the plurality of transform mechanisms coupled to the output of the proxy portion of the protected media source, where the plurality of transform mechanisms are configured for processing the media content; a media sink having an input and an output, the media sink configured for operating in the protected space provided within the computing device, the input of the media sink coupled to the output of the plurality of transform mechanisms, the output of the media sink coupled to a second secure connection over which the processed media content is transferred via the media sink out of the protected space, where the media source, the plurality of transform mechanisms, and the media sink are separate from each other and together form a protected media pipeline that includes an output and an input and that is configured for processing the media content within the protected space of the computing device.
  • 11. The system of claim 10, where the first space is configured as an unprotected application space comprising unprotected elements of the system.
  • 12. The system of claim 10, where the first space is configured as a protected media space distinct from the protected space and distinct from an unprotected application space comprising unprotected elements of the system.
  • 13. The system of claim 10, where the protected media source is configured for resisting unauthorized access to the media content transferred between the stub portion of the media source and the proxy portion of the media source.
  • 14. A system comprising a computing device and at least one software module that are together configured for processing media content, the system comprising: a media control mechanism configured for operating in an application space within the computing device, and for controlling operations of the system; a protected media pipeline configured for operating in a protected space within the computing device, the protected space distinct from the application space, the protected media pipeline coupled to the media control mechanism, the protected media pipeline including a media source, a media sink, and a plurality of transform mechanisms, an input of the media source coupled to a first secure connection over which the media content is received via the media source into the protected space, an output of the media source coupled to an input of the plurality of transform mechanisms, the protected media pipeline configured for accessing the media content via the media source, decrypting the media content, processing the decrypted media content, and outputting the processed media content via the media sink, an output of the media sink coupled to a second secure connection over which the processed media content is transferred via the media sink out of the protected space, where the media source, the plurality of transform mechanisms, and the media sink are separate from each other.
  • 15. The system of claim 14, where the protected media pipeline is configured for resisting unauthorized access to the media content.
  • 16. The system of claim 14 further comprising a digital rights management system communicating with the protected media pipeline.
  • 17. The system of claim 14, where the media content is encrypted.
  • 18. The system of claim 1, where the output of the protected media pipeline is coupled to the input of another protected media pipeline.
  • 19. The system of claim 14 where the media source is configured for accessing the media content via hardware or via software.
US Referenced Citations (860)
Number Name Date Kind
3718906 Lightner Feb 1973 A
4183085 Roberts Jan 1980 A
4323921 Guillou Apr 1982 A
4405829 Rivest Sep 1983 A
4528643 Freeny Jul 1985 A
4529870 Chaum Jul 1985 A
4558176 Arnold et al. Dec 1985 A
4620150 Germer et al. Oct 1986 A
4658093 Hellman Apr 1987 A
4683553 Mollier Jul 1987 A
4750034 Lem Jun 1988 A
4817094 Lebizay et al. Mar 1989 A
4827508 Shear May 1989 A
4855730 Venners et al. Aug 1989 A
4855922 Huddleston et al. Aug 1989 A
4857999 Welsh Aug 1989 A
4910692 Outram Mar 1990 A
4916738 Chandra Apr 1990 A
4926479 Goldwasser May 1990 A
4953209 Ryder Aug 1990 A
4959774 Davis Sep 1990 A
4967273 Greenberg Oct 1990 A
4977594 Shear Dec 1990 A
5001752 Fischer Mar 1991 A
5012514 Renton Apr 1991 A
5047928 Wiedemer Sep 1991 A
5050213 Shear Sep 1991 A
5103392 Mori Apr 1992 A
5103476 Waite Apr 1992 A
5109413 Comerford Apr 1992 A
5117457 Comerford May 1992 A
5193573 Chronister Mar 1993 A
5204897 Wyman Apr 1993 A
5222134 Waite Jun 1993 A
5249184 Woest et al. Sep 1993 A
5261002 Perlman Nov 1993 A
5269019 Peterson et al. Dec 1993 A
5274368 Breeden et al. Dec 1993 A
5295266 Hinsley Mar 1994 A
5301268 Takeda Apr 1994 A
5303370 Brosh Apr 1994 A
5319705 Halter Jun 1994 A
5355161 Bird et al. Oct 1994 A
5369262 Dvorkis et al. Nov 1994 A
5373561 Haber Dec 1994 A
5406630 Piosenka et al. Apr 1995 A
5410598 Shear Apr 1995 A
5414861 Horning May 1995 A
5437040 Campbell Jul 1995 A
5442704 Holtey Aug 1995 A
5444780 Hartman Aug 1995 A
5448045 Clark Sep 1995 A
5457699 Bode Oct 1995 A
5459867 Adams et al. Oct 1995 A
5469506 Berson Nov 1995 A
5473692 Davis Dec 1995 A
5490216 Richardson, III Feb 1996 A
5500897 Hartman, Jr. Mar 1996 A
5509070 Schull Apr 1996 A
5513319 Finch et al. Apr 1996 A
5522040 Hofsass et al. May 1996 A
5530846 Strong Jun 1996 A
5535276 Ganesan Jul 1996 A
5552776 Wade et al. Sep 1996 A
5553143 Ross Sep 1996 A
5557765 Lipner Sep 1996 A
5563799 Brehmer et al. Oct 1996 A
5568552 Davis Oct 1996 A
5586291 Lasker et al. Dec 1996 A
5615268 Bisbee Mar 1997 A
5629980 Stefik May 1997 A
5634012 Stefik May 1997 A
5636292 Rhoads Jun 1997 A
5638443 Stefik Jun 1997 A
5638513 Ananda Jun 1997 A
5644364 Kurtze Jul 1997 A
5671412 Christiano Sep 1997 A
5673316 Auerbach Sep 1997 A
5708709 Rose Jan 1998 A
5710706 Markl et al. Jan 1998 A
5710887 Chelliah Jan 1998 A
5715403 Stefik Feb 1998 A
5717926 Browning Feb 1998 A
5721788 Powell Feb 1998 A
5724425 Chang et al. Mar 1998 A
5745573 Lipner Apr 1998 A
5745879 Wyman Apr 1998 A
5754657 Schipper May 1998 A
5754763 Bereiter May 1998 A
5757908 Cooper May 1998 A
5758068 Brandt et al. May 1998 A
5763832 Anselm Jun 1998 A
5765152 Erickson Jun 1998 A
5768382 Schneier et al. Jun 1998 A
5771354 Crawford Jun 1998 A
5774870 Storey Jun 1998 A
5790664 Coley Aug 1998 A
5793839 Farris et al. Aug 1998 A
5799088 Raike Aug 1998 A
5802592 Chess Sep 1998 A
5809144 Sirbu Sep 1998 A
5809145 Slik Sep 1998 A
5812930 Zavrel Sep 1998 A
5825876 Peterson Oct 1998 A
5825877 Dan Oct 1998 A
5825879 Davis Oct 1998 A
5825883 Archibald et al. Oct 1998 A
5841865 Sudia Nov 1998 A
5844986 Davis Dec 1998 A
5845065 Conte et al. Dec 1998 A
5845281 Benson Dec 1998 A
5864620 Pettitt Jan 1999 A
5872846 Ichikawa Feb 1999 A
5875236 Jankowitz et al. Feb 1999 A
5883670 Sporer et al. Mar 1999 A
5883955 Ronning Mar 1999 A
5883958 Ishiguro Mar 1999 A
5892900 Ginter Apr 1999 A
5892906 Chou et al. Apr 1999 A
5893086 Schmuck Apr 1999 A
5893920 Shaheen Apr 1999 A
5905799 Ganesan May 1999 A
5913038 Griffiths Jun 1999 A
5917912 Ginter Jun 1999 A
5925127 Ahmad Jul 1999 A
5926624 Katz Jul 1999 A
5935248 Kuroda Aug 1999 A
5943248 Clapp Aug 1999 A
5943422 Van Wie Aug 1999 A
5948061 Merriman Sep 1999 A
5949877 Traw Sep 1999 A
5949879 Berson Sep 1999 A
5951642 Onoe Sep 1999 A
5953502 Helbig et al. Sep 1999 A
5956408 Arnold Sep 1999 A
5982891 Ginter Nov 1999 A
5983238 Becker et al. Nov 1999 A
5983350 Minear Nov 1999 A
5987126 Okuyama Nov 1999 A
5991406 Lipner Nov 1999 A
5994710 Knee et al. Nov 1999 A
5995625 Sudia Nov 1999 A
6005945 Whitehouse Dec 1999 A
6009177 Sudia Dec 1999 A
6021438 Duvvoori Feb 2000 A
6023510 Epstein Feb 2000 A
6026293 Osborn Feb 2000 A
6049789 Frison et al. Apr 2000 A
6049878 Caronni Apr 2000 A
6052735 Ulrich Apr 2000 A
6058188 Chandersekaran May 2000 A
6058476 Matsuzaki May 2000 A
6061451 Muratani May 2000 A
6061794 Angelo et al. May 2000 A
6069647 Sullivan May 2000 A
6072874 Shin Jun 2000 A
6073124 Krishnan Jun 2000 A
6078909 Knutson Jun 2000 A
6085976 Sehr Jul 2000 A
6101606 Diersch et al. Aug 2000 A
6105069 Franklin Aug 2000 A
6112181 Shear Aug 2000 A
6119229 Martinez et al. Sep 2000 A
6122741 Patterson Sep 2000 A
6128740 Curry Oct 2000 A
6131162 Yoshiura Oct 2000 A
6134659 Sprong Oct 2000 A
6141754 Choy Oct 2000 A
6147773 Taylor Nov 2000 A
6148417 Da Silva Nov 2000 A
6151676 Cuccia Nov 2000 A
6157721 Shear Dec 2000 A
6158011 Chen Dec 2000 A
6158657 Hall, III et al. Dec 2000 A
6170060 Mott Jan 2001 B1
6175825 Fruechtel Jan 2001 B1
6178244 Takeda Jan 2001 B1
6185678 Arbaugh et al. Feb 2001 B1
6188995 Garst et al. Feb 2001 B1
6189146 Misra et al. Feb 2001 B1
6192392 Ginter Feb 2001 B1
6199068 Carpenter Mar 2001 B1
6209099 Saunders Mar 2001 B1
6212634 Geer Apr 2001 B1
6219652 Carter et al. Apr 2001 B1
6219788 Flavin Apr 2001 B1
6223291 Puhl Apr 2001 B1
6226618 Downs May 2001 B1
6226747 Larsson et al. May 2001 B1
6230185 Salas et al. May 2001 B1
6230272 Lockhart May 2001 B1
6233600 Salas et al. May 2001 B1
6233685 Smith May 2001 B1
6243439 Arai et al. Jun 2001 B1
6243470 Coppersmith Jun 2001 B1
6243692 Floyd Jun 2001 B1
6253224 Brice, Jr. et al. Jun 2001 B1
6260141 Park Jul 2001 B1
6263313 Milsted Jul 2001 B1
6263431 Lovelace et al. Jul 2001 B1
6266420 Langford Jul 2001 B1
6266480 Ezaki Jul 2001 B1
6272469 Koritzinsky et al. Aug 2001 B1
6279111 Jensenworth et al. Aug 2001 B1
6279156 Amberg et al. Aug 2001 B1
6286051 Becker et al. Sep 2001 B1
6289319 Lockwood et al. Sep 2001 B1
6289452 Arnold Sep 2001 B1
6295577 Anderson et al. Sep 2001 B1
6298446 Schreiber Oct 2001 B1
6303924 Adan et al. Oct 2001 B1
6304915 Nguyen Oct 2001 B1
6314408 Salas et al. Nov 2001 B1
6314409 Schneck et al. Nov 2001 B2
6321335 Chu Nov 2001 B1
6324544 Alam Nov 2001 B1
6327652 England et al. Dec 2001 B1
6330670 England et al. Dec 2001 B1
6334189 Granger Dec 2001 B1
6335972 Chandersekaran Jan 2002 B1
6343280 Clark Jan 2002 B2
6345256 Milsted Feb 2002 B1
6345294 O'Toole et al. Feb 2002 B1
6363488 Ginter Mar 2002 B1
6367017 Gray Apr 2002 B1
6373047 Adan et al. Apr 2002 B1
6374355 Patel Apr 2002 B1
6374357 Mohammed Apr 2002 B1
6385596 Wiser May 2002 B1
6385727 Cassagnol et al. May 2002 B1
6389535 Thomlinson May 2002 B1
6389537 Davis May 2002 B1
6389538 Gruse May 2002 B1
6389541 Patterson May 2002 B1
6393427 Vu May 2002 B1
6393434 Huang May 2002 B1
6397259 Lincke May 2002 B1
6398245 Gruse Jun 2002 B1
6405923 Seysen Jun 2002 B1
6407680 Lai Jun 2002 B1
6408170 Schmidt et al. Jun 2002 B1
6409089 Eskicioglu Jun 2002 B1
6411941 Mullor et al. Jun 2002 B1
6418421 Hurtado Jul 2002 B1
6424714 Wasilewski et al. Jul 2002 B1
6425081 Iwamura Jul 2002 B1
6438690 Patel Aug 2002 B1
6441813 Ishibashi Aug 2002 B1
6442529 Krishan et al. Aug 2002 B1
6442690 Howard Aug 2002 B1
6446207 Vanstone Sep 2002 B1
6449598 Green Sep 2002 B1
6449719 Baker Sep 2002 B1
6460140 Schoch et al. Oct 2002 B1
6463445 Suzuki Oct 2002 B1
6463534 Geiger et al. Oct 2002 B1
6490680 Scheidt Dec 2002 B1
6493758 McLain Dec 2002 B1
6496858 Frailong et al. Dec 2002 B1
6502079 Ball Dec 2002 B1
6507909 Zurko Jan 2003 B1
6515676 Kasai Feb 2003 B1
6532451 Schell Mar 2003 B1
6539364 Moribatake Mar 2003 B2
6542546 Vetro Apr 2003 B1
6549626 Al-Salqan Apr 2003 B1
6550011 Sims Apr 2003 B1
6557105 Tardo Apr 2003 B1
6567793 Hicks et al. May 2003 B1
6571216 Garg et al. May 2003 B1
6574609 Downs Jun 2003 B1
6574612 Baratti Jun 2003 B1
6581102 Amini Jun 2003 B1
6581331 Kral Jun 2003 B1
6585158 Norskog Jul 2003 B2
6587684 Hsu et al. Jul 2003 B1
6587837 Spagna Jul 2003 B1
6609201 Folmsbee Aug 2003 B1
6611358 Narayanaswamy Aug 2003 B1
6615350 Schell Sep 2003 B1
6625729 Angelo Sep 2003 B1
6631478 Wang et al. Oct 2003 B1
6646244 Aas et al. Nov 2003 B2
6664948 Crane et al. Dec 2003 B2
6665303 Saito Dec 2003 B1
6671737 Snowdon Dec 2003 B1
6671803 Pasieka Dec 2003 B1
6678828 Pham et al. Jan 2004 B1
6684198 Shimizu Jan 2004 B1
6690556 Smola et al. Feb 2004 B2
6694000 Ung et al. Feb 2004 B2
6701433 Schell Mar 2004 B1
6704873 Underwood Mar 2004 B1
6708176 Strunk et al. Mar 2004 B2
6711263 Nordenstam et al. Mar 2004 B1
6714921 Stefik Mar 2004 B2
6716652 Ortlieb Apr 2004 B1
6738810 Kramer et al. May 2004 B1
6757517 Chang Jun 2004 B2
6763458 Watanabe Jul 2004 B1
6765470 Shinzaki Jul 2004 B2
6772340 Peinado Aug 2004 B1
6775655 Peinado Aug 2004 B1
6781956 Cheung Aug 2004 B1
6791157 Casto et al. Sep 2004 B1
6792531 Heiden Sep 2004 B2
6799270 Bull Sep 2004 B1
6816596 Peinado Nov 2004 B1
6816809 Circenis Nov 2004 B2
6816900 Vogel et al. Nov 2004 B1
6826606 Freeman Nov 2004 B2
6826690 Hind Nov 2004 B1
6829708 Peinado Dec 2004 B1
6834352 Shin Dec 2004 B2
6839841 Medvinsky et al. Jan 2005 B1
6844871 Hinckley et al. Jan 2005 B1
6847942 Land et al. Jan 2005 B1
6850252 Hoffberg Feb 2005 B1
6851051 Bolle et al. Feb 2005 B1
6853380 Alcorn Feb 2005 B2
6859790 Nonaka Feb 2005 B1
6868433 Philyaw Mar 2005 B1
6871283 Zurko et al. Mar 2005 B1
6895504 Zhang May 2005 B1
6898286 Murray May 2005 B2
6920567 Doherty et al. Jul 2005 B1
6922724 Freeman Jul 2005 B1
6931545 Ta Aug 2005 B1
6934840 Rich Aug 2005 B2
6934942 Chilimbi Aug 2005 B1
6954728 Kusumoto et al. Oct 2005 B1
6957186 Guheen et al. Oct 2005 B1
6959288 Medina Oct 2005 B1
6959290 Stefik Oct 2005 B2
6959291 Armstrong Oct 2005 B1
6959348 Chan Oct 2005 B1
6961858 Fransdonk Nov 2005 B2
6973444 Blinn Dec 2005 B1
6976162 Ellison et al. Dec 2005 B1
6976163 Hind Dec 2005 B1
6981045 Brooks Dec 2005 B1
6983050 Yacobi et al. Jan 2006 B1
6983371 Hurtado Jan 2006 B1
6986042 Griffin Jan 2006 B2
6990174 Eskelinen Jan 2006 B2
6993648 Goodman et al. Jan 2006 B2
7000100 Lacombe et al. Feb 2006 B2
7000829 Harris et al. Feb 2006 B1
7010808 Leung Mar 2006 B1
7013384 Challener et al. Mar 2006 B2
7016498 Peinado Mar 2006 B2
7017188 Schmeidler Mar 2006 B1
7020704 Lipscomb Mar 2006 B1
7024393 Peinado Apr 2006 B1
7028149 Grawrock Apr 2006 B2
7028180 Aull Apr 2006 B1
7039643 Sena May 2006 B2
7039801 Narin May 2006 B2
7043633 Fink May 2006 B1
7051005 Peinado May 2006 B1
7052530 Edlund et al. May 2006 B2
7054335 Wee May 2006 B2
7054468 Yang May 2006 B2
7054964 Chan May 2006 B2
7055169 Delpuch May 2006 B2
7058819 Okaue Jun 2006 B2
7069442 Sutton, II Jun 2006 B2
7069595 Cognigni et al. Jun 2006 B2
7073056 Kocher Jul 2006 B2
7073063 Peinado Jul 2006 B2
7076652 Ginter et al. Jul 2006 B2
7080039 Marsh Jul 2006 B1
7080043 Chase Jul 2006 B2
7089309 Ramaley Aug 2006 B2
7089594 Lai Aug 2006 B2
7095852 Wack Aug 2006 B2
7096469 Kubala et al. Aug 2006 B1
7097357 Johnson et al. Aug 2006 B2
7103574 Peinado Sep 2006 B1
7111058 Nguyen Sep 2006 B1
7113912 Stefik et al. Sep 2006 B2
7114168 Wyatt et al. Sep 2006 B1
7116969 Park Oct 2006 B2
7117183 Blair et al. Oct 2006 B2
7120250 Candelore Oct 2006 B2
7120873 Li Oct 2006 B2
7121460 Parsons et al. Oct 2006 B1
7123608 Scott Oct 2006 B1
7124938 Marsh Oct 2006 B1
7127579 Zimmer Oct 2006 B2
7130951 Christie et al. Oct 2006 B1
7131004 Lyle Oct 2006 B1
7133846 Ginter Nov 2006 B1
7133925 Mukherjee Nov 2006 B2
7136838 Peinado Nov 2006 B1
7143066 Shear Nov 2006 B2
7143297 Buchheit et al. Nov 2006 B2
7143354 Li Nov 2006 B2
7146504 Parks Dec 2006 B2
7155475 Agnoli Dec 2006 B2
7162645 Iguchi et al. Jan 2007 B2
7171539 Mansell et al. Jan 2007 B2
7174457 England et al. Feb 2007 B1
7194092 England Mar 2007 B1
7200680 Evans Apr 2007 B2
7200760 Riebe Apr 2007 B2
7203310 England Apr 2007 B2
7203620 Li Apr 2007 B2
7203966 Abburi Apr 2007 B2
7207039 Komarla et al. Apr 2007 B2
7213005 Mourad May 2007 B2
7213266 Maher et al. May 2007 B1
7216363 Serkowski May 2007 B2
7216368 Ishiguro May 2007 B2
7222062 Goud May 2007 B2
7224805 Hurst May 2007 B2
7233666 Lee Jun 2007 B2
7233948 Shamoon et al. Jun 2007 B1
7234144 Wilt et al. Jun 2007 B2
7236455 Proudler et al. Jun 2007 B1
7254836 Alkove Aug 2007 B2
7260721 Tanaka Aug 2007 B2
7266569 Cutter et al. Sep 2007 B2
7266714 Davies Sep 2007 B2
7278165 Molaro Oct 2007 B2
7290699 Reddy Nov 2007 B2
7296154 Evans Nov 2007 B2
7296296 Dunbar Nov 2007 B2
7299292 Morten Nov 2007 B2
7299358 Chateau et al. Nov 2007 B2
7299504 Tiller Nov 2007 B1
7310732 Matsuyama Dec 2007 B2
7315941 Ramzan Jan 2008 B2
7336791 Ishiguro Feb 2008 B2
7340055 Hori Mar 2008 B2
7343496 Hsiang Mar 2008 B1
7350228 Peled Mar 2008 B2
7353209 Peinado Apr 2008 B1
7353402 Bourne et al. Apr 2008 B2
7356709 Gunyakti et al. Apr 2008 B2
7359807 Frank et al. Apr 2008 B2
7360253 Frank et al. Apr 2008 B2
7376976 Fierstein May 2008 B2
7382879 Miller Jun 2008 B1
7382883 Cross Jun 2008 B2
7383205 Peinado Jun 2008 B1
7392429 Westerinen et al. Jun 2008 B2
7395245 Okamoto et al. Jul 2008 B2
7395452 Nicholson et al. Jul 2008 B2
7406446 Frank et al. Jul 2008 B2
7406603 MacKay Jul 2008 B1
7421024 Castillo Sep 2008 B2
7421413 Frank et al. Sep 2008 B2
7426752 Agrawal et al. Sep 2008 B2
7433546 Marriott Oct 2008 B2
7441121 Cutter Oct 2008 B2
7441246 Auerbach et al. Oct 2008 B2
7451202 Nakahara Nov 2008 B2
7461249 Pearson et al. Dec 2008 B1
7464103 Siu Dec 2008 B2
7474106 Kanno Jan 2009 B2
7475106 Agnoli Jan 2009 B2
7490356 Lieblich et al. Feb 2009 B2
7493487 Phillips et al. Feb 2009 B2
7494277 Setala Feb 2009 B2
7499545 Bagshaw Mar 2009 B1
7500267 McKune Mar 2009 B2
7502945 Bourne Mar 2009 B2
7519816 Phillips et al. Apr 2009 B2
7526649 Wiseman Apr 2009 B2
7539863 Phillips May 2009 B2
7540024 Phillips et al. May 2009 B2
7549060 Bourne et al. Jun 2009 B2
7552331 Evans Jun 2009 B2
7558463 Jain Jul 2009 B2
7562220 Frank et al. Jul 2009 B2
7565325 Lenard Jul 2009 B2
7568096 Evans et al. Jul 2009 B2
7574706 Meulemans Aug 2009 B2
7574747 Oliveira Aug 2009 B2
7584502 Alkove Sep 2009 B2
7590841 Sherwani Sep 2009 B2
7596784 Abrams Sep 2009 B2
7609653 Amin Oct 2009 B2
7610631 Frank et al. Oct 2009 B2
7617401 Marsh Nov 2009 B2
7644239 Westerinen et al. Jan 2010 B2
7653943 Evans Jan 2010 B2
7665143 Havens Feb 2010 B2
7669056 Frank Feb 2010 B2
7680744 Blinn Mar 2010 B2
7694153 Ahdout Apr 2010 B2
7703141 Alkove Apr 2010 B2
7739505 Reneris Jun 2010 B2
7752674 Evans Jul 2010 B2
7770205 Frank Aug 2010 B2
7809646 Rose Oct 2010 B2
7810163 Evans Oct 2010 B2
7814532 Cromer et al. Oct 2010 B2
7822863 Balfanz Oct 2010 B2
7860250 Russ Dec 2010 B2
7877607 Circenis Jan 2011 B2
7881315 Haveson Feb 2011 B2
7891007 Waxman et al. Feb 2011 B2
7900140 Mohammed et al. Mar 2011 B2
7903117 Howell Mar 2011 B2
7958029 Bobich et al. Jun 2011 B1
7979721 Westerinen Jul 2011 B2
8060923 Cutter Nov 2011 B2
8074287 Barde Dec 2011 B2
8095985 Dunbar Jan 2012 B2
8176564 Frank May 2012 B2
8248423 Howell Aug 2012 B2
8347078 Jain Jan 2013 B2
20010010076 Wray Jul 2001 A1
20010021252 Carter Sep 2001 A1
20010033619 Hanamura Oct 2001 A1
20010034711 Tashenberg Oct 2001 A1
20010044782 Hughes Nov 2001 A1
20010049667 Moribatake Dec 2001 A1
20010051996 Cooper Dec 2001 A1
20010052077 Fung Dec 2001 A1
20010053223 Ishibashi Dec 2001 A1
20010056413 Suzuki et al. Dec 2001 A1
20010056539 Pavlin et al. Dec 2001 A1
20020002597 Morrell, Jr. Jan 2002 A1
20020002674 Grimes Jan 2002 A1
20020007310 Long Jan 2002 A1
20020010863 Mankefors Jan 2002 A1
20020012432 England Jan 2002 A1
20020013772 Peinado Jan 2002 A1
20020019814 Ganesan Feb 2002 A1
20020023207 Olik Feb 2002 A1
20020023212 Proudler Feb 2002 A1
20020026574 Watanabe Feb 2002 A1
20020035723 Inoue Mar 2002 A1
20020036991 Inoue Mar 2002 A1
20020044654 Maeda Apr 2002 A1
20020046098 Maggio Apr 2002 A1
20020049679 Russell Apr 2002 A1
20020055906 Katz et al. May 2002 A1
20020057795 Spurgat May 2002 A1
20020059518 Smeets May 2002 A1
20020063933 Maeda May 2002 A1
20020065781 Hillegass May 2002 A1
20020073068 Guha Jun 2002 A1
20020091569 Kitaura et al. Jul 2002 A1
20020095603 Godwin et al. Jul 2002 A1
20020097872 Maliszewski Jul 2002 A1
20020103880 Konetski Aug 2002 A1
20020104096 Cramer Aug 2002 A1
20020107701 Batty et al. Aug 2002 A1
20020111916 Coronna et al. Aug 2002 A1
20020112171 Ginter et al. Aug 2002 A1
20020116707 Morris Aug 2002 A1
20020118835 Uemura Aug 2002 A1
20020123964 Kramer et al. Sep 2002 A1
20020124212 Nitschke et al. Sep 2002 A1
20020129359 Lichner Sep 2002 A1
20020138549 Urien Sep 2002 A1
20020141451 Gates et al. Oct 2002 A1
20020144131 Spacey Oct 2002 A1
20020147601 Fagan Oct 2002 A1
20020147782 Dimitrova et al. Oct 2002 A1
20020147912 Shmueli et al. Oct 2002 A1
20020164018 Wee Nov 2002 A1
20020169974 McKune Nov 2002 A1
20020178071 Walker et al. Nov 2002 A1
20020184482 Lacombe et al. Dec 2002 A1
20020184508 Bialick et al. Dec 2002 A1
20020186843 Weinstein Dec 2002 A1
20020193101 McAlinden Dec 2002 A1
20020194132 Pearson et al. Dec 2002 A1
20020198845 Lao Dec 2002 A1
20020198846 Lao Dec 2002 A1
20030004880 Banerjee Jan 2003 A1
20030005135 Inoue et al. Jan 2003 A1
20030005335 Watanabe Jan 2003 A1
20030014323 Scheer Jan 2003 A1
20030014496 Spencer Jan 2003 A1
20030021416 Brown Jan 2003 A1
20030023564 Padhye Jan 2003 A1
20030027549 Kiel et al. Feb 2003 A1
20030028454 Ooho et al. Feb 2003 A1
20030028488 Mohammed Feb 2003 A1
20030028643 Jabri Feb 2003 A1
20030035409 Wang et al. Feb 2003 A1
20030037246 Goodman et al. Feb 2003 A1
20030040960 Eckmann Feb 2003 A1
20030041008 Grey Feb 2003 A1
20030046026 Levy et al. Mar 2003 A1
20030046238 Nonaka Mar 2003 A1
20030048473 Rosen Mar 2003 A1
20030055898 Yeager Mar 2003 A1
20030056107 Cammack et al. Mar 2003 A1
20030065918 Willey Apr 2003 A1
20030069854 Hsu Apr 2003 A1
20030069981 Trovato Apr 2003 A1
20030078853 Peinado Apr 2003 A1
20030084104 Salem et al. May 2003 A1
20030084278 Cromer et al. May 2003 A1
20030084285 Cromer et al. May 2003 A1
20030084306 Abburi May 2003 A1
20030084337 Simionescu et al. May 2003 A1
20030084352 Schwartz et al. May 2003 A1
20030088500 Shinohara et al. May 2003 A1
20030093694 Medvinsky et al. May 2003 A1
20030097596 Muratov et al. May 2003 A1
20030097655 Novak May 2003 A1
20030110388 Pavlin et al. Jun 2003 A1
20030115147 Feldman Jun 2003 A1
20030115458 Song Jun 2003 A1
20030120935 Teal Jun 2003 A1
20030126086 Safadi Jul 2003 A1
20030126519 Odorcic Jul 2003 A1
20030126608 Safadi Jul 2003 A1
20030131252 Barton et al. Jul 2003 A1
20030133576 Grumiaux Jul 2003 A1
20030135380 Lehr et al. Jul 2003 A1
20030149670 Cronce Aug 2003 A1
20030149671 Yamamoto et al. Aug 2003 A1
20030156572 Hui et al. Aug 2003 A1
20030156719 Cronce Aug 2003 A1
20030159037 Taki Aug 2003 A1
20030163383 Engelhart Aug 2003 A1
20030163712 LaMothe et al. Aug 2003 A1
20030165241 Fransdonk Sep 2003 A1
20030172376 Coffin, III et al. Sep 2003 A1
20030185395 Lee Oct 2003 A1
20030188165 Sutton et al. Oct 2003 A1
20030188179 Challener Oct 2003 A1
20030194094 Lampson Oct 2003 A1
20030196102 McCarroll Oct 2003 A1
20030196106 Erfani et al. Oct 2003 A1
20030198350 Foster Oct 2003 A1
20030200336 Pal et al. Oct 2003 A1
20030208338 Challener et al. Nov 2003 A1
20030208573 Harrison et al. Nov 2003 A1
20030219127 Russ Nov 2003 A1
20030221100 Russ Nov 2003 A1
20030229702 Hensbergen et al. Dec 2003 A1
20030233553 Parks Dec 2003 A1
20030236978 Evans Dec 2003 A1
20040001088 Stancil et al. Jan 2004 A1
20040001594 Krishnaswamy Jan 2004 A1
20040003190 Childs et al. Jan 2004 A1
20040003268 Bourne Jan 2004 A1
20040003269 Waxman Jan 2004 A1
20040003270 Bourne Jan 2004 A1
20040003288 Wiseman et al. Jan 2004 A1
20040010440 Lenard et al. Jan 2004 A1
20040010684 Douglas Jan 2004 A1
20040010717 Simec Jan 2004 A1
20040019456 Cirenis Jan 2004 A1
20040023636 Gurel et al. Feb 2004 A1
20040030912 Merkle, Jr. et al. Feb 2004 A1
20040034816 Richard Feb 2004 A1
20040039916 Aldis et al. Feb 2004 A1
20040039924 Baldwin et al. Feb 2004 A1
20040039960 Kassayan Feb 2004 A1
20040044629 Rhodes et al. Mar 2004 A1
20040054629 de Jong Mar 2004 A1
20040054678 Okamoto Mar 2004 A1
20040054907 Chateau et al. Mar 2004 A1
20040054908 Circenis et al. Mar 2004 A1
20040054909 Serkowski et al. Mar 2004 A1
20040059937 Nakano Mar 2004 A1
20040064351 Mikurak Apr 2004 A1
20040064707 McCann et al. Apr 2004 A1
20040067746 Johnson Apr 2004 A1
20040073670 Chack et al. Apr 2004 A1
20040083289 Karger Apr 2004 A1
20040088548 Smetters et al. May 2004 A1
20040093371 Burrows et al. May 2004 A1
20040093508 Foerstner et al. May 2004 A1
20040098583 Weber May 2004 A1
20040107125 Guheen Jun 2004 A1
20040107356 Shamoon et al. Jun 2004 A1
20040107359 Kawano et al. Jun 2004 A1
20040107368 Colvin Jun 2004 A1
20040111609 Kaji Jun 2004 A1
20040111615 Nyang Jun 2004 A1
20040123127 Teicher et al. Jun 2004 A1
20040125755 Roberts Jul 2004 A1
20040128251 Adam et al. Jul 2004 A1
20040133794 Kocher et al. Jul 2004 A1
20040139027 Molaro Jul 2004 A1
20040139312 Medvinsky Jul 2004 A1
20040146015 Cross Jul 2004 A1
20040158742 Srinivasan Aug 2004 A1
20040184605 Soliman Sep 2004 A1
20040187001 Bousis Sep 2004 A1
20040193648 Lai Sep 2004 A1
20040193919 Dabbish et al. Sep 2004 A1
20040196975 Zhu Oct 2004 A1
20040199769 Proudler Oct 2004 A1
20040205028 Verosub et al. Oct 2004 A1
20040205357 Kuo et al. Oct 2004 A1
20040205510 Rising Oct 2004 A1
20040210695 Weber Oct 2004 A1
20040220858 Maggio Nov 2004 A1
20040225894 Colvin Nov 2004 A1
20040249768 Kontio Dec 2004 A1
20040255000 Simionescu et al. Dec 2004 A1
20040268120 Mittal et al. Dec 2004 A1
20050010766 Holden Jan 2005 A1
20050015343 Nagai et al. Jan 2005 A1
20050021859 Willian Jan 2005 A1
20050021944 Craft et al. Jan 2005 A1
20050021992 Aida Jan 2005 A1
20050028000 Bulusu et al. Feb 2005 A1
20050033747 Wittkotter Feb 2005 A1
20050039013 Bajikar et al. Feb 2005 A1
20050044197 Lai Feb 2005 A1
20050044391 Noguchi Feb 2005 A1
20050044397 Bjorkengren Feb 2005 A1
20050050355 Graunke Mar 2005 A1
20050060388 Tatsumi et al. Mar 2005 A1
20050060542 Risan Mar 2005 A1
20050065880 Amato et al. Mar 2005 A1
20050066353 Fransdonk Mar 2005 A1
20050071280 Irwin Mar 2005 A1
20050080701 Tunney et al. Apr 2005 A1
20050086174 Eng Apr 2005 A1
20050089164 Lang Apr 2005 A1
20050091104 Abraham Apr 2005 A1
20050091488 Dunbar Apr 2005 A1
20050091526 Alkove Apr 2005 A1
20050097204 Horowitz et al. May 2005 A1
20050102181 Scroggie et al. May 2005 A1
20050108547 Sakai May 2005 A1
20050108564 Freeman et al. May 2005 A1
20050120125 Morten Jun 2005 A1
20050120251 Fukumori Jun 2005 A1
20050123276 Sugaya Jun 2005 A1
20050125673 Cheng et al. Jun 2005 A1
20050129296 Setala Jun 2005 A1
20050131832 Fransdonk Jun 2005 A1
20050132150 Jewell et al. Jun 2005 A1
20050138370 Goud et al. Jun 2005 A1
20050138389 Catherman et al. Jun 2005 A1
20050138406 Cox Jun 2005 A1
20050138423 Ranganathan Jun 2005 A1
20050141717 Cromer et al. Jun 2005 A1
20050144099 Deb et al. Jun 2005 A1
20050149722 Wiseman Jul 2005 A1
20050149729 Zimmer Jul 2005 A1
20050166051 Buer Jul 2005 A1
20050172121 Risan et al. Aug 2005 A1
20050182921 Duncan Aug 2005 A1
20050182940 Sutton Aug 2005 A1
20050188843 Edlund et al. Sep 2005 A1
20050198510 Robert Sep 2005 A1
20050203801 Morgenstern et al. Sep 2005 A1
20050204205 Ring et al. Sep 2005 A1
20050210252 Freeman Sep 2005 A1
20050213761 Walmsley et al. Sep 2005 A1
20050216577 Durham et al. Sep 2005 A1
20050221766 Brizek et al. Oct 2005 A1
20050226170 Relan Oct 2005 A1
20050235141 Ibrahim et al. Oct 2005 A1
20050239434 Marlowe Oct 2005 A1
20050240533 Cutter et al. Oct 2005 A1
20050240985 Alkove Oct 2005 A1
20050246521 Bade et al. Nov 2005 A1
20050246525 Bade et al. Nov 2005 A1
20050246552 Bade et al. Nov 2005 A1
20050251803 Turner Nov 2005 A1
20050257073 Bade et al. Nov 2005 A1
20050262022 Oliveira Nov 2005 A1
20050265549 Sugiyama Dec 2005 A1
20050268115 Barde Dec 2005 A1
20050268174 Kumagai Dec 2005 A1
20050275866 Corlett Dec 2005 A1
20050278519 Luebke et al. Dec 2005 A1
20050279827 Mascavage et al. Dec 2005 A1
20050283601 Tahan Dec 2005 A1
20050286476 Crosswy et al. Dec 2005 A1
20050289177 Hohmann, II Dec 2005 A1
20050289343 Tahan Dec 2005 A1
20060008256 Khedouri Jan 2006 A1
20060010074 Zeitsiff Jan 2006 A1
20060010076 Cutter Jan 2006 A1
20060010326 Bade et al. Jan 2006 A1
20060015717 Liu et al. Jan 2006 A1
20060015718 Liu et al. Jan 2006 A1
20060015732 Liu Jan 2006 A1
20060020784 Jonker et al. Jan 2006 A1
20060020821 Waltermann Jan 2006 A1
20060020860 Tardif Jan 2006 A1
20060026418 Bade Feb 2006 A1
20060026419 Arndt et al. Feb 2006 A1
20060026422 Bade et al. Feb 2006 A1
20060041943 Singer Feb 2006 A1
20060045267 Moore Mar 2006 A1
20060053112 Chitkara Mar 2006 A1
20060055506 Nicolas Mar 2006 A1
20060072748 Buer Apr 2006 A1
20060072762 Buer Apr 2006 A1
20060074600 Sastry et al. Apr 2006 A1
20060075014 Tharappel et al. Apr 2006 A1
20060075223 Bade et al. Apr 2006 A1
20060085634 Jain et al. Apr 2006 A1
20060085637 Pinkas Apr 2006 A1
20060085844 Buer et al. Apr 2006 A1
20060089917 Strom et al. Apr 2006 A1
20060090084 Buer Apr 2006 A1
20060100010 Gatto et al. May 2006 A1
20060106845 Frank et al. May 2006 A1
20060106920 Steeb et al. May 2006 A1
20060107306 Thirumalai et al. May 2006 A1
20060107328 Frank et al. May 2006 A1
20060107335 Frank et al. May 2006 A1
20060112267 Zimmer et al. May 2006 A1
20060117177 Buer Jun 2006 A1
20060129496 Chow et al. Jun 2006 A1
20060129824 Hoff et al. Jun 2006 A1
20060130130 Kablotsky Jun 2006 A1
20060143431 Rothman Jun 2006 A1
20060149966 Buskey Jul 2006 A1
20060156008 Frank Jul 2006 A1
20060156416 Huotari et al. Jul 2006 A1
20060165005 Frank et al. Jul 2006 A1
20060165227 Steeb Jul 2006 A1
20060167814 Peinado Jul 2006 A1
20060167815 Peinado Jul 2006 A1
20060168664 Frank et al. Jul 2006 A1
20060173787 Weber et al. Aug 2006 A1
20060174110 Strom Aug 2006 A1
20060206618 Zimmer et al. Sep 2006 A1
20060212363 Peinado Sep 2006 A1
20060212945 Donlin Sep 2006 A1
20060213997 Frank et al. Sep 2006 A1
20060229990 Shimoji Oct 2006 A1
20060230042 Butler Oct 2006 A1
20060235798 Alkove Oct 2006 A1
20060235799 Evans Oct 2006 A1
20060235801 Strom Oct 2006 A1
20060242406 Barde Oct 2006 A1
20060248596 Jain Nov 2006 A1
20060265758 Khandelwal Nov 2006 A1
20060282319 Maggio Dec 2006 A1
20060282899 Raciborski Dec 2006 A1
20070033102 Frank et al. Feb 2007 A1
20070058718 Shen Mar 2007 A1
20070058807 Marsh Mar 2007 A1
20070153910 Levett Jul 2007 A1
20070280422 Setala Dec 2007 A1
20070297426 Haveson Dec 2007 A1
20080021839 Peinado Jan 2008 A1
20080040800 Park Feb 2008 A1
20080256647 Kim et al. Oct 2008 A1
20090070454 McKinnon, III et al. Mar 2009 A1
20090132815 Ginter May 2009 A1
20090158036 Barde Jun 2009 A1
20100146576 Costanzo et al. Jun 2010 A1
20100177891 Keidar et al. Jul 2010 A1
20100250927 Bradley Sep 2010 A1
20110128290 Howell Jun 2011 A1
20120137127 Jain May 2012 A1
Foreign Referenced Citations (168)
Number Date Country
1287665 Mar 2001 CN
1305159 Jul 2001 CN
1393783 Jan 2003 CN
1396568 Feb 2003 CN
1531673 Sep 2004 CN
1617152 May 2005 CN
0 387 599 Sep 1990 EP
0 409 397 Jan 1991 EP
0 613 073 Aug 1994 EP
0635790 Jan 1995 EP
0 665 486 Aug 1995 EP
0 679 978 Nov 1995 EP
0 709 760 May 1996 EP
0 715 245 Jun 1996 EP
0 715 246 Jun 1996 EP
0 715 247 Jun 1996 EP
0 725 512 Aug 1996 EP
0 735 719 Oct 1996 EP
0 752 663 Jan 1997 EP
0 778 512 Jun 1997 EP
0 798 892 Oct 1997 EP
0843449 May 1998 EP
0 849 658 Jun 1998 EP
0 874 300 Oct 1998 EP
0 887 723 Dec 1998 EP
0 994 475 Apr 2000 EP
1 045 388 Oct 2000 EP
1061465 Dec 2000 EP
1 083 480 Mar 2001 EP
1085396 Mar 2001 EP
1 128 342 Aug 2001 EP
1120967 Aug 2001 EP
1 130 492 Sep 2001 EP
1 191 422 Mar 2002 EP
1 253 740 Oct 2002 EP
1 292 065 Mar 2003 EP
1 338 992 Aug 2003 EP
1 363 424 Nov 2003 EP
1 376 302 Jan 2004 EP
1 378 811 Jan 2004 EP
1387237 Feb 2004 EP
1429224 Jun 2004 EP
1223722 Aug 2004 EP
1460514 Sep 2004 EP
1233337 Aug 2005 EP
1 582 962 Oct 2005 EP
2 492 774 Sep 2012 EP
2359969 Sep 2001 GB
2378780 Feb 2003 GB
02-291043 Nov 1990 JP
H0535461 Feb 1993 JP
H0635718 Feb 1994 JP
H07036559 Feb 1995 JP
H07141153 Jun 1995 JP
H086729 Jan 1996 JP
09-006880 Jan 1997 JP
09-069044 Mar 1997 JP
2001526550 May 1997 JP
H09185504 Jul 1997 JP
H9251494 Sep 1997 JP
2000-242491 Sep 2000 JP
2000293369 Oct 2000 JP
2001051742 Feb 2001 JP
2001-075870 Mar 2001 JP
2003510684 Mar 2001 JP
2001101033 Apr 2001 JP
2003510713 Apr 2001 JP
2001-175605 Jun 2001 JP
2001-175606 Jun 2001 JP
2001184472 Jul 2001 JP
2001-290650 Oct 2001 JP
2001312325 Nov 2001 JP
2001331229 Nov 2001 JP
2001338233 Dec 2001 JP
2002108478 Apr 2002 JP
2002108870 Apr 2002 JP
2002374327 Dec 2002 JP
2003-058660 Feb 2003 JP
2003507785 Feb 2003 JP
2003-101526 Apr 2003 JP
2003-115017 Apr 2003 JP
2003-157334 May 2003 JP
2003140761 May 2003 JP
2003140762 May 2003 JP
2003157335 May 2003 JP
2003208314 Jul 2003 JP
2003248522 Sep 2003 JP
2003-284024 Oct 2003 JP
2003296487 Oct 2003 JP
2003-330560 Nov 2003 JP
2002182562 Jan 2004 JP
2004-062886 Feb 2004 JP
2004062561 Feb 2004 JP
2004118327 Apr 2004 JP
2004164491 Jun 2004 JP
2004295846 Oct 2004 JP
2004304755 Oct 2004 JP
2007525774 Sep 2007 JP
H08-054952 Feb 2011 JP
20010000805 Jan 2001 KR
20020037453 May 2002 KR
10-2004-0000323 Jan 2004 KR
1020040098627 Nov 2004 KR
20050008439 Jan 2005 KR
20050021782 Mar 2005 KR
10-0879907 Jan 2009 KR
2 207 618 Jun 2003 RU
200508970 Mar 2005 TW
WO 9301550 Jan 1993 WO
WO 9613013 May 1996 WO
WO 9624092 Aug 1996 WO
WO 9627155 Sep 1996 WO
WO-9721162 Jun 1997 WO
WO 9725798 Jul 1997 WO
WO 9743763 Nov 1997 WO
WO 9802793 Jan 1998 WO
WO 9809209 Mar 1998 WO
WO 9810381 Mar 1998 WO
WO-9811478 Mar 1998 WO
WO 9821679 May 1998 WO
WO 9821683 May 1998 WO
WO 9824037 Jun 1998 WO
WO 9833106 Jul 1998 WO
WO 9837481 Aug 1998 WO
9842098 Sep 1998 WO
WO 9858306 Dec 1998 WO
9915970 Apr 1999 WO
9953689 Oct 1999 WO
0008909 Feb 2000 WO
WO-0054126 Sep 2000 WO
0057684 Oct 2000 WO
0058810 Oct 2000 WO
0058859 Oct 2000 WO
0059150 Oct 2000 WO
0059152 Oct 2000 WO
WO 0058811 Oct 2000 WO
WO 0059150 Oct 2000 WO
WO-0135293 May 2001 WO
0144908 Jun 2001 WO
WO-0145012 Jun 2001 WO
WO 0152020 Jul 2001 WO
WO 0152021 Jul 2001 WO
WO 0163512 Aug 2001 WO
WO-0163512 Aug 2001 WO
WO-0177795 Oct 2001 WO
WO-0193461 Dec 2001 WO
WO-0208969 Jan 2002 WO
WO 0219598 Mar 2002 WO
WO 0228006 Apr 2002 WO
0237371 May 2002 WO
WO 02057865 Jul 2002 WO
WO-02056155 Jul 2002 WO
WO 02088991 Nov 2002 WO
WO-02103495 Dec 2002 WO
WO-03009115 Jan 2003 WO
WO 03034313 Apr 2003 WO
WO-03030434 Apr 2003 WO
WO 03058508 Jul 2003 WO
WO03073688 Sep 2003 WO
WO-03107585 Dec 2003 WO
WO03107588 Dec 2003 WO
WO-2004092886 Oct 2004 WO
WO 2004097606 Nov 2004 WO
WO 2004102459 Nov 2004 WO
WO 2005010763 Feb 2005 WO
2006065012 Jun 2006 WO
2006115533 Nov 2006 WO
WO-2007032974 Mar 2007 WO
Non-Patent Literature Citations (350)
Entry
Changgui Shi; A fast MPEG video encryption algorithm; Year of Publication: 1998 ; Bristol, United Kingdom ; pp. 81-88.
Lotspiech, “Broadcast Encryption's Bright Future,” IEEE Computer, Aug. 2002.
Memon, “Protecting Digital Media Content,” Communications of the ACM, Jul. 1998.
Ripley, “Content Protection in the Digital Home,” Intel Technology Journal, Nov. 2002.
Steinebach, “Digital Watermarking Basics—Applications—Limits,” NFD Information—Wissenschaft und Praxis, Jul. 2002.
DMOD WorkSpace OEM Unique Features; http://www.dmod.com/oem_features, downloaded Jan. 12, 2005.
Search Report Ref 306928.03 WO, for Application No. PCT/US05/30490, Date of mailing of the international search report Sep. 18, 2007, Authorized Officer Jacqueline A. Whitfield.
Search Report Ref 313743.02, for Application No. PCT/US 06/10327, mailed Oct. 22, 2007.
Search Report Ref 313744.02, for Application No. PCT/US06/10664, mailed Oct. 23, 2007.
Preliminary Report on Patentability Ref 313744.02, for Application No. PCT/US2006/010664, mailed Nov. 22, 2007.
Arbaugh, “A Secure and Reliable Bootstrap Architecture,” IEEE Symposium on Security and Privacy, May 1997, pp. 65-71.
Search Report Ref 313746.02 WO, for Application No. PCT/US05/30489, mailed Aug. 2, 2007.
Oh, Kyung-Seok, “Acceleration technique for volume rendering using 2D texture based ray plane casting on GPU”, 2006 Intl. Conf. CIS, Nov. 3-6, 2006.
Slusallek, “Vision—An Architecture for Global Illumination Calculation”, IEEE Transactions on Visualization and Computer Graphics, vol. 1, No. 1; Mar. 1995; pp. 77-96.
Zhao, Hua, "A New Watermarking Scheme for CAD Engineering Drawings", 9th Intl. Conf. Computer-Aided Industrial Design and Conceptual Design; CAID/CD 2008; Nov. 22-25, 2008.
Kuan-Ting Shen, “A New Digital Watermarking Technique for Video.” Proceedings VISUAL 2002, Hsin Chu, Taiwan, Mar. 11-13, 2002.
EP Partial Search Report, Ref. FB19620, for Application No. 06774630.5-1243 / 1902367 PCT/US2006026915, Mar. 29, 2012.
EP Communication for Application No. 04779544.8-2212 / 1678570 PCT/US2004024529 reference EP35527RK900kja, Mar. 9, 2010.
EP Communication for Application No. 04 779 544.8-2212, reference EP35527RK900kja, May 10, 2010.
EP Summons to attend oral proceedings for Application No. 04779544.8-2212 / 1678570, reference EP35527RK900kja, May 10, 2012.
Bovet, "An Overview of Unix Kernels", 2001, O'Reilly, USA, XP-002569419.
JP Notice of Rejection for Application No. 2006-536592, Nov. 19, 2010.
CN First Office Action for Application No. 200480003262.8, Nov. 30, 2007.
CN Second Office Action for Application No. 200480003262.8, Jun. 13, 2008.
CA Office Action for Application No. 2,511,397, Mar. 22, 2012.
PCT International Search Report and Written Opinion for Application No. PCT/US04/24529, reference MSFT-4429, May 12, 2006.
JP Notice of Rejection for Application No. 2006-536586, Nov. 12, 2010.
EP Communication for Application No. 04 779 478.9-2212, reference EP35512RK900peu, May 21, 2010.
EP Communication for Application No. 04 779 478.9-2212, reference EP35512RK900peu, Apr. 3, 2012.
AU Examiner's first report on patent application No. 2004287141, Dec. 8, 2008.
PCT International Search Report and Written Opinion for Application No. PCT/US04/24433, reference MSFT-4430, Nov. 29, 2005.
CN First Office Action for Application No. 200480003286.3, Nov. 27, 2009.
CA Office Action for Application No. 2,511,531 , Mar. 22, 2012.
CN Notice on First Office Action for Application No. 200510056328.6, Jul. 24, 2009.
EP Communication for Application No. 05 101 873.7-1247, reference EP34127TE900kja, Dec. 19, 2006.
JP Notice of Rejection for Application No. 2005-067120, Dec. 28, 2010.
Bellovin; “Defending Against Sequence Number Attacks” AT&T Research, IETF Standard, Internet Engineering Task Force, May 1996.
Chung Lae Kim, "Development of WDM Integrated Optical Protection Socket Module," Journal of Korean Institute of Telematics and Electronics, Mar. 1996.
Gardan, N+P (With and Without Priority) and Virtual Channel Protection: Comparison of Availability and Application to an Optical Transport Network, 7th International Conference on Reliability and Maintainability, Jun. 18, 1990.
Microsoft, “Digital Rights Management for Audio Drivers” Updated Dec. 4, 2001; XP002342580.
Microsoft, “Hardware Platform for the Next-Generation Secure Computing Base”, Windows Platform Design Notes, 2003, XP-002342581.
Microsoft, Security Model for the Next-Generation Secure Computing Base, Windows Platform Design Notes, 2003, XP002342582.
Choudhury, “Copyright Protection for Electronic Publishing Over Computer Networks”, Submitted to IEEE Network Magazine Jun. 1994.
CN Third Office Action for Application No. 03145223.X, Mar. 7, 2008.
EP Communication for Application No. 03 011 235.3-1247, Reference EP27518-034/gi, Apr. 22, 2010.
EP Communication for Application No. 03 011 235.3-1247, Reference EP27518-034/gi, Nov. 4, 2011.
JP Notice of Rejection for Application No. 2003-180214, Sep. 18, 2009.
RU Official Action for Application No. 2003118755/09(020028), reference 2412-127847RU/3152, May 29, 2007.
CN First Office Action for Application No. 200480012375.4, Sep. 4, 2009.
CN Second Office Action for Application No. 200480012375.4, Feb. 12, 2010.
AU Examiner's first report on patent application No. 2004288600, Jan. 18, 2010.
RU Office Action for Application No. 2005120671, reference 2412-132263RU/4102, Oct. 15, 2008.
RU Office Action for Application No. 2005120671, reference 2412-132263RU/4102, Oct. 21, 2008.
PCT International Search Report and Written Opinion for Application No. PCT/US04/23606, Apr. 27, 2005.
EP Communication for Application No. 04 778 899.7-2212, Reference EP35523RK900peu, Nov. 23, 2012.
PCT International Search Report and Written Opinion for Application No. PCT/US06/27251, reference 311888.02, Jul. 3, 2007.
CN First Office Action for Application No. 200680026251.0, Oct. 8, 2010.
Hong, “On the construction of a powerful distributed authentication server without additional key management”, Computer Communications, Nov. 1, 2000.
Managing Digital Rights in Online Publishing, “How two publishing houses maintain control of copyright” Information Management & Technology, Jul. 2001.
Jakobsson, “Proprietary Certificates”, 2002.
Kumik, “Digital Rights Management”, Computers and Law, E-commerce: Technology, Oct.-Nov. 2000.
Torrubia, “Cryptography Regulations for E-commerce and Digital Rights Management”, Computers & Security, 2001.
Zwollo, “Digital document delivery and digital rights management”, Information Services & Use, 2001.
Griswold, “A Method for Protecting Copyright on Networks”, IMA Intellectual Property Project Proceedings, 1994.
Kahn, “Deposit, Registration and Recordation in an Electronic Copyright Management System”, Coalition for Networked information, Last updated Jul. 3, 2002.
Evans, “DRM: Is the Road to Adoption Fraught with Potholes?”, 2001.
Fowler, "Technology's Changing Role in Intellectual Property Rights", IT Pro, Mar.-Apr. 2002.
Gable, “The Digital Rights Conundrum”, Transform Magazine—Information Lifecycle, Nov. 2001.
Gunter, "Models and Languages for Digital Rights", Proceedings of the 34th Hawaii International Conference on System Sciences, Jan. 3-6, 2001.
Peinado, “Digital Rights Management in a Multimedia Environment”, SMPTE Journal, Apr. 2002.
Royan, “Content Creation and Rights Management: experiences of SCRAN (the Scottish Cultural Resources Access Network)”, 2000.
Valimaki, “Digital rights management on Open and Semi-open Networks”, Proceedings of the Second IEEE Workshop on Internet Applications, Jul. 23-24, 2001.
Yu, “Digital multimedia at home and content rights management”, Proceedings 2002 IEEE 4th International Workshop on Networked Appliances, Jan. 15-16, 2002.
Hwang, “Protection of Digital Contents on Distributed Multimedia Environment”, Proceedings of the IASTED International Conference, Internet and Multimedia Systems and Applications, Nov. 19-23, 2000.
Castro, “Secure routing for structured peer-to-peer overlay networks”, Proceedings of the Fifth Symposium on Operating Systems Design and Implementation, Dec. 9-11, 2002.
Friend, “Making the Gigabit IPsec VPN Architecture Secure”, Computer, Jun. 2004.
Hulicki, “Security Aspects in Content Delivery Networks”, The 6th World Multiconference on Systemics, Cybernetics and Informatics. Jul. 14-18, 2002.
McGarvey, “Arbortext: Enabler of Multichannel Publishing”, EContent, Apr. 2002.
Moffett, “Contributing and enabling technologies for knowledge management”, International Journal Information Technology and Management, Jul. 2003.
Utagawa, “Making of card applications using IC Card OS MULTOS”, Mar. 1, 2003.
Nakajima, Do You Really Know It? Basics of Windows 2000/XP, Jan. 2004.
N+1 Network Guide, “First Special Feature, Security Oriented Web Application Development, Part 3, Method for Realizing Secure Session Management”, Jan. 2004.
CN First Office Action for Application No. 200680013409.0, Jun. 26, 2009.
CN First Office Action for Application No. 200580049553.5, Aug. 8, 2008.
CN First Office Action for Application No. 200680013372.1, Dec. 18, 2009.
Bajikar, Trusted Platform Module (TPM) based Security on Notebook PCs—White Paper, Intel Corporation, Jun. 20, 2002.
Content Protection System Architecture, A Comprehensive Framework for Content Protection, Feb. 17, 2000.
Pruneda, Windows Media Technologies: Using Windows Media Rights Manager to Protect and Distribute Digital Media, Nov. 23, 2004.
“DirectShow System Overview,” Last updated Apr. 13, 2005.
“Features of the VMR,” accessed on Nov. 9, 2005.
“Introduction to DirectShow Application Programming,” accessed on Nov. 9, 2005.
"Overview of Data Flow in DirectShow," accessed on Nov. 9, 2005.
“Plug-in Distributors,” accessed on Nov. 9, 2005.
“Using the Video Mixing Renderer,” accessed on Nov. 9, 2005.
“VMR Filter Components,” accessed on Nov. 9, 2005.
KR Office Action for Application No. 10-2008-7000503, Sep. 27, 2012.
PCT International Search Report and Written Opinion for Application No. PCT/US06/09904, reference 308715.02, Jul. 11, 2008.
CN First Office Action for Application No. 200680012462.9, Mar. 10, 2010.
JP Notice of Rejection for Application No. 2008-507668, Sep. 2, 2011.
EP Communication for Application No. 06738895.9-2202 / 1872479 PCT/US2006009904, reference F619160, Sep. 16, 2011.
KR Office Action for Application No. 10-2007-7020527, reference 308715.08, Apr. 9, 2012.
JP Final Rejection for Application No. 2008-507668, May 18, 2012.
Kassier, “Generic QOS Aware Media Stream Transcoding and Adaptation,” Department of Distributed Systems, University of Ulm, Germany. Apr. 2003.
DRM Watch Staff, “Microsoft Extends Windows Media DRM to Non-Windows Devices,” May 7, 2004.
Lee, “Gamma: A Content-Adaptation Server for Wireless Multimedia Applications,” Bell Laboratories, Holmdel NJ, USA. Published in 2003.
Ihde, “Intermediary-based Transcoding Framework,” Jan. 2001.
LightSurf Technologies, “LightSurf Intelligent Media Optimization and Transcoding,” printed Apr. 18, 2005.
Digital 5, “Media Server,” printed Apr. 18, 2005.
“Transcode”, Nov. 29, 2002. XP-002293109.
“SoX—Sound eXchange”. Last Updated Mar. 26, 2003. XP-002293110.
Britton, "Transcoding: Extending e-business to new environments", Accepted for publication Sep. 22, 2000. XP-002293153.
Britton, “Transcoding: Extending E-Business to New Environments”; IBM Systems Journal, vol. 40, No. 1, 2001.
Chandra, “Application-Level Differentiated Multimedia Web Services Using Quality Aware Transcoding”; IEEE Journal on Selected Areas of Communications, vol. 18, No. 12. Dec. 2000.
Chen, “An Adaptive Web Content Delivery System”. May 21, 2000. XP-002293303.
Chen, “iMobile EE—An Enterprise Mobile Service Platform”; AT&T Labs—Research, Wireless Networks, 2003.
Chi, “Pervasive Web Content Delivery with Efficient Data Reuse”, Aug. 1, 2002. XP-002293120.
Ripps, “The Multitasking Mindset Meets the Operating System”, Electrical Design News, Newton, MA. Oct. 1, 1990. XP 000162745.
Huang, "A Frame-Based MPEG Characteristics Extraction Tool and Its Application in Video Transcoding"; IEEE Transactions on Consumer Electronics, vol. 48, No. 3. Aug. 2002.
Lee, "Data Synchronization Protocol in Mobile Computing Environment Using SyncML"; 5th IEEE International Conference on High Speed Networks and Multimedia Communications. Chungnam National University, Taejon, Korea. 2002.
Shaha, “Multimedia Content Adaptation for QoS Management over Heterogeneous Networks”. Rutgers University, Piscataway, NJ. May 11, 2001. XP-002293302.
Shen, “Caching Strategies in Transcoding-enabled Proxy Systems for Streaming Media Distribution Networks”. Dec. 10, 2003. XP-002293154.
Singh, “PTC: Proxies that Transcode and Cache in Heterogeneous Web Client Environments”; Proceedings of the Third International Conference on Web Information Systems, 2002.
Lei, “Context-based media Adaptation in Pervasive Computing”. University of Ottawa. Ottawa, Ontario, Canada. May 31, 2001. XP-002293137.
“International Search Report and Written Opinion mailed Jan. 16, 2007”, Application No. PCT/US2006/034622, 6 pages (MS#313832.02).
“International Search Report and Written Opinion mailed Nov. 30, 2006”, Application No. PCT/US05/40950, 8 pages (MS#310475.12).
Qiao, Daji et al., "MiSer: An Optimal Low-Energy Transmission Strategy for IEEE 802.11a/h", obtained from ACM, (Sep. 2003),pp. 161-175.
“International Search Report and Written Opinion mailed Apr. 22, 2008”, Application No. PCT/US2007/087960, 7 pages (MS#318113.05).
Eren, H. et al., “Fringe-Effect Capacitive Proximity Sensors for Tamper Proof Enclosures”, Proceedings of 2005 Sensors for Industry Conference, (Feb. 2005),pp. 22-25.
“International Search Report and Written Opinion mailed Jul. 24, 2008”, Application No. PCT/US05/40966 13pages (MS#310739.02).
Schneier, B. “Applied Cryptography, Second Edition: Protocols, Algorithms, and Source Code in C (cloth)”, (Jan. 1, 1996),13 pages.
Goering, Richard “Web Venture Offers Metered Access to EDA Packages—Startup Winds Clocks by the Hour Tools (E*Cad Will Launch Web Site That Provides Pay-Per-Use and Pay-Per-Hour Access to Range of Chip Design Software)”, Electronic Engineering Times, (Nov. 6, 2000),3 pages.
Zemac, Chen et al., "A Malicious Code Immune Model Based on Program Encryption", IEEE Wireless Communication, Networking and Mobile Computing, WICOM '08, 4th International Conference on Oct. 12-14, 2008,(2008),5 pages.
Mufti, Dr. Muid et al., "Design and Implementation of a Secure Mobile IP Protocol", Networking and Communication, INCC 2004, International Conference on Jun. 11-13, 2004,(2004),5 pages.
Davida, George I., et al., “Unix Guardians: Active User Intervention in Data Protection”, Aerospace Computer Security Applications Conference, Fourth Dec. 12-16, (1988),6 pages.
Morales, Tatiana “Understanding Your Credit Score”, http://www.cbsnews.com/stories/2003/04/29/earlyshow/contributors/raymartin/main55152.shtml retrieved from the Internet on Apr. 23, 2009,3 pages.
“Achieving Peak Performance: Insights from a Global Survey on Credit Risk and Collections Practices”, GCI Group Pamphlet, (2002, 2004), 12 pages.
“Equifax Business Solutions—Manage Your Customers”, Retrieved from the Internet from http://www.equifax.com/sitePages/biz/smallBiz/?sitePage=manageCustomers on Oct. 14, 2005, 3 pages.
“Prequalification Using Credit Reports”, Retrieved from the Internet at http://www.credco.com/creditreports/prequalification.htm on Oct. 14, 2005, 2 pages.
Gao, Jerry et al., “Online Advertising—Taxonomy and Engineering Perspectives”, http://www.engr.sjsu.edu/gaojerry/report/OnlineAdvertising%20.pdf, (2002),33 pages.
Oshiba, Takashi et al., “Personalized Advertisement-Duration Control for Streaming Delivery”, ACM Multimedia, (2002),8 pages.
Yue, Wei T., et al., “The Reward Based Online Shopping Community”, Routledge, vol. 10, No. 4, (Oct. 1, 2000),2 pages.
“International Search Report and Written Opinion mailed Nov. 8, 2007”, Application No. PCT/US05/40967, 5 pages (MS#310477.18).
“International Search Report and Written Opinion”, Application Serial No. PCT/US05/40940, 9 pages (MS#312786.02), May 2, 2008.
“International Search Report and Written Opinion mailed Apr. 25, 2007”, Application No. PCT/US05/040965, 5 pages (MS#311052.02).
“International Search Report and Written Opinion mailed Sep. 25, 2006”, Application No. PCT/US05/40949, 7 pages (MS#311044.02).
“EP Office Action Mailed Nov. 17, 2006”, Application No. 05110697.9, 6 pages (MS#310474.02).
“EP Office Action mailed Apr. 5, 2007”, Application No. 05110697.9, 5 pages.
“EP Summons to Attend Oral Proceedings mailed Sep. 27, 2007” Application No. 05110697.9, 7 pages.
“Decision to Refuse a European Application mailed Feb. 15, 2008”, Application No. 05110697.9, 45 pages.
“International Search Report and Written Opinion mailed Sep. 8, 2006”, Application No. PCT/US05/040942, 20 pages (MS#309572.17).
“European Search Report mailed Dec. 6, 2010” Application No. 05820177.3, 8 pages (MS#309572.41).
Lampson, Butler et al., “Authentication in Distributed Systems: Theory and Practice”, ACM Transactions on Computer Systems, v10, 265,(1992),18 pages.
“Office Action mailed Jun. 29, 2009”, Mexican Application No. MX/a/2007/005657, 2 pages.
“Search Report Dated Jan. 11, 2008”, EP Application No. 05820090.8, 7 pages.
“Examination Report mailed Mar. 5, 2008”, EP Application No. 05820090.8, 1 page.
“First Office Action mailed Apr. 11, 2008”, Chinese Application No. 200580038813.9, 11 pages.
“Office Action mailed Jun. 29, 2009”, Mexican Application No. MX/a/2007/005656, 6 pages.
“Office Action mailed Nov. 30, 2009”, Mexican Application No. MX/a/2007/005659, 6 pages.
“Notice of Allowance mailed Jul. 2, 2010”, Mexican Application No. MX/a/2007/005659, 2 pages.
“Extended European Search Report mailed Dec. 6, 2010” EP Application No. 05820177.3, 8 pages.
“Second Office Action mailed Dec. 18, 2009”, Chinese Application No. 200580038812.4, 24 pages.
“Third Office Action mailed Apr. 1, 2010”, Chinese Application No. 200580038812.4, 9 pages.
"Notice on Grant of Patent Right for Invention mailed May 5, 2011", Chinese Application No. 200580038812.4, 4 pages.
“Office Action mailed Jul. 7, 2009”, Mexican Application No. MX/a/2007/005660, 8 pages.
“Notice of Allowance mailed Feb. 18, 2010” Mexican Application No. MX/a/2007/005660, 2 pages.
“Extended European Search Report mailed Aug. 13, 2010”, EP Application No. 05823253.9, 7 pages.
“Notice on the First Office Action mailed Sep. 27, 2010”, Chinese Application No. 200580038745.6, 6 pages.
“Office Action mailed Jul. 8, 2009” Mexican Application No. MX/a/2007/005662, 7 pages.
“Notice of Allowance mailed Feb. 19, 2010”, Mexican Application No. MX/a/2007/005662, 2 pages.
“Partial Search Report mailed Jul. 23, 2010”, EP Application No. 05821183.0.
“Extended European Search Report mailed Jan. 7, 2011”, EP Application No. 05821183.0, 9 pages (MS#309572.57).
“Notice of Allowance mailed Dec. 25, 2009”, Chinese Application No. 200580038773.8, 4 pages.
“Office Action mailed Jun. 26, 2009”, Mexican Application No. MX/a/2007/005655, 5 pages.
"Office Action mailed Feb. 9, 2010", Mexican Application No. MX/a/2007/005655, 6 pages.
“Office Action mailed Sep. 24, 2010”, Mexican Application No. MX/a/2007/005655, 3 pages.
“Extended European Search Report mailed Jan. 21, 2010” EP Application No. 05819896.1 8 pages (MS#309572.65).
“Office Action mailed Mar. 19, 2010”, EP Application No. 05819896.1, 1 page.
“Office Action mailed Feb. 10, 2010”, Mexican Application No. MX/a/2007/005656, 5 pages.
“Office Action mailed Oct. 18, 2010” Mexican Application No. MX/a/2007/005656, 3 pages.
"Notice on the First Office Action mailed Jul. 30, 2010", Chinese Application No. 200680033207.2, 7 pages.
“EP Search Report mailed Jan. 2, 2008”, EP Application No. 05109616.2, 7 pages (MS#310416.05).
“Flonix: USB Desktop OS Solutions Provider, http://www.flonix.com”, Retrieved from the Internet Jun. 1, 2005, (Copyright 2004),2 pages.
“Migo by PowerHouse Technologies Group, http://www.4migo.com” Retrieved from the Internet Jun. 1, 2005, (Copyright 2003),3 pages.
“WebServUSB, http://www.webservusb.com”, Retrieved from the Internet Jun. 1, 2005, (Copyright 2004),16 pages.
“Notice of Rejection mailed Jul. 8, 2011”, Japanese Application No. 2007-541363, 10 pages (MS#310477.22).
"Notice of Rejection mailed Aug. 5, 2011", Japanese Patent Application No. 2007-552142, 8 pages (MS#310522.06).
"Forward Solutions Unveils Industry's Most Advanced Portable Personal Computing System on USB Flash Memory Device", Proquest, PR Newswire, http://proquest.umi.com/pqdweb?index=20&did=408811931&SrchMode=1&sid=6&Fmt=3, Retrieved from the Internet Feb. 15, 2008,(Sep. 22, 2003),3 pages.
“Office Action mailed May 26, 2008”, EP Application No. 05109616.2, 5 pages (MS#310416.05).
“Notice on Division of Application mailed Aug. 8, 2008”, CN Application No. 200510113398.0, (Aug. 8, 2008),2 pages.
“Notice on First Office Action mailed Dec. 12, 2008”, CN Application No. 200510113398.0.
“The Second Office Action mailed Jul. 3, 2009”, CN Application No. 200510113398.0, 7 pages.
“Notice on Proceeding with the Registration Formalities mailed Oct. 23, 2009”, CN Application No. 200510113398.0, 4 pages.
“Examiner's First Report on Application mailed Jun. 4, 2010”, AU Application No. 2005222507, 2 pages.
“Notice of Acceptance mailed Oct. 14, 2010”, AU Application No. 2005222507, 3 pages.
“Decision on Grant of a Patent for Invention mailed Apr. 29, 2010”, Russian Application No. 2005131911, 31 pages.
"Notice of Allowance mailed Nov. 13, 2009", MX Application No. PA/a/2005/011088, 2 pages.
“TCG Specification Architecture Overview”, Revision 1.2, (Apr. 28, 2004),55 pages.
“International Search Report and Written Opinion mailed Jun. 19, 2007”, PCT Application No. PCT/US05/46091, 11 pages (MS#310476.02).
“Notice on Grant of Patent Right for Invention mailed Jan. 29, 2010” CN Application No. 200580040764.2, 4 pages.
“International Search Report mailed Jan. 5, 2007”, Application No. PCT/US2006/032708, 3 pages (MS#313706.02).
"Cyotec—CyoLicence", printed from www.cyotec.com/products/cyolicence on Sep. 7, 2005, (Copyright 2003-2005).
"Magic Desktop Automation Suite for the Small and Mid-Sized Business", printed from www.remedy.com/solutions/magic_it_suite.htm on Sep. 7, 2005, (Copyright 2005),4 pages.
“Pace Anti-Piracy Introduction”, printed from www.paceap.com/psintro.html on Sep. 7, 2005, (Copyright 2002),4 pages.
“Office Action mailed Jul. 6, 2009”, MX Application No. MX/a/2007/005661, 6 pages.
“Office Action mailed Oct. 1, 2010”, MX Application No. MX/a/2007/005661, 3 pages.
“Office Action mailed Mar. 8, 2011”, MX Application No. MX/a/2007/005661, 8 pages.
“Notice on Second Office Action mailed Jun. 7, 2010”, CN Application No. 200680030846.3, 6 pages.
“Decision on Rejection mailed Sep. 13, 2010”, CN Application No. 200680030846.3, 5 pages.
Kwok, Sai H., "Digital Rights Management for the Online Music Business", ACM SIGecom Exchanges, vol. 3, No. 3, (Aug. 2002),pp. 17-24.
“International Search Report and Written Opinion mailed Mar. 21, 2007”, Application No. PCT/US05/46223, 10 pages (MS#310521.02).
“The First Office Action mailed Oct. 9, 2009”, CN Application No. 200580043102.0, 20 pages.
“International Search Report and Written Opinion mailed Jul. 9, 2008” Application No. PCT/US05/46539, 11 pages (MS#310522.02).
“Notice of the First Office Action mailed Dec. 29, 2010”, CN Application No. 200580044294.7, 9 pages.
"Office Action mailed Jul. 1, 2009", MX Application No. MX/a/2007/007441.
“European Search Report mailed Aug. 31, 2011”, EP Application No. 05855148.2, 6 pages (MS#310522.10).
“International Search Report and Written Opinion mailed Sep. 25, 2007”, Application No. PCT/US06/12811, 10 pages (MS#311045.02).
“Examiner's First Report mailed Sep. 15, 2009” AU Application No. 2006220489, 2 pages.
“Notice of Acceptance mailed Jan. 25, 2010”, AU Application No. 2006220489, 2 pages.
“The First Office Action mailed Aug. 22, 2008”, CN Application No. 200680006199.2, 23 pages.
“The Second Office Action mailed Feb. 20, 2009” CN Application No. 200680006199.2, 9 pages.
“The Fourth Office Action mailed Jan. 8, 2010”, CN Application No. 200680006199.2, 10 pages.
“The Fifth Office Action mailed Jul. 14, 2010”, CN Application No. 200680006199.2, 6 pages.
“Notice on Grant of Patent mailed Oct. 20, 2010”, CN Application No. 200680006199.2, 4 pages.
“First Office Action mailed Aug. 21, 2009”, CN Application No. 200680030846.3, 8 pages.
“Notice on the First Office Action mailed Dec. 11, 2009”, CN Application No. 200510127170.7, 16 pages.
“The Third Office Action mailed Jun. 5, 2009”, CN Application No. 200680006199.2, 7 pages.
“Notice of Rejection mailed Sep. 9, 2011”, JP Application No. 2007-548385, 9 pages (MS#310476.06).
“Notice of Rejection mailed Nov. 11, 2011”, Japanese Application No. 2005-301957, 21 pages (MS#310416.06).
“Extended European Search Report mailed Dec. 21, 2011”, EP Application No. 05854752.2, 7 pages (MS#310476.10).
“Final Rejection mailed Jan. 17, 2012” Japan Application No. 2007-552142, 8 pages (MS#310522.06).
“EP Office Action mailed Mar. 8, 2012”, EP Application No. 05109616.2, 6 pages (MS#310416.05).
"Notice of Preliminary Rejection mailed May 30, 2012", Korean Patent Application No. 10-2007-7011069, 1 page (MS#310477.23).
“Extended European Search Report mailed Jul. 5, 2012” EP Application No. 05851550.3 (MS#310477.26) 6 pages.
“Preliminary Rejection mailed Jul. 4, 2012”, Korean Application No. 10-2007-7012294, 2 pages (MS#310476.07).
“Office Action mailed Jun. 8, 2012”, JP Application No. 2005-301957, 8 pages (MS#310416.06).
JP Notice of Rejection for Application No. 2009-288223, Jun. 29, 2012.
EP Communication for Application No. 11007532.2-1247 / 2492774, Reference EP27518ITEjan, Aug. 3, 2012.
Abbadi, “Digital Rights Management Using a Mobile Phone”; Aug. 19-22, 2007, ICEC '07 Proceedings of the ninth international conference on Electronic commerce.
PCT International Search Report and Written Opinion for Application No. PCT/US06/26915, reference 313859.03, Oct. 17, 2007.
CN First Office Action for Application No. 200680025136.1, Apr. 24, 2009.
JP Notice of Rejection for Application No. 2008-521535, Jun. 10, 2011.
JP Notice of Rejection for Application No. 2008-521535, Sep. 27, 2011.
KR Preliminary Rejection for Application No. 10-2008-7000503, Reference 313859.07, Sep. 27, 2012.
Aviv, “Aladdin Knowledge Systems Partners with Rights Exchange, Inc. to Develop a Comprehensive Solution for Electronic Software Distribution,” Aug. 3, 1998.
Amdur, “Metering Online Copyright,” Jan. 16, 1996.
Amdur, “InterTrust Challenges IBM Digital Content Metering; Funding, Name Change, Developer Kit Kick Off Aggressive Market Push”, Report On Electronic Commerce, Jul. 23, 1996.
Armati, “Tools and standards for protection, control and presentation of data,” Last updated Apr. 3, 1996.
Benjamin, “Electronic Markets and Virtual Value Chains on the Information Superhighway,” Sloan Management Review, Winter 1995.
Cassidy, "A Web developer's guide to content encapsulation technology; New tools offer clever ways to distribute your programs and stories, and get paid for it", Apr. 1997.
Clark, “Software Secures Digital Content on Web”, Interactive Week, Sep. 25, 1995.
Cox, "Superdistribution", Idees Fortes, Wired, Sep. 1994.
Cox, “What if there is a silver bullet”, J. Object Oriented Program, Jun. 1992.
Hauser, “Does Licensing Require New Access Control Techniques?” Aug. 12, 1993.
Hudgins-Bonafield, “Selling Knowledge on the Net; Container Consortium Hopes to Revolutionize Electronic Commerce,” Network Computing, Jun. 1, 1995.
“IBM spearheading intellectual property protection technology for information on the Internet,” May 1, 1997.
“Technological Solutions Rise to Complement Law's Small Stick Guarding Electronic Works; Vendors fight to establish beachheads in copy-protection field,” Information Law Alert, Jun. 16, 1995.
Kaplan, “IBM Cryptolopes, SuperDistribution and Digital Rights Management,” Dec. 30, 1996.
Kent, “Protecting Externally Supplied Software in Small Computers,” Sep. 1980.
Kohl, “Safeguarding Digital Library Contents and Users; Protecting Documents Rather Than Channels,” D-Lib Magazine, Sep. 1997.
Linn, "Copyright and Information Services in the Context of the National Research and Education Network," IMA Intellectual Property Project Proceedings, Jan. 1994.
McNab, “Superdistribution works better in practical applications,” Mar. 2, 1998.
Moeller, “NetTrust lets cyberspace merchants take account,” PC Week, Nov. 20, 1995.
Moeller, “IBM takes charge of E-commerce; Plans client, server apps based on SET,” Apr. 29, 1996.
Pemberton, "An ONLINE interview with Jeff Crigler at IBM InfoMarket," Jul. 1996.
“Licensit: kinder, gentler copyright? Copyright management system links content, authorship information,” Seybold Report on Desktop Publishing, Jul. 8, 1996.
Sibert, “The DigiBox: A Self-Protecting Container for Information Commerce,” First USENIX Workshop on Electronic Commerce, Jul. 11-12, 1995.
Sibert, “Securing the Content, Not the Wire, for Information Commerce,” Jul. 1995.
Smith, "A New Set of Rules for Information Commerce; Rights-protection technologies and personalized-information commerce will affect all knowledge workers," Electronic Commerce, Nov. 6, 1995.
Stefik, “Trusted Systems; Devices that enforce machine-readable rights to use the work of a musician or author may create secure ways to publish over the Internet,” Scientific American, Mar. 1997.
Stefik, “Technical Perspective; Shifting the Possible: How Trusted Systems and Digital Property Rights Challenge Us to Rethink Digital Publishing,” Berkeley Technology Law Journal, Spring 1997.
Tarter, “The Superdistribution Model,” Soft Letter: Trends & Strategies in Software Publishing, Nov. 15, 1996.
Secor, “Rights Management in the Digital Age: Trading in Bits, Not Atoms,” Spring 1997.
Weber, “Digital Right Management Technology,” A Report to the International Federation of Reproduction Rights Organisations, Oct. 1995.
White, “ABYSS: An Architecture for Software Protection,” IEEE Transactions On Software Engineering, Jun. 1990.
White, “ABYSS: A Trusted Architecture for Software Protection,” IEEE Symposium on Security and Privacy, Apr. 27-29, 1987.
“Boxing Up Bytes”. No publication date available. This reference was cited in U.S. Appl. No. 09/892,371 on Mar. 22, 2002.
Ramanujapuram, “Digital Content & Intellectual Property Rights: A specification language and tools for rights management,” Dr. Dobb's Journal, Dec. 1998.
CN Notice on Reexamination for Application No. 200680025136.1, Jun. 17, 2013.
KR Notice of Final Rejection for Application No. 10-2007-7024145, Reference No. 313361.12, Oct. 23, 2012.
KR Notice of Preliminary Rejection for Application No. 2007-7023842, Reference No. 313361.06, Oct. 24, 2012.
“Black Box Crypton defies the hackers”, Electronics Weekly, Mar. 6, 1985.
Business Wire, “Aladdin Acquires the Assets of Micro Macro Technologies”, Mar. 3, 1999.
Computergram International, “BreakerTech Joins Copyright Management Market”, Aug. 5, 1999.
ARM, “Optimising license checkouts from a floating license server”, ARM Technical Support Knowledge Articles, Published on or before Dec. 20, 2003.
Blissmer, “Next step is encryption: Data security may be bundled with Next's operating system”, Electronic Engineering Times, Feb. 3, 1992.
Stevens, “How Secure is your Computer System?”, The Practical Accountant, Jan. 1998.
Olson, “Concurrent Access Licensing”, UNIX Review, Sep. 1988.
PR Newswire, “Sony Develops Copyright Protection Solutions for Digital Music Content”, Feb. 25, 1999.
“Solution for Piracy”, Which Computer?, Nov. 1983.
Gold, “Finland—Data Fellows Secures ICSA Certification”, Newsbytes, Jan. 7, 1998.
Thompson, “Digital Licensing”, IEEE Internet Computing, Jul.-Aug. 2005.
Ahuja, “The Key to Keys”, Dataquest, Aug. 31, 1997.
Malamud, “Network-Based Authentication: The Key to Security”, Network Computing, Jun. 1991.
Kopeikin, “Secure Trading on the Net”, Telecommunications, Oct. 1996.
Information Week, “The New Network: Planning and Protecting Intranet Electronic Commerce”, Dec. 2, 1996.
Chin, “Reaching Out to Physicians”, Health Data Management, Sep. 1998.
Finnie, “Suppliers Cashing In on the Internet”, Communications Week International, Nov. 14, 1994.
Bank, “Postal Service Announces Plan to put Postmarks on Electronic Mail”, San Jose Mercury News, Apr. 9, 1995.
Dawson, “S-A Unveils Security System”, Broadband Week, Jan. 15, 1996.
Metropolitan Computer Times, “Bankard Set To Intro Virtual Shopping in Philippines”, Newsbytes News Network, Apr. 16, 1997.
Rouvroy, “Reconfigurable Hardware Solutions for the Digital Rights Management of Digital Cinema”, Proceedings of the 2004 ACM Workshop on Digital Rights Management, Oct. 25, 2004.
Housley, "Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile", Network Working Group, Apr. 2002.
Housley, “Metering: A Pre-pay Technique”, SPIE Proceedings vol. 3022, Storage and Retrieval for Image and Video Databases V, Jan. 15, 1997.
Ogata, “Provably Secure Metering Scheme”, Proceedings of the 6th International Conference on the Theory and Application of Cryptology and Information Security, Dec. 3-7, 2000.
Kim, “A Secure and Efficient Metering Scheme for Internet Advertising”, Journal of KIISE: Computer Systems and Theory, vol. 29, Issue 3, 2002.
Stallings, “Network and Internetwork Security Principles and Practice”, Prentice-Hall, Inc., p. 136, Jan. 1995.
Linn, "Privacy Enhancement for Internet Electronic Mail: Part I: Message Encryption and Authentication Procedures", Network Working Group, Feb. 1993.
Kaliski, “Privacy Enhancement for Internet Electronic Mail: Part IV: Key Certification and Related Services”, Network Working Group, Feb. 1993.
Backman, “Smartcards: The Intelligent Way to Security”, Network Computing, May 15, 1998.
“Concatenate”, Free On-Line Dictionary of Computing, Dec. 22, 1995.
Google Groups, “How to Prevent copying DB application to other machines”, Dec. 22, 1998.
Garfield, “Internet Dynamics First to Ship Integrated Security Solution for Enterprise Intranets and Extranets; Conclave Accelerates Enterprise Deployment of Secure, High-Value Intranets and Extranets”, Business Wire, Sep. 15, 1997.
Carozza, “Cylink: Public-Key Security Technology Granted to the Public; Cylink Announces the Renowned Diffie-Hellman Public-Key Technology Has Entered the Public Domain”, Business Wire, Sep. 16, 1997.
Linetsky, “Programming Microsoft DirectShow”, Wordware Publishing, Inc., Oct. 2001.
Pesce, “Programming Microsoft DirectShow for Digital Video and Television”, Microsoft Press, Apr. 16, 2003.
KR Notice of Preliminary Rejection for Application No. 10-2007-7023842, Apr. 18, 2012.
KR Preliminary Rejection for Application No. 10-2007-7024156, Jul. 30, 2012.
KR Notice of Preliminary Rejection for Application No. 10-2007-7024145, Jan. 17, 2012.
TW Search Report for Application No. 094130187, Jul. 27, 2012.
U.S. Appl. No. 60/673,979, filed Apr. 22, 2005, David J. Marsh.
U.S. Appl. No. 11/116,598, filed Apr. 27, 2005, Sumedh N. Barde.
U.S. Appl. No. 11/227,045, filed Sep. 15, 2005, David J. Marsh.
U.S. Appl. No. 11/202,840, filed Aug. 12, 2005, David J. Marsh.
U.S. Appl. No. 11/202,838, filed Aug. 12, 2005, Kenneth Reneris.
U.S. Appl. No. 11/191,448, filed Jul. 28, 2005, Sumedh N. Barde.
U.S. Appl. No. 12/390,505, filed Feb. 23, 2009, Sumedh N. Barde.
U.S. Appl. No. 09/525,510, filed Mar. 15, 2000, Marcus Peinado.
U.S. Appl. No. 11/866,041, filed Oct. 2, 2007, Marcus Peinado.
U.S. Appl. No. 10/178,256, filed Jun. 24, 2002, Glenn F. Evans.
U.S. Appl. No. 11/275,991, filed Feb. 8, 2006, Glenn F. Evans.
U.S. Appl. No. 11/275,990, filed Feb. 8, 2006, Glenn F. Evans.
U.S. Appl. No. 11/275,993, filed Feb. 8, 2006, Glenn F. Evans.
U.S. Appl. No. 11/938,707, filed Nov. 12, 2007, Glenn F. Evans.
U.S. Appl. No. 60/513,831, filed Oct. 23, 2003, Chadd Knowlton.
U.S. Appl. No. 10/820,666, filed Apr. 8, 2004, Geoffrey Dunbar.
U.S. Appl. No. 10/820,673, filed Apr. 8, 2004, James M. Alkove.
U.S. Appl. No. 11/870,837, filed Oct. 11, 2007, Geoffrey Dunbar.
U.S. Appl. No. 10/838,532, filed May 3, 2004, James M. Alkove.
U.S. Appl. No. 10/798,688, filed Mar. 11, 2004, James M. Alkove.
U.S. Appl. No. 12/715,529, filed Mar. 2, 2010, James M. Alkove.
U.S. Appl. No. 10/968,462, filed Oct. 18, 2004, Benjamin Brooks Cutter.
U.S. Appl. No. 11/018,095, filed Dec. 20, 2004, Amit Jain.
U.S. Appl. No. 13/367,198, filed Feb. 6, 2012, Amit Jain.
U.S. Appl. No. 11/108,327, filed Apr. 18, 2005, Amit Jain.
U.S. Appl. No. 11/184,555, filed Jul. 19, 2005, Adil A. Sherwani.
U.S. Appl. No. 11/129,872, filed May 16, 2005, Darryl E. Havens.
U.S. Appl. No. 60/698,525, filed Jul. 11, 2005, Scott J. Fierstein.
U.S. Appl. No. 11/276,496, filed Mar. 2, 2006, Scott J. Fierstein.
U.S. Appl. No. 11/179,013, filed Jul. 11, 2005, Gareth Howell.
U.S. Appl. No. 13/016,686, filed Jan. 28, 2011, Gareth Howell.
Related Publications (1)
Number Date Country
20060248594 A1 Nov 2006 US
Provisional Applications (1)
Number Date Country
60673979 Apr 2005 US