This application pertains to computer virtualization, and more particularly, to determining the integrity of one or more virtual machines and their associated components.
Businesses are making tremendous investments in computer hardware and data centers. Meanwhile, the costs associated with powering and cooling the data centers are steadily increasing. To make matters worse, data center real estate is at a premium while demand relentlessly expands for more computer hardware to produce the sheer processing power necessary to meet the complex and growing needs of the businesses. Juxtaposed against the need for more computer hardware and larger data centers is a troubling statistic: on average, only 8-12% of the processing power of any given machine used in a data center is active, while the processors remain essentially idle the rest of the time.
For example, large batch processing machines used by banks are configured to run large batches of reconciliations. But when a machine is not performing the batches of reconciliations, it may in essence be “wasting” processing power until another batch of reconciliations begins, or until the machine is removed or powered off for maintenance. The wasted processing power results in bloated information technology budgets and an overall increase of costs to the businesses.
Virtualization of computer resources is changing the face of computing by offering a way to make use of the idling machines to a higher degree. Virtualization is a broad term that refers to the abstraction of computer resources. In other words, physical characteristics of computing resources may be hidden from the way in which other systems, applications, or end users interact with those resources. The most basic use of virtualization involves reducing the number of servers by increasing the utilization levels of a smaller set of machines. This includes making a single physical resource such as a server or storage device appear to function as one or more logical resources. Additionally, it can make one or more physical resources appear as a single resource. For instance, if a server's average utilization is only 15%, deployment of multiple virtual machines onto that server has the potential to increase the overall utilization by a factor of 5 or more. Thus, not only is the usage of each machine more efficiently managed, but the usability of the system as a whole is also enhanced.
While the virtualization of computer resources promises to deliver many benefits, there are worrisome problems that lurk beneath the surface of this new and exciting computing trend. A virtual machine may be a single instance of a number of discrete identical execution environments on a single computer, each of which runs an operating system (OS). These virtual machines act as individual computing environments and therefore are subject to many of the same operating deficiencies found in standard physical computing environments. The virtual machines can be configured improperly, often by well-intentioned technicians or operators, and then broadly deployed. Operating systems, applications, and configurations can be modified from the expected state, thereby creating a drift between the expected and actual machine configuration.
Additionally, the lifecycle of a virtual machine can vary widely depending upon the specific operation for which it was provisioned and intended. No longer must a physical server be dedicated to running a monthly task (such as billings and reconciliations). A virtual machine can be provisioned with the same OS, applications, and configurations and placed into physical storage until it is ready to execute. Once copied to a physical machine, it can be executed, perform its monthly cycle functions, and then be shut down and returned to storage. In this way, virtual machines may be used much like physical servers are today, but may operate less frequently, e.g., running for just hours or minutes at a time rather than months or years, as was often the case with a physical server. As a result, no longer are the auditors, technicians, or other operators able to sit down at a specific physical server that is dedicated to a specific task or group of transactions. Instead, virtual resources of an entire data center are used to perform the transactions. It is therefore difficult to know which physical server ran which transaction, what its state was, whether correct software was being used, whether correct controls were in place, whether applicable regulatory requirements were complied with, and so forth.
Another problem that threatens the viability of the virtualization movement is that of access control, security, and data integrity. Whereas before, gaining access to a data center most often required interaction with physical servers, buildings, and people, in a virtualized environment, such safeguards are lessened. For example, before virtualization, adding a physical server to a data center involved somebody swiping an access card or other security measure to allow access to the data center, carrying a box into the data center under the supervision of other IT professionals or building managers, and installing the physical server into a rack. With the advent of virtualization, theoretically a person can sit in a remote location and install a new server into the virtualized environment without ever needing to physically access the data center. Thus, the ability to control the data center environment is diminished. And while malicious activity accounts for only about 3-5% of data center issues, most data center issues are caused by well-intentioned people who are either inadequately trained or make honest mistakes, thereby leading to system or component failures, which can sometimes be very severe—even catastrophic.
Accordingly, a need remains for a way to identify and authenticate the integrity of virtual machines and their components. The present application addresses these and other problems associated with the prior art.
The present application includes a method and system for verifying the integrity of virtual machines and for verifying the integrity of discrete elements of the virtual machines throughout the lifecycle of the virtual machines. The system can include a machine, a virtual machine manager capable of managing one or more virtual machine images installed on the machine, an integrity reference component configured to store a plurality of virtual machine integrity records, and an integrity verification component communicatively coupled to the virtual machine manager and the integrity reference component, the integrity verification component configured to compare a digest of said one or more virtual machine images to a digest of at least one of said plurality of virtual machine integrity records accessible from the integrity reference component.
The foregoing and other features, objects, and advantages of the invention will become more readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings.
To solve the problems in the prior art, an embodiment of the invention provides a method and system for verifying the integrity of virtual machines in a virtual machine environment. A basic use of virtualization involves reducing the number of physical machines or servers by increasing the utilization levels of a smaller set of physical machines or servers. Virtualization enables administrators to perform this consolidation by treating each physical machine as one or more virtual machines. As a result, there are fewer physical machines to support, which use less rack space and result in reduced power consumption. In addition, virtualization provides an opportunity for administrators to homogenize the physical machine hardware platforms while still running disparate operating systems and applications, including legacy operating systems and applications that might not be usable on more current hardware platforms without a virtualization layer. Further, existing physical machine hardware can be repurposed without modifying the underlying hardware platforms. Virtualization also provides for simpler disaster recovery protection of data because enterprise systems required for business continuity can be deployed into any data center built on virtualized resources, regardless of whether the physical machine hardware platforms are identical.
A virtual machine manager (VMM), also referred to as a “Hypervisor,” executes above the physical machine hardware and can provide the base functionality for accessing devices and memory of the physical machine. The VMM is also responsible for loading and controlling virtual machines, also referred to as virtual machine images. The VMM can control the virtual machines' access to system resources, and can schedule execution cycles in the processor. The VMM can ensure that each virtual machine is sufficiently isolated so that a failure in any one of the virtual machines will not affect the ability of any other virtual machine to execute and continue operation.
A virtual machine image normally appears as a single file, or related set of files, on a normal underlying file system. The structure of the virtual machine image is such that internally it can represent a full file system for a given platform. Each virtual machine image can be dedicated to a particular task such as operating a web interface, a database, or a payment processor, among other possibilities. In other words, logical functions of a business can be separated into virtual machines and executed separately. For example, consider an e-commerce storefront that serves up many different pages of a catalog and controls a shopping cart that users can add items to. Until a user actually purchases an item, a payment processor virtual machine would remain mostly idle, consuming little to no execution resources. Once a consumer decides to purchase the items in the shopping cart, the payment processor virtual machine can be given execution cycles by the VMM and can process the transaction. Other examples include virtual machines used for bank or financial institution reconciliations, aircraft control system operations, or weather tracking systems, among many other possibilities.
The lifecycle of a virtual machine image includes various states. For example, a virtual machine image can be created, started, suspended, stopped, migrated, or destroyed. One factor of concern in the execution of virtual machines is the quality of the image as it is loaded from storage into the execution environment. Conventionally, virtual machine images are loaded from a storage location (such as a hard disk drive, memory, USB peripheral, etc.), and executed directly by the VMM, which has no expectation or understanding of the quality (i.e., trustworthiness or integrity) of the virtual machine image or of its contents.
Since the virtual machine is loaded from the storage location, the virtual machine image may not be compliant with expected settings and configurations required for proper execution in a given environment. The virtual machine image itself could be corrupted or even maliciously augmented (perhaps by an insider). Since a virtual machine image can be stored as a complete execution-capable environment, it is feasible that another user or system could access the virtual machine, execute it, and change its state by adding software or modifying its configuration, and then replace it back in the original storage location. If such actions are performed by authorized administrators making authorized changes, such changes would be acceptable. However, the opportunity for unauthorized or unexpected changes exists. As previously mentioned, most of the data center issues are caused by well-intentioned people who are either inadequately trained or make honest mistakes, thereby leading to system or component failures. In other words, changes can be made by both legitimate and illegitimate users. Thus, the original virtual machine image might not be in its original or pristine state.
According to some embodiments of the present invention, an integrity verification component can be communicatively coupled to the VMM or integrated within the VMM to perform a one-way cryptographic hashing function over the virtual machine image. The resulting hash, also referred to herein as a “digest,” can be compared to virtual machine integrity records, which include known good reference values (i.e., known good digests) stored locally in an integrity reference component, or alternatively stored remotely in an integrity reference component accessible over a network. As a result, throughout the course of its lifecycle, the virtual machine image can be verified to be in an expected state for the given environment.
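The digest computation and comparison described above can be sketched in Python. This is an illustrative sketch only: the choice of SHA-256, the function names, and the in-memory reference store are assumptions for the example, not part of the described system.

```python
import hashlib

def compute_digest(image_bytes: bytes) -> str:
    """Apply a one-way cryptographic hashing function over a virtual
    machine image, producing a hex-encoded digest."""
    return hashlib.sha256(image_bytes).hexdigest()

def verify_image(image_bytes: bytes, known_good_digests: set) -> bool:
    """Compare the image's digest to the known good reference values
    (here, a simple in-memory stand-in for the integrity reference
    component)."""
    return compute_digest(image_bytes) in known_good_digests

# Register a pristine image in the reference store, then verify a copy.
pristine = b"example-vm-image-contents"
reference_store = {compute_digest(pristine)}
print(verify_image(pristine, reference_store))         # -> True (unmodified)
print(verify_image(pristine + b"!", reference_store))  # -> False (altered)
```

Because the hash is one-way, any modification to the image—however small—produces a different digest, so a mismatch against the reference store signals that the image is no longer in its expected state.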
Integrity verification component 105 can also be communicatively coupled to integrity reference component 125, which can store virtual machine integrity records 130 having known good digests 135. Prior to deployment of a virtual machine image 120, integrity verification component 105 can verify the integrity of virtual machine image 120 and create a hash or digest of virtual machine image 120 while in a known good state so as to facilitate the creation of a trusted library of known good reference values, such as those stored as virtual machine integrity records 130 having digests 135 in the integrity reference component 125. Integrity verification component 105 can verify the integrity of a software stack used to create virtual machine images 120 prior to creation of virtual machine images 120. Integrity reference component 125, including virtual machine integrity records 130 and digests 135, can also be digitally signed by an integrity reference provider (not shown).
After deployment of virtual machine images 120, the integrity verification component 105 can be configured to collect measurements, such as a digest, from one or more of the virtual machine images 120 and compare the digest to a digest 135 of at least one of the virtual machine integrity records 130 accessible from integrity reference component 125. Alternatively, integrity verification component 105 can generate the digest based on measurements collected from virtual machine images 120, and compare the generated digest to a digest 135 of at least one of the virtual machine integrity records 130. Integrity verification component 105 can then generate a trust score for one or more of the virtual machine images 120 responsive to the comparison. The trust score can further be generated based on an authenticity score authenticating a source of the collected measurements. Authenticity is an extension of integrity whereby the contents of the integrity reference component 125 also contain an indicator (not shown) of the source of the information derived from the measurements and stored in the integrity reference component 125 (such as in the form of virtual machine integrity records 130), thereby attesting to the origin of the information. Once the trust score has been generated, a determination can be made whether to grant or deny the virtual machine images 120 access to a given virtualized environment based on the trust score.
A trust score is an indication of whether a computer system is trustworthy. Trust scores can be generated in many different ways. In one embodiment, the trust score is the ratio of the number of validated modules on the computer system to the total number of modules on the computer system (validated or not). In another embodiment, the trust score can be scaled to a number between 0 and 1000, where 0 represents a completely untrustworthy computer system, and 1000 represents a completely trustworthy computer system. In yet another embodiment, critical modules can be weighted more highly than other modules, so that a computer system with more validated critical modules can score more highly than a computer system with few validated critical modules, even if the second computer system has more total modules validated. (The definition of “critical” is not intended to refer to modules that are absolutely necessary as much as modules that are identified as important to the organization. Thus, one organization might consider the files relating to the operating system to be “critical”, whereas another organization might consider modules that are custom developed internally (for whatever purpose) to be “critical”.)
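The weighted-scoring embodiment above can be sketched as follows. The specific weight for critical modules, the 0–1000 scale, and the function name are illustrative assumptions; the scoring formula is just one possible realization of the ratio-with-weighting approach described.

```python
def trust_score(modules, critical_weight=5, scale=1000):
    """Score a system from (validated, critical) flags for each module.

    Critical modules carry more weight than others, and the result is
    scaled to 0..scale, where 0 represents a completely untrustworthy
    system and `scale` a completely trustworthy one."""
    total = sum(critical_weight if critical else 1 for _, critical in modules)
    if total == 0:
        return 0
    validated = sum(critical_weight if critical else 1
                    for ok, critical in modules if ok)
    return round(scale * validated / total)

# Two systems, each with one validated module out of two: the system
# whose validated module is critical scores higher.
print(trust_score([(True, True), (False, False)]))  # -> 833
print(trust_score([(False, True), (True, False)]))  # -> 167
```

Note how the weighting captures the point made above: a system with its critical modules validated outscores one with only non-critical modules validated, even when the raw counts are equal.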
Integrity reference component 125 can be locally accessible or directly attached to the integrity verification component, as shown in
While the physical hardware platform/machine 115 of
Integrity verification component 105 can be integrated within VMM 110. Alternatively, integrity verification component 105 can exist as a sub-process having security privileges at least as high as security privileges for VMM 110. In addition, integrity verification component 105 can exist as an integrated physical component of the physical hardware platform/machine 115.
For advanced functions that would enhance performance, or for verifying smaller known sets of applications, a protected and secured version of integrity reference component 125 can be used as a known good manifest of acceptable measurements (not shown). The manifest can be stored locally to the enterprise (for example, on some other physical machine accessible from machine 115 via network 205), or on machine 115 itself. This manifest can be updated from the integrity reference component 125 as needed, when the integrity reference component is updated with additional virtual machine integrity records 130 and digests 135.
It is not necessary to collect measurements 410 for every discrete virtual machine image element 415. Measurement agents 405 can be configured to collect measurements for only important discrete virtual machine image elements 415, however “important” is defined. For example, the important discrete virtual machine image elements 415 can include expected-to-be-static elements of virtual machine image 120 (on the premise that if the static elements change, the virtual machine has potentially been compromised), or the expected-to-be-dynamic elements of virtual machine image 120 (on the premise that the changing elements are the ones that might compromise the virtual machine).
Integrity verification component 105 can compare collected measurements 410 to at least one of the virtual machine integrity records 130 of integrity reference component 125. As previously discussed above, integrity verification component 105 can generate a trust score for one or more virtual machine images 120 responsive to a comparison of a hash or digest of a virtual machine image 120 itself to a digest 135 of a virtual machine integrity record 130 stored in the integrity reference component 125. Furthermore, integrity verification component 105 can generate a trust score for at least one of the discrete virtual machine image elements 415 of virtual machine images 120. The trust score can also be generated based on both the comparison of the digest of virtual machine image 120 itself, and on the comparison of digests of discrete virtual machine image elements 415 of virtual machine images 120 that can be collected using measurement agents 405. In both cases, integrity verification component 105 can generate the trust score using an authenticity score authenticating a source of collected measurements 410, as previously described above.
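Per-element measurement and comparison might be sketched as below. The element names, the use of SHA-256, and the match-fraction scoring rule are hypothetical choices for the example; the described system leaves these details open.

```python
import hashlib

def collect_measurements(elements):
    """Measurement-agent sketch: digest each discrete image element.

    `elements` maps an element name (e.g. a file within the image) to
    its bytes; passing only the 'important' elements restricts
    collection, as described above."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in elements.items()}

def element_trust_score(measured, reference, scale=1000):
    """Score an image by the fraction of reference elements whose
    collected digest matches the stored known good digest."""
    if not reference:
        return 0
    matches = sum(1 for name, digest in reference.items()
                  if measured.get(name) == digest)
    return round(scale * matches / len(reference))

# One of two measured elements has drifted from its reference digest.
reference = collect_measurements({"kernel": b"v1", "config": b"a=1"})
drifted = collect_measurements({"kernel": b"v1", "config": b"a=2"})
print(element_trust_score(drifted, reference))  # -> 500
```

In this sketch a score below the maximum pinpoints drift at the element level, which is finer-grained information than a single whole-image digest mismatch provides.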
Integrity reference component 125 can also include metadata 160 to establish relationships between discrete virtual machine image elements 415. For example, metadata 160 can include version or vendor information of discrete virtual machine image elements 415, or other information indicating how the discrete virtual machine image elements relate to one another. Collected measurements 410 can also include metadata such as version or vendor information so that the collected measurements 410 can be compared to metadata 160 stored in integrity reference component 125, and can be used together with the digests 135 in determining the trust score for the virtual machine images 120.
In some embodiments of the present invention, metadata 160 can include a location of each virtual machine image 120 within the underlying file system of physical hardware platform/machine 115, or some other machine. If a virtual machine image 120 is expected to be located at a certain file path of the underlying file system, or at a certain location on a network drive, for example, metadata 160 can include such location information. Collected measurements 410 can also include metadata such as the location information so that the collected measurements 410 can be compared to metadata 160 stored in integrity reference component 125, and can be used together with the digests 135 in determining the trust score for the virtual machine images 120.
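The metadata comparison described in the preceding paragraphs might look like the following sketch; the specific keys ("vendor", "version", "path") are illustrative assumptions, not fields mandated by the system.

```python
def metadata_matches(collected: dict, reference: dict) -> bool:
    """Check collected metadata against the reference metadata: every
    field recorded in the reference (version, vendor, expected file
    location, and so on) must be present in the collected measurements
    and match exactly.  Extra collected fields are ignored."""
    return all(collected.get(key) == value
               for key, value in reference.items())

reference_metadata = {"vendor": "ExampleCo", "version": "2.1",
                      "path": "/vm/images/billing.img"}
collected = {"vendor": "ExampleCo", "version": "2.1",
             "path": "/vm/images/billing.img", "size": 4096}
print(metadata_matches(collected, reference_metadata))  # -> True
```

A mismatch on any reference field—for instance an image found at an unexpected file path—can then feed into the trust score alongside the digest comparisons.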
As another example, metadata 160 can include information regarding VMM 110 itself, such as whether VMM 110 comes from a pre-approved vendor list (not shown), and can be stored in integrity reference component 125 or included in collected measurements 410. The pre-approved vendor list can be created or maintained by a user or customer, or alternatively, the pre-approved vendor list can be created or maintained by a third party. In either case, the pre-approved vendor list can be stored in the integrity reference component 125 and used to help generate the trust score for the virtual machine images 120.
If the trust score is generated based on the important discrete virtual machine image elements 415 (e.g., the expected-to-be-static elements of virtual machine image 120), then the trust score likely remains the same during the lifecycle of virtual machine image 120 as it transitions from one state to another. However, if the important discrete virtual machine image elements 415 happen to change, then the trust score can be affected and might vary depending on the magnitude of the changes.
Prior to creation of the virtual machine image, the software stack used to create the virtual machine image can be verified as shown at state 505. The virtual machine image can then be created at state 510, and its integrity can be verified, as further discussed below. The virtual machine image can be created from a set of existing software such as an operating system or an application. Once the virtual machine image is created, it can be stored to await execution at a future time, or it can go directly into production where it is started at state 515. The virtual machine image can execute for some period of time such as minutes, days, or years before it transitions to one of three states: a stop state 520, a suspend state 525, or a migrate state 530.
In the stop state 520, the virtual machine image is stopped, no longer receiving cycles for execution, and is unloaded from memory. In the suspend state 525, the virtual machine image is temporarily suspended from execution and will no longer receive execution cycles until restarted, but may remain in memory. Alternatively, the suspended virtual machine may be stored to disk (indefinitely) until it is restarted. In the migrate state 530, the virtual machine image can be migrated from one physical hardware platform to another. While this can be performed on a suspended virtual machine image, the migration can also occur with an active or started virtual machine image, thus resulting in a “hot” migration. The virtual machine image can also be destroyed, thereby removing its existence from execution and storage.
Traditionally, businesses take great care in provisioning non-virtualized physical hardware platforms to ensure that they are properly established before moving them into production. In virtualized environments, and with the ease of which the virtual machine images can be created, started, and migrated, greater care should be taken to ensure they are properly provisioned. In the create state 510, virtual machine images can be created from sets of software such as an operating system, an application, or a configuration file. Since the virtual machine images can be instantiated (created) at any time, on any number of platforms, the integrity of the software stack can be verified prior to the creation of the virtual machine images, as shown at state 505. The virtual machine image can then be created at the create state 510 responsive to verifying the integrity of the software stack. A digest of the virtual machine can be stored after creation, to support verification of the virtual machine at a later time, such as when the virtual machine is started (by comparing the digest with a digest of the virtual machine taken before it is started).
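The create-then-verify step above can be sketched as follows. The exception type and function names are assumptions for illustration, as is the use of SHA-256 as the digest function.

```python
import hashlib

class IntegrityError(Exception):
    """Raised when an image fails verification against its recorded digest."""

def record_digest(image_bytes: bytes) -> str:
    """Record a digest at creation time (or at stop time) to support
    verification of the virtual machine image at a later time."""
    return hashlib.sha256(image_bytes).hexdigest()

def start_image(image_bytes: bytes, recorded_digest: str) -> str:
    """Start the image responsive to verifying its integrity: the image
    as loaded from storage must match the digest recorded when it was
    last in a known good state."""
    if hashlib.sha256(image_bytes).hexdigest() != recorded_digest:
        raise IntegrityError("image altered since its last known good state")
    return "started"

image = b"vm-image-at-creation"
digest = record_digest(image)
print(start_image(image, digest))  # -> started
```

If the image had been modified between creation and start—whether by an honest mistake or malicious tampering—the digest comparison would fail and the start would be refused, which is the behavior the lifecycle states below rely on.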
When the virtual machine image is started at state 515, the virtual machine image can be loaded from a previously stored virtual machine image, or it can be a re-start of a previously suspended in-memory virtual machine image. The integrity of the virtual machine image can be verified when starting the virtual machine image. Thus, the virtual machine image can be started responsive to verifying its integrity, thereby ensuring that the virtual machine image has not been altered from its expected configuration. In particular, when the virtual machine image is migrated from one physical hardware platform to another or restarted from a suspended state, the virtual machine image can be verified, thereby ensuring that the virtual machine image has not been mis-configured before, during, or after a transfer or migration. Therefore, any doubt about the state of the virtual machine image can be removed.
When the virtual machine image is stopped at state 520, the virtual machine image is unloaded from execution and memory. The integrity of the virtual machine image can be verified when stopping the virtual machine image to determine whether it still has a trustworthy configuration. Thus, the virtual machine image can be stopped responsive to verifying its integrity, thereby ensuring that the virtual machine image has not been altered from its expected configuration. If it is determined that the virtual machine image is not trustworthy, the virtual machine image can be flagged, which can provide an indication of its untrustworthiness when the virtual machine image is later restarted. A digest of the stopped virtual machine can also be recorded, for later use in verifying the virtual machine (e.g., when the virtual machine is restarted).
When the virtual machine image is suspended at state 525, as might happen in advance of a migration, for example, the integrity of the virtual machine image can be verified prior to leaving the physical hardware platform, thereby creating a verifiable audit record of execution and movement. The suspended virtual machine image can be analyzed to determine whether it still has a trustworthy configuration. The virtual machine image can be suspended responsive to verifying its integrity, or suspended before verifying its integrity. In the case where the virtual machine image is suspended in order to perform a migration, the virtual machine image can be taken out of use or the migration aborted if the virtual machine image is determined to be untrustworthy.
When the virtual machine image is migrated at state 530, the contents of the virtual machine image are moved from one physical hardware platform to another. Depending on the implementation of the migration function of the VMM, verification of the virtual machine image may or may not be desirable. For example, the migrate state 530 can comprise suspend, move, and start operations. In some embodiments of the present invention, the integrity verification component 105 is configured to analyze the virtual machine image when migrating the virtual machine image from one physical hardware platform to another. In some embodiments of the present invention, when the virtual machine image is migrated, the virtual machine image is stopped or suspended on one physical hardware platform, and started on a different physical hardware platform, each of which can include a verification of the integrity of the virtual machine image.
When the virtual machine image is destroyed at state 535, the contents and any existing state information can be erased from both execution and storage. As is the case in highly regulated industries, such as financial services, healthcare, human services, government, and telecommunications, among other possibilities, it can be important to capture the integrity state of the virtual machine image at the time of destruction and create an auditable record of its existence or non-existence as it relates to time. Since the virtual machine image is destroyed, and the virtual machine image lifecycles can vary widely, the creation of an integrity record at the time of destruction can be a valuable record of the state of existence of the virtual machine image during the end of its lifecycle. Thus, the virtual machine image can be destroyed responsive to verifying the integrity of the virtual machine image.
Integration of integrity verification services as described above provides support for higher level commands for controlling the integrity lifecycle of a virtual machine image. Such commands can be issued from the VMM 110 (of
At 615, the digests of the discrete virtual machine image elements (415 of
A determination can be made at 625 as to whether one or more of the virtual machine images (120 of
As previously discussed above, generating the trust score for the virtual machine images (120 of
At 710, the digests of the virtual machine images (120 of
A determination can be made at 720 as to whether one or more of the virtual machine images (120 of
The following discussion is intended to provide a brief, general description of a suitable machine in which certain aspects of the invention can be implemented. Typically, the machine includes a system bus to which is attached processors, memory, e.g., random access memory (RAM), read-only memory (ROM), or other state preserving medium, storage devices, a video interface, and input/output interface ports. The machine can be controlled, at least in part, by input from conventional input devices, such as keyboards, mice, etc., as well as by directives received from another machine, interaction with a virtual reality (VR) environment, biometric feedback, or other input signal. As used herein, the term “machine” is intended to broadly encompass a single machine, a virtual machine, or a system of communicatively coupled machines, virtual machines, or devices operating together. Exemplary machines include computing devices such as personal computers, workstations, servers, portable computers, handheld devices, telephones, tablets, etc., as well as transportation devices, such as private or public transportation, e.g., automobiles, trains, cabs, etc.
The machine can include embedded controllers, such as programmable or non-programmable logic devices or arrays, Application Specific Integrated Circuits, embedded computers, smart cards, and the like. The machine can utilize one or more connections to one or more remote machines, such as through a network interface, modem, or other communicative coupling. Machines can be interconnected by way of a physical and/or logical network, such as an intranet, the Internet, local area networks, wide area networks, etc. One skilled in the art will appreciate that network communication can utilize various wired and/or wireless short range or long range carriers and protocols, including radio frequency (RF), satellite, microwave, Institute of Electrical and Electronics Engineers (IEEE) 802.11, Bluetooth, optical, infrared, cable, laser, etc.
The invention can be described by reference to or in conjunction with associated data including functions, procedures, data structures, application programs, etc. which when accessed by a machine results in the machine performing tasks or defining abstract data types or low-level hardware contexts. Associated data can be stored in, for example, the volatile and/or non-volatile memory, e.g., RAM, ROM, etc., or in other storage devices and their associated storage media, including hard-drives, floppy-disks, optical storage, tapes, flash memory, memory sticks, digital video disks, biological storage, etc. Associated data can be delivered over transmission environments, including the physical and/or logical network, in the form of packets, serial data, parallel data, propagated signals, etc., and can be used in a compressed or encrypted format. Associated data can be used in a distributed environment, and stored locally and/or remotely for machine access.
Having described and illustrated the principles of the invention with reference to illustrated embodiments, it will be recognized that the illustrated embodiments can be modified in arrangement and detail without departing from such principles, and can be combined in any desired manner. And although the foregoing discussion has focused on particular embodiments, other configurations are contemplated. In particular, even though expressions such as “according to an embodiment of the invention” or the like are used herein, these phrases are meant to generally reference embodiment possibilities, and are not intended to limit the invention to particular embodiment configurations. As used herein, these terms can reference the same or different embodiments that are combinable into other embodiments.
Consequently, in view of the wide variety of permutations to the embodiments described herein, this detailed description and accompanying material are intended to be illustrative only, and should not be taken as limiting the scope of the invention. What is claimed as the invention, therefore, is all such modifications as may come within the scope and spirit of the following claims and equivalents thereto.
This application claims the benefit of commonly-assigned U.S. Provisional Patent Application Ser. No. 60/953,314, titled “ARCHITECTURE, METHOD AND APPARATUS FOR THE LIFECYCLE INTEGRITY VERIFICATION OF VIRTUAL MACHINES, THEIR SPECIFIED CONFIGURATIONS, AND THEIR DISCRETE ELEMENTS”, filed Aug. 1, 2007, which is hereby incorporated by reference. This application is a continuation-in-part of commonly-assigned U.S. patent application Ser. No. 11/608,742, titled “METHOD TO VERIFY THE INTEGRITY OF COMPONENTS ON A TRUSTED PLATFORM USING INTEGRITY DATABASE SERVICES”, filed Dec. 8, 2006, now U.S. Pat. No. 8,266,676, issued Sep. 11, 2012, which claims the benefit of commonly-assigned U.S. Provisional Patent Application Ser. No. 60/749,368, titled “METHOD TO VERIFY THE INTEGRITY OF COMPONENTS ON A TRUSTED PLATFORM USING INTEGRITY DATABASE SERVICES”, filed Dec. 9, 2005, and commonly-assigned U.S. Provisional Patent Application Ser. No. 60/759,742, titled “METHOD AND APPARATUS FOR IP NETWORK ACCESS CONTROL BASED ON PLATFORM COMPONENT SIGNATURES AND TRUST SCORES,” filed Jan. 17, 2006, which are hereby incorporated by reference. This application is a continuation-in-part of commonly-assigned U.S. patent application Ser. No. 11/832,781, titled “METHOD TO CONTROL ACCESS BETWEEN NETWORK ENDPOINTS BASED ON TRUST SCORES CALCULATED FROM INFORMATION SYSTEM COMPONENT ANALYSIS”, filed Aug. 2, 2007, now U.S. Pat. No. 7,487,358, issued Feb. 3, 2009, which is a continuation of commonly-assigned U.S. patent application Ser. No. 11/288,820, titled “METHOD TO CONTROL ACCESS BETWEEN NETWORK ENDPOINTS BASED ON TRUST SCORES CALCULATED FROM INFORMATION SYSTEM COMPONENT ANALYSIS”, filed Nov. 28, 2005, now U.S. Pat. No. 7,272,719, issued Sep. 18, 2007, which claims the benefit of commonly-assigned U.S. Provisional Patent Application Ser. No. 60/631,449, titled “METHOD TO HARVEST, SUBMIT, PERSIST, AND VALIDATE DATA MEASUREMENTS EMPLOYING WEB SERVICES”, filed Nov. 29, 2004, commonly-assigned U.S. 
Provisional Patent Application Ser. No. 60/631,450, titled “METHOD TO VERIFY SYSTEM STATE AND VALIDATE INFORMATION SYSTEM COMPONENTS BY MEANS OF WEB SERVICES USING A DATABASE OF CRYPTOGRAPHIC HASH VALUES”, filed Nov. 29, 2004, and commonly-assigned U.S. Provisional Patent Application Ser. No. 60/637,066, titled “METHOD TO CONTROL ACCESS BETWEEN NETWORK ENDPOINTS BASED ON TRUST SCORES CALCULATED FROM INFORMATION SYSTEM COMPONENTS”, filed Dec. 17, 2004, which are hereby incorporated by reference. This application is related to commonly-assigned U.S. patent application Ser. No. 11/422,146, titled “SYSTEM AND METHOD TO REGISTER A DOCUMENT WITH A VERSION MANAGEMENT SYSTEM”, filed Jun. 5, 2006, which claims the benefit of commonly-assigned U.S. Provisional Patent Application Ser. No. 60/688,035, titled “METHOD TO CERTIFY AND REGISTER INSTANCES OF AN ELECTRONIC DOCUMENT WITH A CENTRALIZED DATABASE ENABLING TRACKING AND ATTESTATION TO THE AUTHENTICITY AND ACCURACY OF COPIES OF THE REGISTERED DOCUMENT”, filed Jun. 7, 2005, and commonly-assigned U.S. patent application Ser. No. 11/624,001, titled “METHOD AND APPARATUS TO ESTABLISH ROUTES BASED ON THE TRUST SCORES OF ROUTERS WITHIN AN IP ROUTING DOMAIN”, filed Jan. 17, 2007, now U.S. Pat. No. 7,733,804, issued Jun. 8, 2010, which claims the benefit of commonly-assigned U.S. Provisional Patent Application Ser. No. 60/824,740, titled “METHOD AND APPARATUS TO ESTABLISH ROUTES BASED ON THE TRUST SCORES OF ROUTERS WITHIN AN IP ROUTING DOMAIN”, and commonly-assigned U.S. patent application Ser. No. 11/422,151, titled “SYSTEM AND METHOD TO MANAGE A DOCUMENT WITH A VERSION MANAGEMENT”, filed Jun. 5, 2006, and commonly-assigned U.S. patent application Ser. No. 11/776,498, titled “METHOD AND SYSTEM TO ISSUE TRUST SCORE CERTIFICATES FOR NETWORKED DEVICES USING A TRUST SCORING SERVICE”, filed Jul. 11, 2007, which claims the benefit of commonly-assigned U.S. Provisional Patent Application Ser. No. 
60/807,180, titled “METHOD AND APPARATUS TO ISSUE TRUST SCORE CERTIFICATES FOR NETWORKED DEVICES USING A TRUST SCORING SERVICE”, filed Jul. 12, 2006, all of which are hereby incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
5465299 | Matsumoto et al. | Nov 1995 | A |
5535276 | Ganesan | Jul 1996 | A |
5919257 | Trostle | Jul 1999 | A |
6157721 | Shear et al. | Dec 2000 | A |
6209091 | Sudia et al. | Mar 2001 | B1 |
6289460 | Hajmiragha | Sep 2001 | B1 |
6327652 | England et al. | Dec 2001 | B1 |
6330670 | England et al. | Dec 2001 | B1 |
6393420 | Peters | May 2002 | B1 |
6470448 | Kuroda et al. | Oct 2002 | B1 |
6609200 | Anderson et al. | Aug 2003 | B2 |
6823454 | Hind et al. | Nov 2004 | B1 |
6826690 | Hind et al. | Nov 2004 | B1 |
6922782 | Spyker et al. | Jul 2005 | B1 |
6950522 | Mitchell et al. | Sep 2005 | B1 |
6976087 | Westfall et al. | Dec 2005 | B1 |
6978366 | Ignatchenko et al. | Dec 2005 | B1 |
7003578 | Kanada et al. | Feb 2006 | B2 |
7024548 | O'Toole, Jr. | Apr 2006 | B1 |
7100046 | Balaz et al. | Aug 2006 | B2 |
7114076 | Callaghan | Sep 2006 | B2 |
7178030 | Scheidt et al. | Feb 2007 | B2 |
7188230 | Osaki | Mar 2007 | B2 |
7233942 | Nye | Jun 2007 | B2 |
7268906 | Ruhl et al. | Sep 2007 | B2 |
7272719 | Bleckmann et al. | Sep 2007 | B2 |
7310817 | Hinchliffe et al. | Dec 2007 | B2 |
7350204 | Lambert et al. | Mar 2008 | B2 |
7383433 | Yeager et al. | Jun 2008 | B2 |
7457951 | Proudler et al. | Nov 2008 | B1 |
7461249 | Pearson et al. | Dec 2008 | B1 |
7574600 | Smith | Aug 2009 | B2 |
7581103 | Horne et al. | Aug 2009 | B2 |
7689676 | Vinberg et al. | Mar 2010 | B2 |
7733804 | Hardjono et al. | Jun 2010 | B2 |
7774824 | Ross | Aug 2010 | B2 |
7793355 | Little et al. | Sep 2010 | B2 |
7844828 | Giraud et al. | Nov 2010 | B2 |
7877613 | Luo | Jan 2011 | B2 |
7904727 | Bleckmann et al. | Mar 2011 | B2 |
7987495 | Maler et al. | Jul 2011 | B2 |
8010973 | Shetty | Aug 2011 | B2 |
8108856 | Sahita et al. | Jan 2012 | B2 |
20020069129 | Akutsu et al. | Jun 2002 | A1 |
20020095589 | Keech | Jul 2002 | A1 |
20020144149 | Hanna et al. | Oct 2002 | A1 |
20020150241 | Scheidt et al. | Oct 2002 | A1 |
20030014755 | Williams | Jan 2003 | A1 |
20030028585 | Yeager et al. | Feb 2003 | A1 |
20030030680 | Cofta et al. | Feb 2003 | A1 |
20030097581 | Zimmer | May 2003 | A1 |
20030115453 | Grawrock | Jun 2003 | A1 |
20030177394 | Dozortsev | Sep 2003 | A1 |
20040107363 | Monteverde | Jun 2004 | A1 |
20040172544 | Luo et al. | Sep 2004 | A1 |
20040181665 | Houser | Sep 2004 | A1 |
20040205340 | Shimbo et al. | Oct 2004 | A1 |
20050021968 | Zimmer et al. | Jan 2005 | A1 |
20050033987 | Yan et al. | Feb 2005 | A1 |
20050048961 | Ribaudo et al. | Mar 2005 | A1 |
20050114687 | Zimmer et al. | May 2005 | A1 |
20050132122 | Rozas | Jun 2005 | A1 |
20050138417 | McNerney et al. | Jun 2005 | A1 |
20050163317 | Angelo et al. | Jul 2005 | A1 |
20050184576 | Gray et al. | Aug 2005 | A1 |
20050257073 | Bade et al. | Nov 2005 | A1 |
20050278775 | Ross | Dec 2005 | A1 |
20060005254 | Ross | Jan 2006 | A1 |
20060015722 | Rowan et al. | Jan 2006 | A1 |
20060048228 | Takemori et al. | Mar 2006 | A1 |
20060074600 | Sastry et al. | Apr 2006 | A1 |
20060117184 | Bleckmann | Jun 2006 | A1 |
20060173788 | Nath Pandya et al. | Aug 2006 | A1 |
20070016888 | Webb | Jan 2007 | A1 |
20070050622 | Rager et al. | Mar 2007 | A1 |
20070130566 | vanRietschote et al. | Jun 2007 | A1 |
20070143629 | Hardjono et al. | Jun 2007 | A1 |
20070174429 | Mazzaferri et al. | Jul 2007 | A1 |
20070180495 | Hardjono et al. | Aug 2007 | A1 |
20070204153 | Tome | Aug 2007 | A1 |
20070260738 | Palekar | Nov 2007 | A1 |
20080092235 | Comlekoglu | Apr 2008 | A1 |
20080126779 | Smith | May 2008 | A1 |
20080189702 | Morgan et al. | Aug 2008 | A1 |
20080256363 | Balacheff et al. | Oct 2008 | A1 |
20090089860 | Forrester et al. | Apr 2009 | A1 |
20110179477 | Starnes | Jul 2011 | A1 |
20110320816 | Yao et al. | Dec 2011 | A1 |
20120023568 | Cha et al. | Jan 2012 | A1 |
20150033038 | Goss | Jan 2015 | A1 |
Number | Date | Country |
---|---|---|
0048063 | Aug 2000 | WO |
2006058313 | Jun 2006 | WO |
WO2008024135 | Feb 2008 | WO |
WO2008030629 | Mar 2008 | WO |
WO2009018366 | Feb 2009 | WO |
Entry |
---|
Liu et al., “A Dynamic Trust Model for Mobile Ad Hoc Networks”, May 26, 2004, IEEE, Proceedings of the 10th IEEE International Workshop on Future Trends of Distributed Computing Systems (FTDCS'04), pp. 80-85. |
Number | Date | Country | |
---|---|---|---|
20090089860 A1 | Apr 2009 | US | |
20120291094 A9 | Nov 2012 | US |
Number | Date | Country | |
---|---|---|---|
60953314 | Aug 2007 | US | |
60749368 | Dec 2005 | US | |
60759742 | Jan 2006 | US | |
60631449 | Nov 2004 | US | |
60631450 | Nov 2004 | US | |
60637066 | Dec 2004 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11288820 | Nov 2005 | US |
Child | 11832781 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11608742 | Dec 2006 | US |
Child | 12179303 | US | |
Parent | 11832781 | Aug 2007 | US |
Child | 11608742 | US |