Computer systems are often targets of attacks by unauthorized individuals, organizations, and states. These attacks may include installing malware (i.e., malicious software) on a computer to compromise (or infect) the software (including firmware) of the computer. For example, malware may be inserted into the operating system kernel or may be loaded by an application program. The malware may cause the compromised software to bypass its security measures, reveal its secrets, alter its behavior, deny service to its clients, and so on.
When a computer system interacts with another computer system, each computer system may want to ensure that the other computer system has not been compromised with malware. For example, a client computer system (“client”) requesting services of a server computer system (“server”) may want to ensure that the server's operating system and application programs have not been compromised. To ensure that a server has not been compromised (e.g., that the programs of the server have not been compromised), a client can request evidence that the server has not been compromised, and the server can provide that evidence as an assertion of its state in a process known as “remote attestation.” The server typically collects the evidence during its initialization to indicate the state of the server at the time of initialization. If the evidence provided by the server is what the client expects, then the client can trust that the server has not been compromised. The server can similarly ensure that the client has not been compromised as a counter-attestation of the client.
Many servers provide a trusted computing component (“TCC”), that is, hardware designed specifically to collect and maintain the evidence needed to support remote attestation. Many servers include a TCC that is a trusted platform module (“TPM”) as specified by the Trusted Computing Group (“TCG”). The TCG is a consortium of hardware and software organizations that include Intel, AMD, Microsoft, and IBM. The TPM is a hardware component of a server that can securely record the state (e.g., code and data) of the server. The state may be associated with multiple layers of the software stack, including the firmware, BIOS, boot loader, hypervisor, and operating system. A server can provide to a client the measurement (or measurements) of the state (or states) as evidence of the server's state at initialization. The measurement may be a hash of the state. A client can compare a measurement received from a server to what the client knows is a “known-good” measurement of a known-good state. Based on the results of that comparison, the client can decide whether to request services of or otherwise interact with the server.
A TPM is a secure hardware processor that can perform cryptographic operations and store cryptographic keys or other values persistently. In current implementations, a TPM is a discrete chip that interfaces with the main central processing unit (“CPU”) of a computer, but in future implementations the TPM may be integrated directly into the CPU. A TPM contains a set of fixed-size platform configuration registers (“PCRs”) that store the resulting values of cryptographic one-way hashes of state information.
Once initialized, such as after a hardware reset, a TPM only allows a PCR to be “extended” by computing a cryptographic hash of its existing value (e.g., an initial value of zero) concatenated with an additional measurement M (i.e., PCR←hash(PCR, M)). The measurement M is typically a secure hash of state information, such as a hash of a region of memory, generated by a Secure Hashing Algorithm (“SHA”) such as SHA-256. After reset, initialization code (e.g., an authenticated code module) may set a PCR (e.g., PCR18) to a measurement that is the hash of certain computer code. For example, the initialization code may generate a value for PCR18 based on individual measurements (e.g., hashes) of several components, including the BIOS and other firmware, a measured launch environment (“MLE”), such as TBOOT, along with its command line, the operating system kernel, the kernel command line, and so on. The initialization code generates a hash of state information that is a measurement of the state of a component's code and extends the value in the PCR with that hash. The initialization code may then pass control to that component, which continues by measuring state information for another component, extending the PCR with that hash, and passing control to the other component. This process is repeated until the PCR has been extended to reflect all the measurements defined for that PCR. Since a PCR can be modified only via such extension operations, the TPM provides a means of storing secure measurements of state information, including code, data, configuration information, and so on. The TPM can also generate a digitally signed “TPM quote” that contains its PCR values (i.e., the measurements) together with a cryptographic signature. This allows a client to verify that the measurements were generated and protected by a valid TPM of the attesting server.
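The extend operation can be illustrated with the minimal Python sketch below; it simulates PCR←hash(PCR, M) with SHA-256 and uses illustrative names and values rather than any actual TPM interface.

```python
import hashlib

PCR_SIZE = 32  # bytes in a SHA-256 PCR bank

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Simulate PCR <- hash(PCR || M): hash the current PCR value
    concatenated with the new measurement."""
    return hashlib.sha256(pcr + measurement).digest()

# A PCR starts at a known initial value (here, all zeros) after reset.
pcr18 = bytes(PCR_SIZE)

# Extend with the measurement (hash) of some component, e.g., an MLE image.
mle_measurement = hashlib.sha256(b"example MLE image bytes").digest()
pcr18 = extend(pcr18, mle_measurement)
```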
A sequence of extension operations on a single PCR is useful for representing a series of measurements compactly as a single, fixed-size hash value, computed as a chain of hashes. For example, a PCR may be initially set to a hash of the firmware, followed by extending the PCR by hashes of the BIOS, boot loader, hypervisor, and operating system in sequence. The result of such a series of measurements is referred to as a combined measurement. To verify that the set of measurements combined into a single PCR represents a known-good state, a client needs access to known-good combined measurements, referred to as a “whitelist,” of known-good states. When a PCR value is constructed or generated by extending it with n individual constituent measurements M1, . . . , Mn and each measurement Mi has Gi known-good states, the size of the resulting whitelist grows multiplicatively, with G1× . . . ×Gn known-good states. Even modest values of n and Gi can result in a very large number of known-good combined measurements that require large amounts of storage and/or significant computational resources to generate. As an example, in a typical data center, there may be a few known-good versions of TBOOT, many known-good TBOOT command-line options and parameters, dozens of known-good versions of OS kernels corresponding to different operating systems, builds, optimization levels, and configurations, and hundreds of known-good kernel command lines corresponding to different valid options and parameters. A whitelist for PCR18 for such a data center may contain many thousands or millions of possible known-good PCR18 values. For example, if G_TBOOT=6, G_TBOOTcmd=10, G_OS=30, and G_OScmd=120, the number of known-good values for the PCR18 whitelist would be G_TBOOT×G_TBOOTcmd×G_OS×G_OScmd=216,000. In practice, the whitelist will continue to grow over time, as new versions of each component are released.
The maintenance of such a whitelist of known-good combined measurements in a large data center can be a challenge. When a new version of a component for a server is released, the whitelist needs to be updated and distributed to the clients. With such a release, a system administrator may need to manually load a known-good combined measurement for that version for every possible combination of versions of the other components. Continuing with the PCR18 example, if a new version of TBOOT is released, then the number of additional combinations would be G_TBOOTcmd×G_OS×G_OScmd=36,000. Because of the overhead needed to maintain such a whitelist, an organization may implement policies that limit the diversity of hardware and software supported by the data center.
A method and system for asserting and verifying assertions of a known-good state of a computer system is provided. In some embodiments, an attestation system allows a challenger computer system (“challenger”) and a prover computer system (“prover”) to conduct an attestation so that the challenger can verify an assertion of the prover. To conduct the attestation, the prover sends as an assertion of its state a combined measurement of resources along with a constituent measurement of each resource to the challenger. For example, the resources may include a BIOS, a TBOOT, a TBOOT command line, an operating system kernel, and a kernel command line. The challenger verifies the assertion by verifying that the asserted constituent measurements represent known-good measurements and verifying that the asserted combined measurement can be generated from the asserted constituent measurements. The prover may send multiple combined measurements representing states of different sets of resources of the prover along with a set of constituent measurements for each combined measurement. In such a case, the challenger separately verifies each combined measurement. To verify the asserted constituent measurements, the challenger determines whether each asserted constituent measurement for a resource is a known-good measurement for that resource. For example, the challenger may maintain a whitelist for each resource that contains the known-good measurements for that resource. If each asserted constituent measurement for a resource is determined to be a known-good measurement for that resource (e.g., in the whitelist for that resource), then the asserted constituent measurements are verified. To verify the asserted combined measurement, the challenger generates a combined measurement from the asserted constituent measurements received from the prover. If the generated combined measurement is the same as the asserted combined measurement, then the asserted combined measurement is verified. The challenger does not need to compare the asserted combined measurement to a previously known-good combined measurement. Rather, the challenger dynamically generates a known-good combined measurement from the constituent measurements provided by the prover. If each asserted constituent measurement is verified and the asserted combined measurement is verified to match the generated combined measurement, then the assertion is verified. Since the challenger does not need to compare the asserted combined measurement to a known-good combined measurement, the challenger need not maintain a whitelist of known-good combined measurements and can avoid the overhead and complexity of maintaining a whitelist of all possible combined measurements and/or generating possible combined measurements from whitelists of measurements of resources.
In some embodiments, a prover employing the attestation system generates a combined measurement of its state as an assertion of its state. The prover may include a trusted platform component that stores a combined measurement, generated from constituent measurements for resources of the prover, that can be trusted as an assertion of the state of each of the resources. For example, the trusted platform component may be a trusted platform module with one or more PCRs for storing combined measurements. Each resource may have variations, such as different versions of an MLE or different combinations of options of a kernel command line. A resource may be any software component (e.g., computer instructions), data structure, configuration data, hardware component, and so on that can have its state measured. The prover may generate a measurement for each resource using a hash algorithm, such as SHA-256, applied to that resource. The prover generates the combined measurement by using the measurements of the resources as constituent measurements of the combined measurement, for example, as a hash chain of those constituent measurements. For example, such a combined measurement may be generated by a TPM at initialization of the prover by extending a PCR. The prover then sends to a challenger the combined measurement, which may be signed with a key of the TPM, and the constituent measurements as an assertion. The prover may also send to the challenger an indication of the algorithm used to generate the constituent measurements or the algorithm used to generate the combined measurement so that the challenger can use the same algorithm(s). The prover may also send to the challenger an indication of a variation (e.g., version number) of a software component that generates a constituent measurement. In some embodiments, the constituent measurements may be the same constituent measurements that were generated for use in generating the combined measurement. For example, each component that generates a constituent measurement when extending a combined measurement of a PCR may record its constituent measurement for sending to a challenger. Alternatively, the prover may separately generate the constituent measurements after the combined measurement is generated.
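As a rough illustration of the prover side, the following Python sketch measures a list of resources, extends a simulated PCR with each constituent measurement, and bundles everything into an assertion. The resource names, sample data, and dictionary fields are hypothetical, and a real prover would obtain the combined measurement from a signed TPM quote rather than compute it in software.

```python
import hashlib
from typing import Dict, List, Tuple

def measure(data: bytes) -> bytes:
    """Constituent measurement: a SHA-256 hash of a resource's state."""
    return hashlib.sha256(data).digest()

def build_assertion(resources: List[Tuple[str, bytes]]) -> Dict:
    """Extend a simulated PCR with each resource's measurement and
    record the constituent measurements for later verification."""
    pcr = bytes(32)  # initial PCR value after reset (all zeros)
    constituents = []
    for name, data in resources:
        m = measure(data)
        constituents.append({"resource": name, "measurement": m.hex()})
        pcr = hashlib.sha256(pcr + m).digest()  # PCR <- hash(PCR || M)
    return {
        "combined_measurement": pcr.hex(),  # in practice, a signed TPM quote
        "constituent_measurements": constituents,
        "hash_algorithm": "sha256",
    }

assertion = build_assertion([
    ("tboot", b"tboot image"),
    ("tboot_cmdline", b"logging=serial,memory"),
    ("kernel", b"kernel image"),
    ("kernel_cmdline", b"root=/dev/sda1 ro quiet"),
])
```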
In some embodiments, a challenger employing the attestation system verifies an assertion of a prover that includes an asserted combined measurement and asserted constituent measurements. The challenger may initially send to a prover a request for an assertion of the constituent measurements of resources that the prover uses to generate a combined measurement of its resources. For example, if a challenger is a client that wants to request services of a server, the client sends to the server a request that the server provide an assertion of its constituent measurements. The challenger then receives from the prover an assertion that includes the constituent measurements. For each resource, the challenger may maintain a whitelist of known-good measurements for the resource. To verify the constituent measurements, the challenger determines whether the constituent measurement of each resource is in the whitelist for that resource. If the challenger determines that a constituent measurement for a resource is not in the whitelist for that resource, then the challenger cannot verify the constituent measurements. Without being able to verify the constituent measurements, the challenger cannot verify the assertion, may suppress further verification of the assertion, and may assume that the prover cannot be trusted. If the challenger determines that each constituent measurement for a resource is in the whitelist for that resource, then the challenger has verified the constituent measurements. The challenger also generates a combined measurement from the constituent measurements; such a combined measurement would be the same as that generated by the prover given the same constituent measurements. When the constituent measurements have been verified, the challenger sends to the prover a request for the combined measurement. When the challenger receives from the prover an assertion of the combined measurement, the challenger compares the asserted combined measurement to the combined measurement that it generated. If the combined measurements are the same, then the assertion of the prover has been verified, and the challenger may assume that the prover can be trusted. For example, when a client verifies the assertion of a server, the client may then request services of that server knowing that the server is trusted and has not been compromised.
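The challenger-side verification described above might be sketched as follows, assuming the assertion has the structure produced in the prover sketch earlier; for brevity, the two requests described above are collapsed into a single assertion structure, and the whitelist contents and function names are illustrative.

```python
import hashlib
from typing import Dict, Set

def known_good(*blobs: bytes) -> Set[str]:
    """Build a per-resource whitelist of known-good measurements."""
    return {hashlib.sha256(b).hexdigest() for b in blobs}

def verify_assertion(assertion: Dict, whitelists: Dict[str, Set[str]]) -> bool:
    """Check each asserted constituent measurement against the whitelist
    for its resource, then regenerate the combined measurement by
    simulating the PCR extension chain and compare it to the asserted
    (TPM-quoted) value."""
    pcr = bytes(32)  # same initial value the prover's PCR started from
    for entry in assertion["constituent_measurements"]:
        whitelist = whitelists.get(entry["resource"])
        if whitelist is not None and entry["measurement"] not in whitelist:
            return False  # constituent measurement is not known-good
        pcr = hashlib.sha256(pcr + bytes.fromhex(entry["measurement"])).digest()
    return pcr.hex() == assertion["combined_measurement"]

# Example per-resource whitelists (entries would come from release processes).
whitelists = {
    "tboot": known_good(b"tboot image"),
    "kernel": known_good(b"kernel image"),
}
```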
In some embodiments, the attestation system allows a challenger performing remote attestation to determine whether a PCR represents a known-good combined measurement without having to store the whitelist of possible combined measurements. The attestation system allows the size of the whitelist that is to be stored for each PCR to be reduced from the multiplicative product of known-good measurements G1× . . . ×Gn to a much smaller size that is the additive sum of known-good measurements G1+ . . . +Gn. Moreover, if it does not need to verify a constituent measurement for a certain resource of a prover, the attestation system need not store a whitelist for that resource. In such a case, a challenger can simply proceed assuming that the measurement for that resource is verified.
The attestation system also provides a challenger an opportunity to pre-calculate some or all of the known-good combined measurements. For example, a challenger may pre-calculate and store as a whitelist the pre-calculated combined measurements for typical configurations of servers. When a challenger receives an assertion from a server, the challenger can first check to see if the asserted combined measurement is in the whitelist. If in the whitelist, the challenger has verified the assertion and can avoid the overhead of verifying the constituent measurements and generating a combined measurement. If not in the whitelist, the challenger can proceed with verifying the constituent measurements and generating a combined measurement. A challenger can decide how many known-good combined measurements to pre-calculate based on a desired tradeoff between storage requirements and verification speed. For example, a server that needs to frequently verify assertions of clients may decide that the overhead of storing a large whitelist is well worth the resulting reduced overhead of the verification process. In addition, when a challenger verifies that an asserted combined measurement matches a generated combined measurement, it may cache that combined measurement as a known-good combined measurement (e.g., adding it to the whitelist) to avoid the overhead of verifying the constituent measurements and generating a combined measurement when the same asserted combined measurement is subsequently received. The attestation system can be used to avoid the manually intensive process of 1) installing various combinations of variations of resources on a computer system so that the computer system can generate a combined measurement for each possible combination and then 2) storing each generated combined measurement in a whitelist of combined measurements. Thus, even if a whitelist of combined measurements is to be used, the attestation system provides a more efficient way to generate that whitelist.
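One way to combine a cached or pre-calculated whitelist of combined measurements with dynamic verification is sketched below; the function names are hypothetical, and the full verification step is assumed to be the dynamic procedure described above (for example, the verify_assertion sketch).

```python
from typing import Callable, Dict, Set

def verify_with_cache(assertion: Dict,
                      known_good_combined: Set[str],
                      full_verify: Callable[[Dict], bool]) -> bool:
    """Fast path: if the asserted combined measurement was previously
    verified (or pre-calculated), accept it without re-checking the
    constituent measurements. Slow path: run full dynamic verification
    and cache the result on success."""
    combined = assertion["combined_measurement"]
    if combined in known_good_combined:
        return True
    if full_verify(assertion):
        known_good_combined.add(combined)  # cache for future assertions
        return True
    return False
```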
In some embodiments, the attestation system may further reduce the size of a whitelist of a resource by employing a representation that is more compact than storing each known-good measurement individually. For a given resource, a challenger may store known-good measurements for sub-resources of that resource. For example, if the resource is a software system, then the challenger may store a measurement for each module that makes up the software system. If a prover asserts a constituent measurement for a resource along with sub-constituent measurements for sub-resources of that resource, then a challenger can verify the asserted sub-constituent measurements of the sub-resources using the whitelists, generate a constituent measurement from the sub-constituent measurements, and verify whether the generated constituent measurement matches the asserted constituent measurement. This process of storing sub-resources of resources may be recursively applied to any level (e.g., sub-sub-resources of sub-resources of resources). In addition, the attestation system may store known-good measurements or other data associated with resources or sub-resources using compact patterns, such as regular expressions. In some embodiments, a prover may assert to a challenger raw data from which the challenger needs to generate a constituent measurement Mi.
The process of verifying asserted constituent measurements, generating a combined measurement from the asserted constituent measurements, and comparing the generated combined measurement to the asserted combined measurement to verify the assertion is referred to as “dynamic whitelisting.” Dynamic whitelisting is achieved by effectively simulating the TPM PCR extension operation in software, using the individual measurements Mi as inputs. Thus, in addition to a TPM quote containing signed PCR values, a prover also supplies the constituent measurements for all of the resources that were used to compute each PCR value. The prover may also provide metadata describing the constituent resources, such as version numbers. For some resources, such as a short command-line string, the prover may even include some or all of the raw data of a resource itself. The constituent measurements, metadata, and data are referred to collectively as assertions of the constituent measurements. A prover may also supply an identifier that specifies which measurement algorithm a TPM used for PCR extension to accommodate potential differences across diverse TPM implementations or generations. For example, TPM 1.2 uses the SHA-1 secure hash algorithm, while TPM 2.0 supports the SHA-256 algorithm, and some versions may even support downloading custom algorithms into the TPM processor.
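The assertion payload described in this paragraph, namely a TPM quote plus constituent measurements, optional raw data, and metadata such as version numbers and algorithm identifiers, might be represented as in the following sketch; the field names are illustrative assumptions, not a defined wire format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ConstituentAssertion:
    resource: str                      # e.g., "tboot", "kernel_cmdline"
    measurement: Optional[str] = None  # hex digest, if precomputed
    raw_data: Optional[bytes] = None   # e.g., a short command-line string
    version: Optional[str] = None      # metadata describing the resource

@dataclass
class PcrAssertion:
    pcr_index: int                     # e.g., 18
    quoted_value: str                  # signed PCR value from the TPM quote
    extend_algorithm: str              # e.g., "sha1" (TPM 1.2) or "sha256" (TPM 2.0)
    constituents: List[ConstituentAssertion] = field(default_factory=list)
```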
A prover may also supply metadata that includes identifiers, such as component version numbers, to specify the particular algorithms used by the various software components to generate constituent measurements. Such metadata may be needed because the details of the measurement algorithms used for TPM PCR extension may vary across both components and component versions. As one example, TBOOT software measures both the kernel and the kernel command line. However, some versions of TBOOT make additional measurements, for example, of the Launch Control Policy stored in the TPM's non-volatile memory. In general, component-specific measurement algorithms may use different hash functions, make different measurements, and perform the measurements in a different order. The prover may provide this metadata to a challenger to ensure that the challenger uses compatible algorithms when verifying the constituent measurements.
As an example of the computational savings of the attestation system, if a TPM of a prover employs hash chaining extensions of a fixed-length PCR value, the extensions will take the form SHA(current PCR value∥new value). If there are n=10 resources and each of the 10 resources has on average G=5 known-good measurements, then there would be G^n=5^10=9,765,625 possible combined measurements in a typical whitelist of combined measurements. With the attestation system, however, only G×n=5×10=50 possible measurements need be stored using dynamic whitelisting in one embodiment. For example, if G_TBOOT=6, G_TBOOTcmd=10, G_OS=30, and G_OScmd=120, then storage for the whitelist would be reduced from G_TBOOT×G_TBOOTcmd×G_OS×G_OScmd=216,000 to G_TBOOT+G_TBOOTcmd+G_OS+G_OScmd=166, which is a savings of more than 99.9%. In addition, the attestation system can identify the specific resource whose measurement could not be verified.
As described above, the attestation system can reduce the size of the whitelists of some resources by computing their measurements dynamically from sub-resource data or measurements. For example, the list of all G_OScmd known-good measurements for the OS kernel command line may itself be large. In practice, the command line may even include data specific to a single physical server, for example, the “root=[Disk UUID]” parameter commonly used for booting Ubuntu Linux kernels. In such cases, the attestation system may ensure that the parameter is well-formed, for example, by pattern-matching using a regular expression to verify the measurement for the sub-resource. This approach to verification allows a challenger to compute the known-good measurement for a resource dynamically, using sub-resource data, such as portions of the command-line string, as inputs.
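A pattern-based check of this kind might look like the following sketch, which validates a per-server kernel command line with a regular expression before confirming that its hash matches the asserted constituent measurement; the pattern and parameter names are illustrative, not an actual policy.

```python
import hashlib
import re

# Illustrative pattern: the command line must name a root disk by UUID and
# may carry a small set of expected options; real policies would differ.
CMDLINE_PATTERN = re.compile(
    r"^BOOT_IMAGE=\S+ root=UUID=[0-9a-fA-F-]{36}( ro| quiet| splash)*$"
)

def verify_cmdline(raw_cmdline: str, asserted_measurement: str) -> bool:
    """Verify a per-server kernel command line without a whitelist entry:
    check that the raw data is well-formed, then confirm that its hash
    matches the asserted constituent measurement used in the PCR chain."""
    if not CMDLINE_PATTERN.match(raw_cmdline):
        return False
    computed = hashlib.sha256(raw_cmdline.encode()).hexdigest()
    return computed == asserted_measurement
```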
Once all of the constituent measurements have been verified, a challenger may compute the expected final PCR value, using the specified measurement algorithm. As one example, for PCR18, the challenger first verifies the TBOOT measurement. Based on the particular TBOOT measurement, the challenger determines which resource that specific version of TBOOT should measure next (such as the OS kernel), as well as the specific hash function that should be used to perform the measurement (such as SHA-1). The challenger then generates this measurement in software. This PCR computation approach yields the chained hash resulting from a series of simulated TPM extension operations of the form PCR←hash(PCR, Mi). If the dynamically computed PCR value matches the PCR value obtained using the TPM quote operation (i.e., the asserted combined measurement), then the assertion is verified. If it does not match, the assertion is not verified.
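The version-dependent choice of the next resource and hash function might be captured with a dispatch table such as the one sketched below; the TBOOT version strings and hash choices are hypothetical placeholders for whatever a given deployment actually uses.

```python
import hashlib
from typing import Callable, Dict, Tuple

# Illustrative dispatch table: once a TBOOT measurement has been verified,
# the (hypothetical) version it corresponds to determines which resource
# is measured next and with which hash function.
MEASUREMENT_PLAN: Dict[str, Tuple[str, Callable[[bytes], bytes]]] = {
    "tboot-1.8.0": ("kernel", lambda d: hashlib.sha1(d).digest()),
    "tboot-1.9.6": ("kernel", lambda d: hashlib.sha256(d).digest()),
}

def next_measurement(tboot_version: str, resource_data: bytes) -> bytes:
    """Generate, in software, the measurement that this TBOOT version
    would have extended into the PCR next."""
    resource, hash_fn = MEASUREMENT_PLAN[tboot_version]
    return hash_fn(resource_data)
```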
In some embodiments, the attestation system may use a modified remote-attestation protocol that allows the conveyance of various assertions, such as the constituent measurements Mi for resources or sub-resources, measurement algorithm identifiers, resource or sub-resource data for generating measurements, or other auxiliary data. Alternatively, the attestation system may support sending such assertions out-of-band, for example, as a separate step prior to the conventional attestation protocol. The attestation system does not need to require that such assertions be sent securely. If an assertion is modified (e.g., intentionally or inadvertently corrupted) in transit, then the dynamically computed combined measurement for a PCR value will not match the TPM-quoted PCR value. A secure channel may nevertheless be needed to ensure that the confidentiality of any secret data is protected.
The attestation system simplifies whitelist generation because a system administrator need not collect TPM PCR values generated by a computer system having the known-good resources installed, and need not load a known-good combined measurement for each combination of known-good resources. Instead, the attestation system generates individual constituent measurements independently. The attestation system can generate these measurements using software that simulates the TPM PCR extension operations without the need for any hardware support that is specific to such operations.
The attestation system can efficiently allow the measurement of resources containing data that is unique to each individual prover. The use of such measurements in prior systems was generally not practical because unique data, such as a random value, could not have been captured in advance to generate a known-good PCR value for a whitelist. For example, a server may randomly generate a new public key to be used for secure, encrypted communication with clients. In this case, there is no known-good measurement for this key and thus no need for a client to maintain a whitelist of known-good measurements for this resource. Nevertheless, the client can still verify that the key provided by the server matches the key used to generate the PCR value. A client can accept any measurement of the key, as long as the PCR value generated by the client using the measurement of the key matches the asserted PCR value. In other words, if the client's dynamically computed PCR value, which includes a measurement of the asserted key, matches the TPM-quoted PCR value, then the client knows that the asserted key is correct.
The computing devices (i.e., computer systems) on which the attestation system may be implemented may include a central processing unit, input devices, output devices (e.g., display devices and speakers), storage devices (e.g., memory and disk drives), network interfaces, graphics processing units, accelerometers, cellular radio link interfaces, global positioning system devices, trusted platform modules, and so on. The input devices may include keyboards, pointing devices, touch screens, gesture recognition devices (e.g., for air gestures), head and eye tracking devices, microphones for voice recognition, and so on. The computing devices may include desktop computers, laptops, tablets, e-readers, personal digital assistants, smartphones, gaming devices, servers, and computer systems such as massively parallel systems. The computing devices may access computer-readable media that include computer-readable storage media and data transmission media. The computer-readable storage media are tangible storage means that do not include a transitory, propagating signal. Examples of computer-readable storage media include memory such as primary memory, cache memory, and secondary memory (e.g., DVD) and include other storage means. The computer-readable storage media may have recorded upon or may be encoded with computer-executable instructions or logic that implements the attestation system. The data transmission media is used for transmitting data via transitory, propagating signals or carrier waves (e.g., electromagnetism) via a wired or wireless connection.
The attestation system may be described in the general context of computer-executable instructions, such as program modules and components, executed by one or more computers, processors, or other devices. Generally, program modules or components include routines, programs, objects, data structures, and so on that perform particular tasks or implement particular data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments. Aspects of the attestation system may be implemented in hardware using, for example, an application-specific integrated circuit (“ASIC”).
From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. As one example, in some embodiments, the attestation system may make policy decisions based on “blacklists” of “known-bad” measurements of malware, instead of whitelists of known-good measurements of valid resources. For each resource, however, the number of known-bad measurements will often be much larger than the number of known-good measurements. As a result, in existing systems, the storage requirements for blacklists for combined measurements would be even larger than for whitelists. The attestation system may allow blacklists to be employed to identify bad resources (e.g., malware) using resource-specific blacklists to avoid the high storage requirements and without needing to perform the complete PCR computation. In some embodiments, the attestation system may use a combination of both whitelists and blacklists or may use rules or algorithms to classify computer systems based on asserted measurements. Accordingly, the invention is not limited except as by the appended claims.
This application claims the benefit of U.S. Provisional Patent Application No. 61/938,070 entitled “FLEXIBLE AND SCALABLE ATTESTATION USING DYNAMICALLY COMPUTED WHITELISTS,” filed Feb. 10, 2014, which is incorporated herein by reference in its entirety.