Secure collaboration between processors and processing accelerators in enclaves

Information

  • Patent Grant
  • 11921905
  • Patent Number
    11,921,905
  • Date Filed
    Wednesday, July 18, 2018
  • Date Issued
    Tuesday, March 5, 2024
Abstract
Aspects of the disclosure relate to providing a secure collaboration between one or more PCIe accelerators and an enclave. An example system may include a PCIe accelerator apparatus. The PCIe accelerator apparatus may include the one or more PCIe accelerators and a microcontroller configured to provide a cryptographic identity to the PCIe accelerator apparatus. The PCIe accelerator apparatus may be configured to use the cryptographic identity to establish communication between the PCIe accelerator apparatus and the enclave.
Description
BACKGROUND

Enclave technologies may enable software programmers to develop secure applications that are contained inside secure execution environments called enclaves. An application that runs inside an enclave typically has safeguards like memory and code integrity, and memory encryption. These safeguards protect the enclave from code that executes outside of the enclave, like the operating system, hypervisor, or other system software. In cloud-based computing, this can provide safeguards against intrusions by all sorts of actors, including personnel of the cloud operator. For instance, cloud-based machine learning workloads can include very sensitive information, such as personal data or location information. These workloads can also consume computational resources from central processing units (CPUs) as well as from various processing accelerators. Protecting the integrity of such workloads without compromising efficiency is an important goal for such systems. For instance, moving the processing parts of the workload from the accelerators back to the CPU and running them inside a CPU enclave may be useful from a security perspective, but may dramatically reduce the efficiency of the computations.


SUMMARY

Aspects of the disclosure provide a system for providing a secure collaboration between one or more PCIe accelerators and an enclave. The system includes a PCIe accelerator apparatus including the one or more PCIe accelerators and a microcontroller configured to provide a cryptographic identity to the PCIe accelerator apparatus. The PCIe accelerator apparatus is configured to use the cryptographic identity to establish communication between the PCIe accelerator apparatus and the enclave.


In one example, the system also includes a circuit board on which each of the one or more PCIe accelerators and the microcontroller are arranged. In another example, each of the one or more PCIe accelerators is a tensor processing unit. In another example, each of the one or more PCIe accelerators is a graphical processing unit. In another example, the PCIe accelerator apparatus further comprises an application processor configured to communicate with the enclave. In this example, the application processor is incorporated into the microcontroller. In addition or alternatively, the application processor further includes a dedicated function for communicating with an operating system of a computing device on which the enclave resides. In this example, the dedicated function is configured to enable a communication path between the application processor and the enclave via the computing device. In addition or alternatively, the system also includes the computing device. In addition or alternatively, the system also includes memory on which the enclave is stored. In another example, the PCIe accelerator apparatus further includes a cryptographic engine configured to encrypt information entering the PCIe accelerator apparatus. In another example, the PCIe accelerator apparatus further includes a cryptographic engine configured to decrypt information leaving the PCIe accelerator apparatus. In this example, the cryptographic engine is a line-rate cryptographic engine. In addition or alternatively, the cryptographic engine is arranged in a PCIe path of all of the one or more PCIe accelerators. In addition or alternatively, the PCIe accelerator apparatus further comprises an application processor configured to manage keys used by the cryptographic engine.


Another aspect of the disclosure provides a method for providing a secure collaboration between one or more PCIe accelerators and an enclave. The method includes retrieving, by the one or more PCIe accelerators, encrypted one or both of code or data out of memory of a host computing device; decrypting, by the one or more PCIe accelerators, the encrypted one or both of code or data using a cryptographic engine; processing, by the one or more PCIe accelerators, the unencrypted one or both of code or data to generate results; encrypting, by the one or more PCIe accelerators, the results; and sending, by the one or more PCIe accelerators, the encrypted results back to the memory of the host computing device for storage.


In one example, the method also includes negotiating, by the one or more PCIe accelerators, a cryptographic session with an enclave. In another example, the cryptographic session is negotiated through host OS-mediated communication. In another example, the encrypted one or both of code or data are retrieved using direct memory access. In another example, the encrypted results are sent using direct memory access.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a functional diagram of an example system in accordance with aspects of the disclosure.



FIG. 2 is a functional diagram of an example Tensor Processing Unit (TPU) in accordance with aspects of the disclosure.



FIG. 3 is an example representation of a software stack in accordance with aspects of the disclosure.



FIG. 4 is an example flow diagram in accordance with aspects of the disclosure.



FIG. 5 is a diagram of an example system in accordance with aspects of the disclosure.





DETAILED DESCRIPTION
Overview

Aspects of the disclosure relate to enabling secure collaboration between CPUs and processing accelerators using an enclave-based system. For instance, a computing device may include a plurality of processors and memory. The memory may include one or more enclaves that can be used to store data and instructions while at the same time limiting the use of such data and instructions by other applications. For instance, the data may include sensitive information such as passwords, credit card data, social security numbers, or any other information that a user would want to keep confidential. The plurality of processors may include CPUs as well as hardware based processors or accelerators such as Peripheral Component Interconnect Express (PCIe) accelerators including special-purpose integrated circuits that can be used to perform neural network computations.


In order to secure the processing of a PCIe accelerator, a PCIe accelerator apparatus may include hardware that may be arranged on the circuit board with one or more PCIe accelerators in order to give the PCIe accelerator apparatus a cryptographic hardware identity and the ability to perform authenticated encryption and/or decryption. For instance, an application that wants to use a PCIe accelerator securely may run its application logic as well as all or part of the PCIe accelerator software stack inside an enclave or a set of enclaves.


When a computing device's operating system allocates one or more PCIe accelerators for use by an application, a PCIe accelerator apparatus and the enclave may negotiate a cryptographic session through OS-mediated communication. The enclave may then use this cryptographic session to encrypt the PCIe accelerator code and data, and may hand those out to the OS, which, in turn, may hand them to the one or more PCIe accelerators. The one or more PCIe accelerators retrieve the code and/or data out of the computing device's memory, decrypt those using a cryptographic engine, process the data using the code, and generate results. The one or more PCIe accelerators also encrypt the results with the same cryptographic session before sending them back to memory of the computing device.


For instance, in order to provide a secure collaboration between processors and processing accelerators in enclaves, a PCIe accelerator apparatus may include a plurality of PCIe accelerators arranged on a circuit board, an application processor, a microcontroller and a cryptographic engine.


The microcontroller may endow the PCIe accelerator apparatus with a cryptographic hardware identity and may also ensure the integrity of the code running on the AP. The application processor may also utilize services provided by the microcontroller to assert the PCIe accelerator apparatus's hardware identity during session establishment between the enclave and the application processor. The cryptographic engine may be arranged in the PCIe direct memory access (DMA) path of the PCIe accelerators and may provide encryption and decryption operations for the PCIe accelerators. The application processor may be configured to manage the keys used by the cryptographic engine and may also be responsible for ensuring semantic integrity of any buffers being decrypted by the cryptographic engine.


The features described here provide for secure processing of information on a processing accelerator such as a TPU, a GPU, or other types of PCIe accelerators. This is achieved by providing additional hardware on a circuit board of one or more PCIe accelerators to provide that apparatus with a cryptographic hardware identity and the ability to perform authenticated encryption and decryption at PCIe line rate with minimal additional latency. In addition, the features described herein may enable a PCIe accelerator to directly consume data that is encrypted at rest, without it having to be decrypted and re-encrypted with the session key.


EXAMPLE SYSTEMS


FIG. 1 includes an example enclave system 100 in which the features described herein may be implemented. It should not be considered as limiting the scope of the disclosure or usefulness of the features described herein. In this example, enclave system 100 can include computing devices 110, 120, 130 and storage system 140 connected via a network 150. Each computing device 110, 120, 130 can contain one or more processors 112, memory 114 and other components typically present in general purpose computing devices.


Although only a few computing devices and a storage system are depicted in the system 100, the system may be expanded to any number of additional devices. In addition to a system including a plurality of computing devices and storage systems connected via a network, the features described herein may be equally applicable to other types of devices such as individual chips, including those incorporating System on Chip (SoC) or other chips with memory, that may include one or more enclaves.


Memory 114 of each of computing devices 110, 120, 130 can store information accessible by the one or more processors 112, including instructions that can be executed by the one or more processors. The memory can also include data that can be retrieved, manipulated or stored by the processor. The memory can be of any non-transitory type capable of storing information accessible by the processor, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories.


The instructions can be any set of instructions to be executed directly, such as machine code, or indirectly, such as scripts, by the one or more processors. In that regard, the terms “instructions,” “application,” “steps,” and “programs” can be used interchangeably herein. The instructions can be stored in object code format for direct processing by a processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. Functions, methods, and routines of the instructions are explained in more detail below.


Data may be retrieved, stored or modified by the one or more processors 112 in accordance with the instructions. For instance, although the subject matter described herein is not limited by any particular data structure, the data can be stored in computer registers, in a relational database as a table having many different fields and records, or XML documents. The data can also be formatted in any computing device-readable format such as, but not limited to, binary values, ASCII or Unicode. Moreover, the data can comprise any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories such as at other network locations, or information that is used by a function to calculate the relevant data.


The one or more processors 112 can be any conventional processors, such as a commercially available CPU. In addition or alternatively, the processors can be dedicated components such as an application specific integrated circuit (“ASIC”) or other hardware-based processor, such as PCIe accelerators including Tensor Processing Units (TPU), graphical processing units (GPU), etc. Although not necessary, one or more of computing devices 110 may include specialized hardware components to perform specific computing processes, such as decoding video, matching video frames with images, distorting videos, encoding distorted videos, etc. faster or more efficiently.


Although FIG. 1 functionally illustrates the processors, memory, and other elements of computing device 110 as being within the same block, the processors, computer, computing device, or memory can actually comprise multiple processors, computers, computing devices, or memories that may or may not be stored within the same physical housing. For example, the memory can be a hard drive or other storage media located in housings different from that of the computing devices 110. Accordingly, references to a processor, computer, computing device, or memory will be understood to include references to a collection of processors, computers, computing devices, or memories that may or may not operate in parallel. For example, the computing devices 110 may include server computing devices operating as a load-balanced server farm, distributed system, etc. Yet further, although some functions described below are indicated as taking place on a single computing device having a single processor, various aspects of the subject matter described herein can be implemented by a plurality of computing devices, for example, communicating information over network 150.


Each of the computing devices 110, 120, 130 can be at different nodes of a network 150 and capable of directly and indirectly communicating with other nodes of network 150. Although only a few computing devices are depicted in FIG. 1, it should be appreciated that a typical system can include a large number of connected computing devices, with each different computing device being at a different node of the network 150. The network 150 and intervening nodes described herein can be interconnected using various protocols and systems, such that the network can be part of the Internet, World Wide Web, specific intranets, wide area networks, or local networks. The network can utilize standard communications protocols, such as Ethernet, WiFi and HTTP, protocols that are proprietary to one or more companies, and various combinations of the foregoing. Although certain advantages are obtained when information is transmitted or received as noted above, other aspects of the subject matter described herein are not limited to any particular manner of transmission of information.


Like the memory discussed above, the storage system 140 may also store information that can be accessed by the computing devices 110, 120, 130. However, in this case, the storage system 140 may store information that can be accessed over the network 150. As with the memory, the storage system can include any non-transitory type capable of storing information accessible by the processor, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories.


In this example, the instructions of each of computing devices 110, 120, 130 may include one or more applications. These applications may define enclaves 160, 170, 180, 190 within memory, either locally at memory 114 or remotely at the storage system 140. Each enclave can be used to store data and instructions while at the same time limiting the use of such data and instructions by other applications. For instance, the data may include sensitive information such as passwords, credit card data, social security numbers, or any other information that a user would want to keep confidential. The instructions may be used to limit the access to such data. Although computing device 110 includes only two enclaves, computing device 120 includes only one enclave, computing device 130 includes no enclaves, and storage system 140 includes only one enclave, any number of enclaves may be defined within the memory of the computing devices 110, 120 or storage system 140.


As noted above, processors 112 may include CPUs as well as hardware based processors or accelerators such as TPUs, GPUs, and other PCIe accelerators. A TPU is a special-purpose integrated circuit that can be used to perform neural network computations. As shown in the example functional diagram of a TPU 200 in FIG. 2, a TPU may include host interface 202. The host interface 202 can include one or more PCIe connections that enable the TPU 200 to receive instructions that include parameters for a neural network computation. The host interface 202 can send the instructions to a sequencer 206, which converts the instructions into low level control signals that control the circuit to perform the neural network computations. The sequencer 206 can send the control signals to a unified buffer 208, a matrix computation unit 212, and a vector computation unit 214. In some implementations, the sequencer 206 also sends control signals to a DMA engine 204 and dynamic memory 210, which can be a memory unit. The host interface 202 can send sets of weight inputs and an initial set of activation inputs to the DMA engine 204. The DMA engine 204 can store the sets of activation inputs at the unified buffer 208.


The unified buffer 208 is a memory buffer. It can be used to store the set of activation inputs from the DMA engine 204 and outputs of the vector computation unit 214. The DMA engine 204 can also read the outputs of the vector computation unit 214 from the unified buffer 208. The dynamic memory 210 and the unified buffer 208 can send the sets of weight inputs and the sets of activation inputs, respectively, to the matrix computation unit 212. The matrix computation unit 212 can process the weight inputs and the activation inputs and provide a vector of outputs to the vector computation unit 214. In some implementations, the matrix computation unit sends the vector of outputs to the unified buffer 208, which sends the vector of outputs to the vector computation unit 214. The vector computation unit can process the vector of inputs and store a vector of processed outputs to the unified buffer 208. The vector of processed outputs can be used as activation inputs to the matrix computation unit 212, e.g., for use in a subsequent layer in the neural network.
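The dataflow among these units can be sketched in plain Python. This is an illustrative model only, not the hardware: the function and dictionary names are simplifications of the units in FIG. 2, and ReLU is assumed as a representative element-wise activation.

```python
# Toy sketch of the TPU dataflow described above: the matrix computation
# unit multiplies weights by activations, the vector computation unit
# applies an element-wise function, and processed outputs return to the
# unified buffer for use as activations in a subsequent layer.

def matrix_computation_unit(weights, activations):
    """Multiply a weight matrix by an activation vector."""
    return [sum(w * a for w, a in zip(row, activations)) for row in weights]

def vector_computation_unit(vector):
    """Apply a simple element-wise activation (ReLU, as an assumption)."""
    return [max(0, x) for x in vector]

def run_layer(unified_buffer, dynamic_memory):
    """One pass: weights from dynamic memory, activations from the buffer."""
    outputs = matrix_computation_unit(dynamic_memory["weights"],
                                      unified_buffer["activations"])
    processed = vector_computation_unit(outputs)
    # Processed outputs are stored back to the unified buffer, where they
    # can serve as activation inputs for the next layer.
    unified_buffer["activations"] = processed
    return processed

buf = {"activations": [1, -2, 3]}
mem = {"weights": [[1, 0, 1], [0, 1, 0]]}
print(run_layer(buf, mem))  # → [4, 0]
```

Feeding the buffer's updated activations into another call to `run_layer` mirrors the subsequent-layer reuse described above.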



FIG. 3 provides an example representation of a TPU software stack. A TPU workload may begin as an application written to an application programming interface (API) for interacting with the TPU such as the TensorFlow software library. API libraries and a compiler running on the host CPU, such as one of the processors 112, may generate the executable instructions for one or more TPUs, for instance, based on a sequence of API calls to the TensorFlow library. The application, utilizing the services exposed by a kernel driver program of the computing device 110, may communicate memory-locations of the generated code buffers as well as data buffers to the TPU. Using the DMA engine 204, the TPU DMAs the generated executable instructions, along with the associated data from the host memory, such as memory 114, into local memory of the TPU, such as the dynamic memory 210 and the unified buffer 208 discussed above. The TPU then executes the DMA-ed instructions, processing the fetched data from the host memory to generate the output. Finally, the TPU DMAs the results back into the host memory, where the application picks the results up.


In order to secure the processing of a TPU, hardware may be arranged on the TPU circuit board in order to give the TPU or that entire apparatus a cryptographic hardware identity and the ability to perform authenticated encryption and/or decryption at PCIe line rate with minimal additional latency. For instance, an application that wants to use TPUs securely may run its application logic as well as all or part of the TPU software stack inside an enclave or a set of enclaves. Where multiple enclaves are involved, a “primary” enclave may be responsible for dealing with the one or more TPUs, and the primary and other enclave or enclaves may communicate with one another over secure connections, for instance, using a remote procedure call handshake protocol which enables secure communications between different entities.


In some instances, the host operating system (OS) may allocate one or more PCIe accelerators for use by the application. FIG. 4 is an example flow diagram of how the PCIe accelerators may operate in order to process data and/or code according to the requirements, needs or requests of the application. For instance, at block 410, one or more PCIe accelerators of a PCIe accelerator apparatus negotiate a cryptographic session with an enclave (or primary enclave) through host OS-mediated communication. The enclave then uses this cryptographic session to encrypt the PCIe accelerator code and/or data and sends this encrypted code and/or data out to the host OS, which, in turn, may store the encrypted code and/or data in memory to be accessed by the one or more PCIe accelerators. As such, at block 420, the one or more PCIe accelerators retrieve the encrypted code and/or data out of memory of the host computing device. At block 430, the one or more PCIe accelerators decrypt the encrypted code and/or data using a cryptographic engine, such as a cryptographic engine 560 discussed further below. The one or more PCIe accelerators process the (now unencrypted) code and/or data to generate results at block 440. The one or more PCIe accelerators also encrypt the results with the same cryptographic session before sending them back to memory of the host computing device for storage at block 450. The encrypted results can then be accessed as needed by the application.
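The sequence of blocks 410 through 450 can be sketched as follows. This is an illustrative model only: the toy XOR keystream stands in for the real line-rate cryptographic engine, the fixed session key stands in for the key negotiated at block 410, and the dictionary stands in for host memory accessed by DMA.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n keystream bytes from the session key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, buf: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a key-derived stream (encrypt = decrypt)."""
    return bytes(a ^ b for a, b in zip(buf, keystream(key, len(buf))))

session_key = b"negotiated-session-key"   # placeholder for block 410

# Enclave side: encrypt the data and hand it to the host OS (host memory).
host_memory = {"data": xor_crypt(session_key, b"sensitive input")}

# Accelerator side, blocks 420-450:
retrieved = host_memory["data"]                           # 420: retrieve from memory
plaintext = xor_crypt(session_key, retrieved)             # 430: decrypt
results = plaintext.upper()                               # 440: process
host_memory["results"] = xor_crypt(session_key, results)  # 450: encrypt and store

# The enclave can later decrypt the stored results with the same session.
print(xor_crypt(session_key, host_memory["results"]))  # → b'SENSITIVE INPUT'
```

Note that the host OS only ever handles ciphertext in this flow; plaintext exists solely inside the enclave and the accelerator.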


In the example of a TPU-board, the TPU-board may receive encrypted code and/or data, decrypt the received encrypted code and/or data, process the decrypted code and/or data to generate results, encrypt the results, and send the encrypted results back to the host operating system. In order to do so, the TPU-board may negotiate a cryptographic session with the enclave (or primary enclave) through host OS-mediated communication. The enclave may then use this cryptographic session to encrypt the TPU code and/or data, and send the encrypted code and/or data to the host OS, which, in turn, hands them to the one or more TPUs of the TPU-board. As such, the one or more TPUs DMA the code and/or data out of the host memory. The accessed code and/or data is then decrypted using a cryptographic engine of the one or more TPUs. Thereafter the unencrypted code and/or data is processed by the one or more TPUs in order to generate results. The one or more TPUs also encrypt the results with the same cryptographic session before DMA-ing the encrypted results back to the host operating system for storage at the host memory. The encrypted results can then be accessed as needed by the application.


This system may enable TPUs to directly consume data that is encrypted at rest, without it having to be decrypted and re-encrypted with the session key. This is needed, for example, to enable users to encrypt their data-sets on premises before transmitting them to third parties. Requiring these data-sets to be decrypted and re-encrypted with the session key may incur unreasonable overheads.



FIG. 5 provides a diagram of an example system 500 for providing a secure collaboration between processors and processing accelerators in enclaves. System 500 includes a PCIe accelerator apparatus 502, an enclave 520, and a host 530 including a host OS. In this example, the enclave 520 may represent one or more enclaves such as enclaves 160, 170, 180, or 190, and the host may represent any of computing devices 110, 120, and 130.


The PCIe accelerator apparatus 502 includes a plurality of TPUs 200 arranged on a TPU circuit board or TPU board 510, an Application Processor (AP) 540, a microcontroller 550 and the cryptographic engine 560, each connected to the TPU board 510. In some examples, the entire logic on the PCIe board could be integrated into a single ASIC, which is then soldered onto the host computing device's main board. Alternatively, the entire logic could be integrated as an IP block into the SoC containing the CPU.


The AP 540 may be a general-purpose application processor. During operation, the AP 540 may expose a dedicated BDF (Bus/Device/Function) to the host OS. The host OS can use memory mapped input output (MMIO) registers of this BDF to enable a communication path between the enclave 520 and the AP. This communication may be used for enabling session-establishment between the enclave 520 and the AP 540 as well as for session life-time management by the host OS. This interface may also be utilized by the OS to update the firmware of the microcontroller 550.


The microcontroller 550 may be a low-power microcontroller, such as a Titan chip by Google LLC, or a combination of a commercially available low-power microcontroller and a Trusted Platform Module (TPM), which provides hardware security by providing a cryptographic identity to hardware to which the microcontroller is attached. In this regard, the microcontroller 550 may provide the TPU board 510 and/or TPUs 200 with a cryptographic identity.


The microcontroller may include various components such as a secure application processor, a cryptographic co-processor, a hardware random number generator, a key hierarchy, embedded static RAM (SRAM), embedded flash, and a read-only memory block. The microcontroller 550 may include unique keying material securely stored in a registry database. The contents of this database may be cryptographically protected using keys maintained in an offline quorum-based Certification Authority (CA). The microcontroller 550 can generate Certificate Signing Requests (CSRs) directed at the microcontroller 550's CA, which can verify the authenticity of the CSRs using the information in the registry database before issuing identity certificates.


The microcontroller-based identity system not only verifies the provenance of the chips creating the CSRs, but also verifies the firmware running on the chips, as the code identity of the firmware is hashed into the key hierarchy of the microcontroller. This property enables remediation, allowing bugs in the Titan firmware to be fixed and certificates to be issued that can only be wielded by patched Titan chips. The microcontroller-based identity system may also enable back-end systems to securely provision secrets and keys to the host 530, host OS, or jobs running on the host. The microcontroller may also be able to chain and sign critical audit logs and make those logs tamper-evident. To offer tamper-evident logging capabilities, the microcontroller 550 may cryptographically associate the log messages with successive values of a secure monotonic counter maintained by the microcontroller, and sign these associations with the controller's private key. This binding of log messages with secure monotonic counter values ensures that audit logs cannot be altered or deleted without detection, even by insiders with root access to the host 530.
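The tamper-evident logging scheme described above can be sketched as follows. This is a minimal illustration, assuming HMAC as a stand-in for the microcontroller's private-key signature and a Python integer as a stand-in for the secure monotonic counter.

```python
import hashlib
import hmac

class TamperEvidentLog:
    """Sketch: bind each log message to a monotonic counter value and sign it."""

    def __init__(self, key: bytes):
        self._key = key        # stand-in for the controller's private key
        self._counter = 0      # stand-in for the secure monotonic counter
        self.entries = []

    def append(self, message: bytes):
        self._counter += 1
        binding = self._counter.to_bytes(8, "big") + message
        tag = hmac.new(self._key, binding, hashlib.sha256).digest()
        self.entries.append((self._counter, message, tag))

    def verify(self) -> bool:
        """Detect altered, deleted, or reordered entries."""
        for expected, (counter, message, tag) in enumerate(self.entries, 1):
            if counter != expected:
                return False   # a gap or reordering in the counter sequence
            binding = counter.to_bytes(8, "big") + message
            good = hmac.new(self._key, binding, hashlib.sha256).digest()
            if not hmac.compare_digest(tag, good):
                return False   # an altered message or forged tag
        return True

log = TamperEvidentLog(b"microcontroller-key")
log.append(b"boot")
log.append(b"firmware verified")
assert log.verify()
del log.entries[0]             # an insider deletes the first entry
assert not log.verify()        # the deletion is detected
```

Because each tag covers the counter value as well as the message, deleting or reordering entries breaks the expected counter sequence even when individual tags remain valid.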


As noted above, the microcontroller 550 may endow the PCIe accelerator apparatus with a cryptographic hardware identity and may also ensure the integrity of the code running on the AP 540. The AP may utilize services provided by the microcontroller to assert the PCIe accelerator apparatus's hardware identity. These services may include, for instance, the microcontroller certifying keys, such as Diffie Hellman keys, generated by the AP as belonging to the AP, as well as generating assertions of the microcontroller's identity for authentication and authorization processes.
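The certification step described above can be sketched with toy parameters. This is illustrative only: the Diffie-Hellman group is far too small for real use, and HMAC under a shared key stands in for the microcontroller's identity certificate, which would in practice be a public-key signature chained to the CA.

```python
import hashlib
import hmac

P, G = 0xFFFFFFFB, 5   # toy Diffie-Hellman group; real systems use vetted groups

def dh_keypair(secret: int):
    """Return (secret, public) for the toy group."""
    return secret, pow(G, secret, P)

def microcontroller_certify(mc_key: bytes, ap_public: int) -> bytes:
    """Microcontroller attests that this public key belongs to the AP."""
    return hmac.new(mc_key, str(ap_public).encode(), hashlib.sha256).digest()

def enclave_verify(mc_key: bytes, ap_public: int, cert: bytes) -> bool:
    """Enclave checks the attestation before trusting the AP's key."""
    expected = hmac.new(mc_key, str(ap_public).encode(), hashlib.sha256).digest()
    return hmac.compare_digest(cert, expected)

mc_key = b"titan-identity-key"   # placeholder for the microcontroller key hierarchy

ap_secret, ap_public = dh_keypair(123456)
enclave_secret, enclave_public = dh_keypair(654321)

cert = microcontroller_certify(mc_key, ap_public)
assert enclave_verify(mc_key, ap_public, cert)

# With the certified key, both sides derive the same shared session secret.
assert pow(enclave_public, ap_secret, P) == pow(ap_public, enclave_secret, P)
```

The point of the certification is that the enclave never accepts a raw public key from the host OS; it only accepts keys the microcontroller has bound to the apparatus's hardware identity.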


The cryptographic engine 560 may be arranged in the DMA or PCIe path of the TPUs 200 and may provide encryption and decryption operations for the TPUs 200. The cryptographic engine 560 may enable the TPUs 200 to decrypt information such as code and/or data buffers read from the host memory and encrypt information such as result buffers written back to the host. The cryptographic engine may include a line-rate cryptographic engine. As an example, line-rate may refer to the maximum supported data-transfer rate of the interface; in this case, PCIe. For instance, PCIe gen 3 supports 12.5 giga-bits-per-second of data transfer in each direction, per lane. In a 16-lane interface, this would amount to a 25 giga-bytes-per-second data transfer rate in each direction. So the cryptographic engine must be capable of supporting 25 giga-bytes-per-second of decryption and 25 giga-bytes-per-second of encryption.
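The line-rate arithmetic above can be checked directly, using the per-lane figure as stated in the text:

```python
# Throughput figures as given above for a 16-lane interface.
GBITS_PER_LANE = 12.5            # giga-bits per second, per lane, per direction
LANES = 16
BITS_PER_BYTE = 8

total_gbits = GBITS_PER_LANE * LANES        # 200 Gb/s per direction
total_gbytes = total_gbits / BITS_PER_BYTE  # 25 GB/s per direction

print(total_gbytes)  # → 25.0
```

The engine must therefore sustain this rate simultaneously in both directions: 25 GB/s of decryption on inbound DMA and 25 GB/s of encryption on outbound results.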


The AP 540 may be configured to manage the keys used by the cryptographic engine 560. The AP 540 may also be responsible for ensuring semantic integrity of any buffers being decrypted by the cryptographic engine 560. This allows the TPUs to utilize decrypted buffers in the manner the source enclave, for instance enclave 520, intended them to be utilized. As such, for example, the host OS would not be able to confuse the TPU into using a data buffer as a code buffer. The enclave 520 will create a small amount of semantic metadata, cryptographically bind it to each of the DMA buffers including the code and/or data being transferred from the host to the TPUs, and communicate the information to the AP 540. The AP will then utilize this information to direct the DMA crypto engine appropriately. In some instances, the AP 540 and the microcontroller 550 may be the same physical chip.
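The binding of semantic metadata to a buffer can be sketched as follows. This is a simplified illustration: an HMAC tag over the metadata and payload stands in for the session's authenticated encryption, and the "code"/"data" labels are assumed forms of the metadata the enclave would create.

```python
import hashlib
import hmac

def bind_buffer(session_key: bytes, buffer_type: str, payload: bytes):
    """Enclave side: cryptographically bind the intended use to the buffer."""
    tag = hmac.new(session_key, buffer_type.encode() + payload,
                   hashlib.sha256).digest()
    return payload, tag

def check_buffer(session_key: bytes, buffer_type: str, payload: bytes,
                 tag: bytes) -> bool:
    """AP / crypto-engine side: verify the buffer is used as intended."""
    expected = hmac.new(session_key, buffer_type.encode() + payload,
                        hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

key = b"session-key"
payload, tag = bind_buffer(key, "data", b"model weights ...")

assert check_buffer(key, "data", payload, tag)       # intended use: accepted
assert not check_buffer(key, "code", payload, tag)   # relabeled as code: rejected
```

Because the tag covers the buffer type as well as the contents, a host OS that relabels a data buffer as a code buffer produces a verification failure rather than misdirected execution.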


In operation, the aforementioned security details may be “hidden” from the programmer of the application. From the programmer's point of view, it should be possible to enable a few compile-time flags to get the protections without having to rewrite the programmer's code.


Encrypting the code and/or data buffers on the CPU, only to be decrypted on the TPU, may add performance bottlenecks. Enabling TPUs to directly DMA into an enclave's memory might ease such bottlenecks. This may require additional features. For instance, enclave implementations may need to add enhancements to selectively allow accelerator-initiated DMA from trusted accelerators. The enclave code must have control over which accelerators it wants to trust. In addition, accelerators may need to be endowed with industry-standard cryptographic identity. This identity must be verified by an input-output memory management unit (IOMMU) that allows an enclave to specify which of the verified identities are trusted. In addition, accelerators, at least to some extent, may need to understand the enclave access-control model as well as CPU virtual addresses while also utilizing ATS (Address Translation Service) and translation caching to prevent OS tampering.
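The access-control idea above, in which the IOMMU verifies identities and the enclave chooses which verified identities to trust, can be sketched as follows. All class and identity names here are hypothetical.

```python
class EnclaveDmaPolicy:
    """Sketch: gate accelerator-initiated DMA on verification plus enclave trust."""

    def __init__(self, verified_identities):
        self._verified = set(verified_identities)  # identities the IOMMU verified
        self._trusted = set()                      # the subset the enclave trusts

    def trust(self, identity: str):
        """Enclave side: only a verified identity may be marked trusted."""
        if identity not in self._verified:
            raise ValueError("cannot trust an unverified identity")
        self._trusted.add(identity)

    def allow_dma(self, identity: str) -> bool:
        """IOMMU check: only verified AND enclave-trusted devices may DMA."""
        return identity in self._verified and identity in self._trusted

policy = EnclaveDmaPolicy(verified_identities={"tpu-board-7", "gpu-3"})
policy.trust("tpu-board-7")

assert policy.allow_dma("tpu-board-7")   # verified and trusted
assert not policy.allow_dma("gpu-3")     # verified but not trusted by the enclave
assert not policy.allow_dma("rogue-dev") # never verified
```

The two-level check mirrors the text: identity verification is the IOMMU's job, while the trust decision remains under the enclave code's control.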


At present, TPUs are single-context devices. That is, a collection of TPUs can only work on one application at a time. In the event that TPUs are expanded into multi-context devices, the features described herein can be applied to multi-context TPUs as well. In addition, although the examples herein relate specifically to TPUs, the features described herein may be applied to other types of PCIe accelerators, such as GPUs, as well.
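The end-to-end collaboration described in the foregoing sections (negotiate a session, encrypt buffers on the CPU, decrypt in the accelerator's PCIe path, process, and re-encrypt the results) can be sketched as follows. The keystream function here is a deliberately simple stand-in for the line-rate cryptographic engine, and the pre-shared session key stands in for the identity-based negotiation performed via the microcontroller; a real implementation would use a hardware AEAD cipher and an authenticated key exchange.

```python
import hashlib
import hmac
import os


def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stand-in for the cryptographic engine: XOR with an
    HMAC-SHA256 counter-mode keystream (symmetric, so it both
    encrypts and decrypts)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))


# 1. Session key, standing in for the negotiation that the disclosure
#    performs using the microcontroller's cryptographic hardware identity.
session_key = os.urandom(32)

# 2. The enclave encrypts code/data before they cross the PCIe bus.
nonce = os.urandom(12)
plaintext = b"model weights + kernel code"
wire = keystream_xor(session_key, nonce, plaintext)

# 3. The crypto engine in the accelerator's PCIe path decrypts, so the
#    TPU processes plaintext that is never exposed to the host OS.
recovered = keystream_xor(session_key, nonce, wire)
assert recovered == plaintext
results = recovered[::-1]  # placeholder for the accelerator's computation

# 4. The results are re-encrypted before being sent back to the enclave.
nonce2 = os.urandom(12)
wire_back = keystream_xor(session_key, nonce2, results)
assert keystream_xor(session_key, nonce2, wire_back) == results
```

The design point this illustrates is that only ciphertext traverses host-visible memory in both directions; the host OS mediates the transfers without ever holding plaintext.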


Most of the foregoing alternative examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. As an example, the preceding operations do not have to be performed in the precise order described above. Rather, various steps can be handled in a different order, such as reversed, or simultaneously. Steps can also be omitted unless otherwise stated. In addition, the provision of the examples described herein, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements.

Claims
  • 1. A system for providing a secure collaboration between one or more PCIe accelerators and an enclave defined within memory of a host computing device, the system comprising: a PCIe accelerator apparatus including: the one or more PCIe accelerators; a microcontroller configured to provide a cryptographic hardware identity to the PCIe accelerator apparatus, wherein the PCIe accelerator apparatus is configured to use the cryptographic hardware identity to negotiate a cryptographic session between the PCIe accelerator apparatus and the enclave; and a cryptographic engine separate from the one or more PCIe accelerators and arranged in a PCIe path of the one or more PCIe accelerators to provide encryption and decryption operations between the one or more PCIe accelerators and the enclave during the cryptographic session.
  • 2. The system of claim 1, further comprising a circuit board on which each of the one or more PCIe accelerators and the microcontroller are arranged.
  • 3. The system of claim 1, wherein each of the one or more PCIe accelerators is one of a tensor processing unit or a graphical processing unit.
  • 4. The system of claim 1, wherein the PCIe accelerator apparatus further comprises an application processor configured to communicate with the enclave.
  • 5. The system of claim 4, wherein the application processor is incorporated into the microcontroller.
  • 6. The system of claim 4, wherein the application processor further includes a dedicated function for communicating with an operating system of a computing device on which the enclave resides.
  • 7. The system of claim 6, wherein the dedicated function is configured to enable a communication path between the application processor and the enclave via the computing device.
  • 8. The system of claim 1, wherein the cryptographic engine is configured to decrypt information from the enclave that is entering the PCIe accelerator apparatus during the cryptographic session.
  • 9. The system of claim 1, wherein the cryptographic engine is configured to encrypt information from the PCIe accelerator apparatus that is entering the enclave during the cryptographic session.
  • 10. The system of claim 1, wherein the cryptographic engine is a line-rate cryptographic engine.
  • 11. The system of claim 1, wherein the PCIe accelerator apparatus further comprises an application processor configured to manage keys used by the cryptographic engine.
  • 12. The system of claim 1, wherein the cryptographic engine is a separate component from the one or more PCIe accelerators in the PCIe accelerator apparatus.
  • 13. The system of claim 1, wherein the cryptographic engine is arranged between the one or more PCIe accelerators and the enclave.
  • 14. A method for providing a secure collaboration between one or more PCIe accelerators of a PCIe accelerator apparatus and an enclave defined within memory of a host computing device, the method comprising: negotiating, by the PCIe accelerator apparatus, a cryptographic session with the enclave using a cryptographic hardware identity provided by a microcontroller of the PCIe accelerator apparatus; and during the cryptographic session: retrieving, by the PCIe accelerator apparatus, encrypted one or both of code or data from the enclave; decrypting, by a cryptographic engine of the PCIe accelerator apparatus, the encrypted one or both of code or data, the cryptographic engine being separate from the one or more PCIe accelerators and arranged in a PCIe path of the one or more PCIe accelerators; retrieving, by the one or more PCIe accelerators, the unencrypted one or both of code or data; processing, by the one or more PCIe accelerators, the unencrypted one or both of code or data to generate results; encrypting, by the cryptographic engine, the results; and sending, by the PCIe accelerator apparatus, the encrypted results back to the enclave.
  • 15. The method of claim 14, wherein the cryptographic session is negotiated through host OS-mediated communication.
  • 16. The method of claim 14, wherein the encrypted one or both of code or data are retrieved using direct memory access.
  • 17. The method of claim 14, wherein the encrypted results are sent using direct memory access.
  • 18. A non-transitory computer-readable medium storing instructions executable by one or more processors for providing a secure collaboration between one or more PCIe accelerators of a PCIe accelerator apparatus and an enclave defined within memory of a host computing device, the instructions comprising: negotiating a cryptographic session with the enclave using a cryptographic hardware identity provided by a microcontroller of the PCIe accelerator apparatus; and during the cryptographic session: retrieving encrypted one or both of code or data from the enclave; decrypting the encrypted one or both of code or data using a cryptographic engine, the cryptographic engine being separate from the one or more PCIe accelerators and arranged in a PCIe path of the one or more PCIe accelerators; retrieving the unencrypted one or both of code or data; processing the unencrypted one or both of code or data to generate results; encrypting the results using the cryptographic engine; and sending the encrypted results back to the enclave.
CROSS-REFERENCE TO RELATED APPLICATION

The present application is a national phase entry under 35 U.S.C. § 371 of International Application No. PCT/US2018/042695, filed Jul. 18, 2018, which claims the benefit of the filing date of U.S. Provisional Application No. 62/664,438, filed Apr. 30, 2018 and U.S. Provisional Application No. 62/672,680, filed May 17, 2018, the disclosures of which are hereby incorporated herein by reference.

PCT Information
Filing Document Filing Date Country Kind
PCT/US2018/042695 7/18/2018 WO
Publishing Document Publishing Date Country Kind
WO2019/212581 11/7/2019 WO A
US Referenced Citations (182)
Number Name Date Kind
5970250 England et al. Oct 1999 A
6779112 Guthery Aug 2004 B1
6938159 O'Connor et al. Aug 2005 B1
7624433 Clark et al. Nov 2009 B1
7661131 Shaw et al. Feb 2010 B1
7788711 Sun et al. Aug 2010 B1
7904940 Hernacki et al. Mar 2011 B1
8145909 Agrawal et al. Mar 2012 B1
8155036 Brenner et al. Apr 2012 B1
8220035 Pravetz et al. Jul 2012 B1
8245285 Ravishankar et al. Aug 2012 B1
8353016 Pravetz et al. Jan 2013 B1
8442527 Machiraju et al. May 2013 B1
8631486 Friedman et al. Jan 2014 B1
9246690 Roth et al. Jan 2016 B1
9251047 McKelvie Feb 2016 B1
9374370 Bent, II et al. Jun 2016 B1
9444627 Varadarajan et al. Sep 2016 B2
9444948 Ren et al. Sep 2016 B1
9460077 Casey Oct 2016 B1
9531727 Himberger et al. Dec 2016 B1
9584517 Roth et al. Feb 2017 B1
9615253 Osborn Apr 2017 B1
9710748 Ross et al. Jul 2017 B2
9735962 Yang Aug 2017 B1
9754116 Roth et al. Sep 2017 B1
9779352 Hyde et al. Oct 2017 B1
9940456 Nesher et al. Apr 2018 B2
10007464 Blinzer Jun 2018 B1
10021088 Innes et al. Jul 2018 B2
10375177 Bretan Aug 2019 B1
10382202 Ohsie et al. Aug 2019 B1
10530777 Costa Jan 2020 B2
10685131 Lunsford et al. Jun 2020 B1
11210412 Ghetti et al. Dec 2021 B1
20030005118 Williams Jan 2003 A1
20030074584 Ellis Apr 2003 A1
20030200431 Stirbu Oct 2003 A1
20030202523 Buswell et al. Oct 2003 A1
20030236975 Birk et al. Dec 2003 A1
20040030764 Birk et al. Feb 2004 A1
20040093522 Bruestle et al. May 2004 A1
20050050316 Peles Mar 2005 A1
20050144437 Ransom et al. Jun 2005 A1
20050166191 Kandanchatha et al. Jul 2005 A1
20050198622 Ahluwalia et al. Sep 2005 A1
20050204345 Rivera et al. Sep 2005 A1
20060021004 Moran et al. Jan 2006 A1
20060021010 Atkins et al. Jan 2006 A1
20060095969 Portolani et al. May 2006 A1
20060130131 Pai et al. Jun 2006 A1
20060174037 Bernardi et al. Aug 2006 A1
20060225055 Tieu Oct 2006 A1
20060274695 Krishnamurthi et al. Dec 2006 A1
20080005573 Morris et al. Jan 2008 A1
20080005798 Ross Jan 2008 A1
20080091948 Hofmann et al. Apr 2008 A1
20080244257 Vaid et al. Oct 2008 A1
20090077655 Sermersheim et al. Mar 2009 A1
20090228951 Ramesh et al. Sep 2009 A1
20100150352 Mansour et al. Jun 2010 A1
20100313246 Irvine et al. Dec 2010 A1
20110075652 Ogura Mar 2011 A1
20110113244 Chou et al. May 2011 A1
20110225559 Nishide Sep 2011 A1
20120151568 Pieczul et al. Jun 2012 A1
20120159184 Johnson et al. Jun 2012 A1
20120272217 Bates Oct 2012 A1
20120275601 Matsuo Nov 2012 A1
20130111549 Sowatskey et al. May 2013 A1
20130125197 Pravetz et al. May 2013 A1
20130159726 McKeen et al. Jun 2013 A1
20130166907 Brown et al. Jun 2013 A1
20130247164 Hoggan Sep 2013 A1
20130312074 Sarawat et al. Nov 2013 A1
20130312117 Sapp, II et al. Nov 2013 A1
20140047532 Sowatskey Feb 2014 A1
20140086406 Polzin et al. Mar 2014 A1
20140089617 Polzin et al. Mar 2014 A1
20140089712 Machnicki et al. Mar 2014 A1
20140157410 Dewan et al. Jun 2014 A1
20140189246 Xing et al. Jul 2014 A1
20140189326 Leslie et al. Jul 2014 A1
20140267332 Chhabra et al. Sep 2014 A1
20140281544 Paczkowski et al. Sep 2014 A1
20140281560 Ignatchenko et al. Sep 2014 A1
20140297962 Rozas et al. Oct 2014 A1
20140337983 Kang et al. Nov 2014 A1
20150007291 Miller Jan 2015 A1
20150033012 Scarlata et al. Jan 2015 A1
20150089173 Chhabra et al. Mar 2015 A1
20150131919 Dewangan et al. May 2015 A1
20150178226 Scarlata et al. Jun 2015 A1
20150200949 Willhite et al. Jul 2015 A1
20150249645 Sobel et al. Sep 2015 A1
20150281279 Smith et al. Oct 2015 A1
20160036786 Gandhi Feb 2016 A1
20160080379 Saboori et al. Mar 2016 A1
20160110540 Narendra Trivedi et al. Apr 2016 A1
20160134599 Ross et al. May 2016 A1
20160147979 Kato May 2016 A1
20160171248 Nesher et al. Jun 2016 A1
20160173287 Bowen Jun 2016 A1
20160179702 Chhabra et al. Jun 2016 A1
20160188350 Shah et al. Jun 2016 A1
20160188873 Smith et al. Jun 2016 A1
20160188889 Narendra Trivedi et al. Jun 2016 A1
20160219044 Karunakaran et al. Jul 2016 A1
20160219060 Karunakaran et al. Jul 2016 A1
20160226913 Sood et al. Aug 2016 A1
20160283411 Sheller et al. Sep 2016 A1
20160285858 Li et al. Sep 2016 A1
20160316025 Lloyd et al. Oct 2016 A1
20160330301 Raindel Nov 2016 A1
20160350534 Poornachandran et al. Dec 2016 A1
20160359965 Murphy et al. Dec 2016 A1
20160366123 Smith et al. Dec 2016 A1
20160371540 Pabbichetty Dec 2016 A1
20160380985 Chhabra et al. Dec 2016 A1
20170093572 Hunt et al. Mar 2017 A1
20170126660 Brannon May 2017 A1
20170126661 Brannon May 2017 A1
20170185766 Narendra Trivedi et al. Jun 2017 A1
20170201380 Schaap et al. Jul 2017 A1
20170223080 Velayudhan et al. Aug 2017 A1
20170256304 Poornachandran et al. Sep 2017 A1
20170264643 Bhuiyan et al. Sep 2017 A1
20170286320 Chhabra et al. Oct 2017 A1
20170286721 King Oct 2017 A1
20170331815 Pawar et al. Nov 2017 A1
20170338954 Yang et al. Nov 2017 A1
20170346848 Smith et al. Nov 2017 A1
20170353319 Scarlata et al. Dec 2017 A1
20170364908 Smith et al. Dec 2017 A1
20170366359 Scarlata et al. Dec 2017 A1
20180059917 Takehara Mar 2018 A1
20180069708 Thakore Mar 2018 A1
20180089468 Rozas et al. Mar 2018 A1
20180097809 Chakrabarti et al. Apr 2018 A1
20180113811 Xing Apr 2018 A1
20180114012 Sood et al. Apr 2018 A1
20180114013 Sood Apr 2018 A1
20180145968 Rykowski et al. May 2018 A1
20180183578 Chakrabarti et al. Jun 2018 A1
20180183580 Scarlata et al. Jun 2018 A1
20180183586 Bhargav-Spantzel et al. Jun 2018 A1
20180191695 Lindemann Jul 2018 A1
20180210742 Costa Jul 2018 A1
20180211018 Yang Jul 2018 A1
20180212971 Costa Jul 2018 A1
20180212996 Nedeltchev et al. Jul 2018 A1
20180213401 Yang Jul 2018 A1
20180232517 Roth et al. Aug 2018 A1
20180234255 Fu Aug 2018 A1
20180278588 Cela Sep 2018 A1
20180285560 Negi et al. Oct 2018 A1
20180295115 Kumar et al. Oct 2018 A1
20180300556 Varerkar Oct 2018 A1
20180337920 Stites et al. Nov 2018 A1
20180349649 Martel et al. Dec 2018 A1
20180351941 Chhabra Dec 2018 A1
20190026234 Harnik et al. Jan 2019 A1
20190028460 Bhargava et al. Jan 2019 A1
20190034617 Scarlata et al. Jan 2019 A1
20190044724 Sood Feb 2019 A1
20190044729 Chhabra et al. Feb 2019 A1
20190050551 Goldman-Kirst et al. Feb 2019 A1
20190058577 Bowman et al. Feb 2019 A1
20190058696 Bowman et al. Feb 2019 A1
20190103074 Chhabra et al. Apr 2019 A1
20190109877 Samuel et al. Apr 2019 A1
20190147188 Benaloh et al. May 2019 A1
20190149531 Kakumani et al. May 2019 A1
20190158474 Kashyap et al. May 2019 A1
20190163898 Clebsch et al. May 2019 A1
20190197231 Meier Jun 2019 A1
20190245882 Kesavan et al. Aug 2019 A1
20190253256 Saab et al. Aug 2019 A1
20190258811 Ferraiolo et al. Aug 2019 A1
20190310862 Mortensen et al. Oct 2019 A1
20200233953 Palsson et al. Jul 2020 A1
20200387470 Cui Dec 2020 A1
Foreign Referenced Citations (16)
Number Date Country
1685297 Oct 2005 CN
103826161 May 2014 CN
106997438 Aug 2017 CN
107111715 Aug 2017 CN
107787495 Mar 2018 CN
107851162 Mar 2018 CN
1757067 Feb 2007 EP
0143344 Jun 2001 WO
2012082410 Jun 2012 WO
2014062618 Apr 2014 WO
2014158431 Oct 2014 WO
2014196966 Dec 2014 WO
2015066028 May 2015 WO
2015094261 Jun 2015 WO
2016209526 Dec 2016 WO
2018027059 Feb 2018 WO
Non-Patent Literature Citations (22)
Entry
M. M. Ozdal, “Emerging Accelerator Platforms for Data Centers,” in IEEE Design & Test, vol. 35, No. 1, pp. 47-54, Feb. 2018 (Year: 2018).
First Examination Report for Indian Patent Application No. 202047046246 dated Dec. 6, 2021. 5 pages.
International Preliminary Report on Patentability for International Application No. PCT/US2018/042695 dated Nov. 12, 2020. 9 pages.
International Preliminary Report on Patentability for International Application No. PCT/US2018/042625 dated Nov. 12, 2020. 12 pages.
International Preliminary Report on Patentability for International Application No. PCT/US2018/042684 dated Nov. 12, 2020. 12 pages.
The Next Platform, retrieved from https://www.nextplatform.com/2017/04/05/first-depth-look-googles-tpu-architecture/ (2017).
Google Cloud Platform, retrieved from https://cloudplatform.googleblog.com/2017/08/Titan-in-depth-security-in-plaintext.html (2017).
Jouppi, N., et al., “In-Datacenter Performance Analysis of a Tensor Processing Unit”, 44th International Symposium on Computer Architecture (ISCA), (2017) 17 pgs.
McKeen, Frank, et al., “Intel® Software Guard Extensions (Intel® SGX) Support for Dynamic Memory Management Inside an Enclave”, Intel Corporation (2016) 9 pgs.
“Proceedings of USENIX ATC '17 2017 USENIX Annual Technical Conference”, USENIX, USENIX, The Advanced Computing Systems Association, Jul. 12, 2017 (Jul. 12, 2017), pp. 1-811, XP061023212, [retrieved on Jul. 12, 2017].
International Search Report and Written Opinion for PCT Application No. PCT/US2018/042695, dated Oct. 4, 2018. 16 pages.
Invitation to Pay Additional Fees, Communication Relating to the Results of the Partial International Search, and Provisional Opinion Accompanying the Partial Search Result for PCT Application No. PCT/US2018/042625, dated Oct. 8, 2018. 14 pages.
Invitation to Pay Additional Fees, Communication Relating to the Results of the Partial International Search, and Provisional Opinion Accompanying the Partial Search Result for PCT Application No. PCT/US2018/042684, dated Oct. 10, 2018. 14 pages.
International Search Report and Written Opinion for PCT Application No. PCT/US2018/042625, dated Nov. 30, 2018. 19 pages.
International Search Report and Written Opinion for PCT Application No. PCT/US2018/042684, dated Dec. 5, 2018. 19 pages.
First Examination Report for Indian Patent Application No. 202047046281 dated Dec. 7, 2021. 6 pages.
First Examination Report for Indian Patent Application No. 202047046012 dated Sep. 17, 2021. 7 pages.
Office Action for European Patent Application No. 18750026.9 dated May 9, 2022. 6 pages.
Office Action for European Patent Application No. 18753285.8 dated Jul. 5, 2022. 8 pages.
Extended European Search Report for European Patent Application No. 22207041.9 dated Feb. 10, 2023. 8 pages.
Office Action for Chinese Patent Application No. 201880092471.6 dated Aug. 31, 2023. 10 pages.
Office Action for Chinese Patent Application No. 201880092577.6 dated Sep. 11, 2023. 8 pages.
Related Publications (1)
Number Date Country
20210034788 A1 Feb 2021 US
Provisional Applications (2)
Number Date Country
62672680 May 2018 US
62664438 Apr 2018 US