CONFIDENTIAL COMPUTING ENVIRONMENT FOR SERVICE MESH ON A NETWORK INTERFACE DEVICE

Information

  • Patent Application
  • Publication Number
    20220329573
  • Date Filed
    June 21, 2022
  • Date Published
    October 13, 2022
Abstract
Examples described herein relate to executing a service mesh in a trust domain in a network interface device and executing one or more services in a second trust domain in one or more devices. In some examples, the network interface device is configured to determine trust domain capabilities of the network interface device and provide the trust domain capabilities based on a query.
Description
BACKGROUND

Confidential Computing (CC) allows customers to protect their code and/or data while executing workloads in third party computing environments (e.g., hosted clouds). Hardware-based mediation enclaves or Trusted Execution Environments (TEEs) are available on compute nodes and include Advanced Micro Devices, Inc. (AMD) Secure Encrypted Virtualization-Encrypted State (SEV-ES), Intel® Software Guard Extensions (SGX), Intel® Trust Domain Extensions (TDX), ARM TrustZone, and ARM Confidential Compute Architecture (CCA), among others.


Service Mesh (SM) is a cloud native software paradigm that enables application developers to develop applications with a service mesh that provides underlying Mutual Transport Layer Security (mTLS) security, load balancing, scale-out, scale-up, scale-down, and other miscellaneous functionality. A service mesh can include an infrastructure layer for facilitating service-to-service communications between microservices using application programming interfaces (APIs). A service mesh can be implemented using a proxy instance (e.g., sidecar) to manage service-to-service communications. Some network protocols used by microservice communications include Layer 7 protocols, such as Hypertext Transfer Protocol (HTTP), HTTP/2, remote procedure call (RPC), gRPC, Kafka, MongoDB wire protocol, and so forth. Envoy Proxy is a well-known data plane for a service mesh.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 depicts an example system.



FIG. 2 depicts an example system.



FIG. 3 depicts an example system.



FIG. 4 illustrates an example of per-trusted domain resource and application identifiers.



FIG. 5 depicts an example deployment.



FIG. 6 depicts an example process.



FIG. 7 depicts an example system.



FIG. 8 depicts an example system.





DETAILED DESCRIPTION

Infrastructure Processing Units (IPUs) can include network interface devices that perform networking control plane functions such as secure tunneling, inline IPsec, or Transport Layer Security (TLS). IPUs can include a set of independent computing cores that enable execution of infrastructure networking operations. Customers' confidential computing workloads can run as microservices on central processing units (CPUs) or other processing units (e.g., XPUs), using a cloud native service mesh for communication among workloads. Service mesh components can execute on computing cores of IPUs.


To provide confidentiality for data and communications between microservices or workloads running on CPUs and corresponding service meshes executing on IPUs, a security architecture of a platform can provide a confidential compute environment for multi-tenant systems. For example, a service mesh (SM) can be executed in the Infrastructure Processing Units (IPUs). A confidential compute environment, domain, or enclave can provide cryptographically protected memory that stores code and data at execution time, memory management, access controls, and trusted input/output (I/O) for microservices and service meshes. A microservice executing in a confidential compute environment can use platform security primitives to communicate with a corresponding service mesh running in a confidential compute environment on the IPU or other network interface device. Attestation can be used to provide proof to a workload owner of the integrity and secure execution environments of the microservice workload and service mesh, and of the binding between them. Cryptographic protections and access controls can provide a scalable security framework to execute different tenants' workloads and utilize a service mesh executed by IPUs. Services and service meshes can be executed in a Trust Framework, Information Integrity, or Trust as a Service (TaaS) system, solution, or platform.



FIG. 1 depicts an example system. Multiple different tenants can utilize a platform whereby different tenants can execute services in trusted computing environments that communicate with service mesh components executing, in trusted computing environments, on network interface devices (NIDs). Data plane 100 can include data processing operations by platforms 102-0 and 102-1. Although two platforms are shown, fewer or more than two platforms are available for use. Platforms 102-0 and 102-1 can be implemented as compute cores 104-0 and 104-1 communicatively coupled to NIDs 100-0 and 100-1 via respective device interfaces 108-0 and 108-1. A network interface device can refer to one or more of: a network interface controller (NIC), a remote direct memory access (RDMA)-enabled NIC, SmartNIC, router, switch, forwarding element, infrastructure processing unit (IPU), or data processing unit (DPU); as well as physical or virtualized instances of the same.


In some examples, one or more of compute core 104-0 or 104-1 and/or NID 100-0 or 100-1 can be queried by a controller to provide trust domain capabilities or properties that are utilized or capable of being utilized in compute core 104-0 or 104-1 and/or NID 100-0 or 100-1. A controller can include an operating system (OS), hypervisor, orchestrator, or administrator. Based on trust domain capabilities or properties utilized or capable of being utilized in compute core 104-0 or 104-1 and/or NID 100-0 or 100-1, the controller can cause the trust domain with particular properties to be utilized in compute core 104-0 or 104-1 and/or NID 100-0 or 100-1 and cause execution of a service or service mesh in such trust domain in compute core 104-0 or 104-1 and/or NID 100-0 or 100-1.
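
For illustration, the following Python sketch models this query-and-select flow. The TrustDomainCapabilities structure, device names, and fields are hypothetical stand-ins for a capability report, not an interface defined by this disclosure.

    from dataclasses import dataclass

    @dataclass
    class TrustDomainCapabilities:
        device: str              # e.g., "compute-core-104-0" or "NID-100-0"
        supported_domains: set   # e.g., {"TDX", "SGX", "SEV-ES"}
        memory_encryption: bool  # device supports encrypted memory

    def select_device(reports, required_domain):
        """Return the first device whose report advertises the required
        trust domain type and memory encryption."""
        for report in reports:
            if required_domain in report.supported_domains and report.memory_encryption:
                return report
        return None

    # A controller (e.g., OS, hypervisor, or orchestrator) gathers reports
    # by querying each device, then causes a service or service mesh to be
    # executed in the chosen trust domain.
    reports = [
        TrustDomainCapabilities("NID-100-0", {"TDX", "SGX"}, True),
        TrustDomainCapabilities("compute-core-104-0", {"SEV-ES"}, True),
    ]
    chosen = select_device(reports, "TDX")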


Platforms 102-0 and 102-1 can be implemented as integrated devices whereby compute cores and NIDs are formed on a same system on chip (SoC) or integrated circuitry. Platforms 102-0 and 102-1 can be implemented as disintegrated devices whereby compute cores and NIDs are formed on different system on chip (SoC) or integrated circuitry devices that are communicatively coupled using an interface or interconnect.


Device interface 108-0 and 108-1 can provide encrypted communications based on Peripheral Component Interconnect Express (PCIe), Compute Express Link (CXL), Universal Chiplet Interconnect Express (UCIe), or other connection technologies described herein. See, for example, Peripheral Component Interconnect Express (PCIe) Base Specification 1.0 (2002), as well as earlier versions, later versions, and variations thereof. See, for example, Compute Express Link (CXL) Specification revision 2.0, version 0.7 (2019), as well as earlier versions, later versions, and variations thereof. See, for example, UCIe 1.0 Specification (2022), as well as earlier versions, later versions, and variations thereof. In some examples, device interface 108-0 and 108-1 can utilize encrypted communications such as PCIe Integrity and Data Encryption (IDE).


An orchestrator (e.g., Kubernetes) can bind each control element (e.g., 5G PCF, AMF, etc.) to its corresponding service mesh capabilities.


Service A can utilize service mesh A to receive application program interface (API) communications from another service and to communicate with service B (or other service) via service mesh B. For example, services A and B can be implemented as one or more of: microservices, virtual machines (VMs), microVMs, containers, processes, threads, or other virtualized execution environments. Similarly, service B can utilize service mesh B to receive API communications from another service and to communicate with service A (or other service).


Service A can execute in confidential compute trust enclave 106-0 or trust domain in compute core 104-0. Service mesh A can execute in confidential compute trust enclave 106-1 or trust domain in network interface device 100-0. Service B can execute in confidential compute trust enclave 106-2 in compute core 104-1. Service mesh B can execute in confidential compute trust enclave 106-3 in network interface device 100-1. Confidential compute environments or trust domains can include or utilize one or more of: Intel® Trust Domain Extensions (TDX), Intel® SGX Gramine, Intel® SGX Key Management Reference Application, AMD Secure Memory Encryption (SME) and Secure Encrypted Virtualization (SEV), AMD® SEV-ES, ARM® CCA, AMD Memory Encryption Technology, ARM® TrustZone, total memory encryption (TME), multi-key total memory encryption (MKTME), Double Data Rate (DDR) encryption, function as a service (FaaS) container encryption or an enclave/TD (trust domain), Apple Secure Enclave Processor, or Qualcomm® Trusted Execution Environment.


Encryption or decryption can use, for example, total memory encryption (TME) and multi-key total memory encryption (MKTME) commercially available from Intel Corporation, as described in the Intel Architecture Memory Encryption Technologies Specification version 1.1 dated Dec. 17, 2017 and later revisions, the components that make up TME and MKTME, the manner in which TME and MKTME operate, and so forth. These technologies are referenced to provide a readily comprehensible perspective for understanding the various disclosed embodiments and are not intended to limit implementations to employing only TME and MKTME. TME provides a scheme to encrypt data at memory interfaces whereby a memory controller encrypts data flowing to memory and decrypts data flowing from memory, providing plain text for internal consumption by the processor.


In some examples, TME is a technology that encrypts a device's entire memory or a portion of memory with a key. When enabled via basic input/output system (BIOS) (or Universal Extensible Firmware Interface (UEFI), or a boot loader) configuration, TME can provide for memory accessed by a processor on an external memory bus to be encrypted, including customer credentials, encryption keys, and other intellectual property (IP) or personal information. TME supports a variety of encryption algorithms and in one embodiment may use a National Institute of Standards and Technology (NIST) encryption standard for storage, such as the Advanced Encryption Standard (AES) XTS algorithm with 128-bit keys. The encryption key used for memory encryption is generated using a hardened random number generator in the processor and is never exposed to software. Data is encrypted in memory and on the external memory buses and is in plain text only while inside the processor circuitry. This allows existing software to run unmodified while protecting memory using TME. There may be scenarios where it would be advantageous to not encrypt a portion of memory, so TME allows the BIOS (or UEFI or boot loader) to specify a physical address range of memory to remain unencrypted. Software running on a TME-capable system can access portions of memory that are not encrypted by TME.
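
As a software illustration of the AES XTS operation described above, the sketch below encrypts a 64-byte block using the third-party Python cryptography package. Real TME keys are generated in hardware and never exposed to software, so the key and tweak here are stand-ins for demonstration only.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)        # AES-128-XTS: two 128-bit keys concatenated
    tweak = os.urandom(16)      # in TME-like schemes, tied to the data's location

    plaintext = os.urandom(64)  # data as the processor sees it (plain text)
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()  # as stored in memory

    decryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext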


In some embodiments, TME can support multiple encryption keys (Multi-Key TME (MKTME)) and provides the ability to specify the use of a specific key for a page of memory. This architecture allows either processor-generated keys or tenant-provided keys, giving full flexibility to customers. Services can be cryptographically isolated from each other in memory with separate encryption keys which can be used in multi-tenant cloud environments. Services can also be pooled to share an individual key, further extending scale and flexibility.
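
The following toy model (not hardware-accurate) illustrates the MKTME idea of tagging pages with key identifiers so that the memory controller selects a per-tenant or per-page key on each access; all names and structures are illustrative assumptions.

    import os

    class MktmeModel:
        """Toy model: KeyIDs map to keys; page frames map to KeyIDs."""
        def __init__(self):
            self.keys = {}        # KeyID -> AES key
            self.page_keyid = {}  # page frame number -> KeyID

        def program_key(self, keyid, key=None):
            # Accepts a tenant-provided key or generates a processor key.
            self.keys[keyid] = key if key is not None else os.urandom(32)

        def assign_page(self, pfn, keyid):
            self.page_keyid[pfn] = keyid

        def key_for_access(self, pfn):
            # A memory controller would use this key to encrypt/decrypt.
            return self.keys[self.page_keyid[pfn]]

    model = MktmeModel()
    model.program_key(1)                  # tenant A: processor-generated key
    model.program_key(2, os.urandom(32))  # tenant B: tenant-provided key
    model.assign_page(0x1000, 1)
    model.assign_page(0x2000, 2)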


Confidential compute environments can be configured and deployed in multi-tenant environments by control plane interface 110. Configuration of confidential compute environments can include determining if a platform is capable of creating confidential compute environments, determining devices to provide confidential compute environments, provisioning keys for encryption and decryption operations, provisioning certificates to devices and software, and so forth. Keys can be used for software or device attestation. Certificates can be used for software or device authentication.


A service mesh can be created and deployed by a CSP, content delivery network (CDN), or a Communications Service Provider (CoSP) and made available for use by one or more tenants. These tenants can be other CoSPs, roaming-agreement-based Mobile Virtual Network Operators (MVNOs), Secure Access Service Edge (SASE) vendors (ZScaler, etc.), or other tenants that pay for service mesh services. Security attestation can be provided by the CSP or CoSP to the tenant, and the tenant can verify the authenticity of the service mesh enclave or Trust Domain (TD) with the CSP/CoSP platform and service mesh software bindings, as described herein. Tenants can execute microservices and use an infrastructure provider's service mesh running on the infrastructure owner's network interface devices in a trust enclave or trust domain. However, the CSP/CoSP may offer the tenant the ability to run the tenant's own service mesh on the CSP/CoSP network interface devices. In this scenario, attestation can be verified on the basis of the tenant-provided service mesh software bindings running on the CSP/CoSP infrastructure.



FIG. 2 illustrates a platform design view. A disaggregated system architecture can include one or more processors 200-0 to 200-X (where X is an integer), one or more on-SoC or discrete accelerators 204, and storage or memory 206 connected by a device interface or connection 210 to one or more network interface devices 202. The system can scale out by adding additional devices such as CPUs and accelerators (e.g., media decoding or encoding, Intel® QuickAssist Technology, Intel® Data Streaming Accelerator (DSA), etc.).


Connection 210 can include one or more interconnected links between devices, can be based on PCIe, and can run CXL.mem, CXL.io, and/or CXL.cache across these components. Connection 210 can include bridges and PCIe peer-to-peer connectivity.


An orchestrator can create TD enclaves within one or more of: one or more processors 200-0 to 200-X or one or more on-SoC or discrete accelerators 204, with access to data (e.g., read or write) in storage 206. In some examples, applications (App.) (e.g., services) and service meshes can execute within TDs within one or more of: one or more processors 200-0 to 200-X or one or more on-SoC or discrete accelerators 204, and access data (e.g., read or write) in storage 206. For example, Tenant 1's application can execute within processors 200-0 and 200-X and accelerators 204 and access data in storage 206. For example, Tenant 1's service mesh can execute within network interface device 202 and accelerators 204 and access data in storage 206. As another example, tenant N-2's application can execute within processor 200-0 and access data in storage 206, and tenant N-2's service mesh can execute within network interface device 202 and accelerators 204 and access data in storage 206. A tenant can include a third party that rents or pays for use of hardware and software services from a CSP or CoSP.


Before an application or service mesh can be executed in one or more TDs, attestation of the one or more TDs can take place. For example, security agent 250 (e.g., a platform root of trust such as Intel Platform Firmware Resilience (PFR), Google Titan, etc.) can provide a single immutable source of attestation of TDs on the disaggregated platform. Security agent 250 can be in network interface device 202 or part of one or more processors 200-0 to 200-X. Security agent 250 could operate based on Distributed Management Task Force Security Protocol and Data Model (DMTF SPDM), Trusted Computing Group (TCG) Device Identifier Composition Engine (TCG DICE), or a proprietary algorithm. Security agent 250 can attest software code by comparing code signed with a key against reference code signed with the key to determine whether there is a match. If there is a match, security agent 250 can attest the code. Security agent 250 could attest a device by collecting a device security version (SVN) or device microcode or firmware signed with a key and comparing the SVN or device microcode or firmware signed with the key against a reference SVN or device microcode or firmware signed with the key. If there is a match, security agent 250 can attest the device.
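
A minimal sketch of the measurement-comparison step, assuming an Ed25519 vendor signing key and SHA-384 measurements (both assumptions for illustration); a real security agent would follow SPDM or DICE flows with hardware-protected keys.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vendor_key = Ed25519PrivateKey.generate()  # stand-in for a vendor signing key
    reference_code = b"\x90" * 4096            # code/firmware image to attest
    reference_sig = vendor_key.sign(hashlib.sha384(reference_code).digest())

    def attest(blob, ref_sig, vendor_public_key):
        """Pass only if the blob's measurement matches the signed reference."""
        try:
            vendor_public_key.verify(ref_sig, hashlib.sha384(blob).digest())
            return True
        except InvalidSignature:
            return False

    assert attest(reference_code, reference_sig, vendor_key.public_key())
    assert not attest(b"tampered", reference_sig, vendor_key.public_key())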


Security agent 250 can authenticate devices that communicate, and encrypted communications can be used for microservice-to-microservice communication links and microservice-to-service mesh communication links. Devices and/or software (e.g., services or microservices) can utilize keys to encrypt communications transmitted to other devices and decrypt communications received from other devices. An infrastructure owner, security agent 250, or a root of trust can configure per-device or per-software (e.g., services or microservices) private keys (e.g., Rivest-Shamir-Adleman (RSA) or Elliptic Curve Digital Signature Algorithm (ECDSA)). The initial boot ROM microcode or firmware can be authenticated as part of mutual authentication.


After attestation and authentication of devices (e.g., one or more processors 200-0 to 200-X, one or more on-SoC or discrete accelerators 204, or storage 206), devices can perform a mutually acceptable key derivation handshake to provide mutually cryptographically independent sets of keys used for confidentiality (e.g., data encryption or decryption), integrity, and replay protection of the links used to transmit data between devices. Keys can be generated per tenant in some examples so that multiple applications and service meshes executed for a tenant share use of the same keys. In some examples, keys can be utilized to encrypt data per tenant in storage 206 (e.g., caches, memory, persistent memory, near and far memory, and others).
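
A sketch of the post-authentication key derivation, assuming an X25519 exchange and HKDF labels to split confidentiality, integrity, and replay-protection keys per tenant; the actual handshake, labels, and key lengths in a deployment would differ.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import x25519
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each authenticated device contributes a key share; both sides compute
    # the same shared secret.
    device_a = x25519.X25519PrivateKey.generate()
    device_b = x25519.X25519PrivateKey.generate()
    shared_secret = device_a.exchange(device_b.public_key())

    def derive_key(purpose, tenant):
        """Derive a cryptographically independent key per purpose and tenant."""
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=purpose + b"/" + tenant.encode()).derive(shared_secret)

    confidentiality_key = derive_key(b"confidentiality", "tenant-1")
    integrity_key = derive_key(b"integrity", "tenant-1")
    replay_key = derive_key(b"replay-protection", "tenant-1")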


For example, the key derivation handshake can be based on PCIe Integrity and Data Encryption (IDE), where PCIe encrypted communications are made between TDs. Security bindings enable cryptographically isolated execution of applications and service meshes and communication among microservices and between microservices and service meshes. PCIe IDE can be extended for key derivations based on software instance specific process identifiers that are unique system wide to protect communications based on software identifiers. A software entity (e.g., application, container, microservice, service mesh, etc.) can be assigned a unique process identifier (e.g., PASID). A process identifier can be combined with key material to create PASID-application and PASID-service mesh instance keys. Accordingly, traffic on a link can be protected by encryption using a key based on a per-application-service mesh pair, which is more granular than per-tenant encryption keys. Security agent 250 can configure lanes for communication among TDs on different devices for per-tenant encrypted links (e.g., PCIe IDE with SPDM authorization). For example, communications between Tenant 1's application and Tenant 1's service mesh can occur using protected tunnel 212-0 that protects communications by one or more of: IDE, encryption and decryption using PASID-application and PASID-service mesh instance keys, or encryption and decryption using processor 200-0 and processors 203 keys.
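
The per-application-service mesh key idea can be sketched as folding both PASIDs into the derivation. HKDF below stands in for the PCIe IDE key derivation, which this example does not reproduce; the PASID values and label are illustrative.

    import os
    import struct
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    ide_key_material = os.urandom(32)  # would come from IDE key establishment

    def pair_key(application_pasid, service_mesh_pasid):
        """Derive a link key bound to one application/service mesh pair."""
        info = b"pasid-pair" + struct.pack(">II", application_pasid,
                                           service_mesh_pasid)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=info).derive(ide_key_material)

    tunnel_key = pair_key(application_pasid=0x11, service_mesh_pasid=0x42)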


Security agent 250 can be implemented as platform firmware or privileged microcode executed by a processor. Examples of processor-executed microcode include Intel SGX or TDX microcode, AMD-SEV microcode, and others. Microcode can be signed, verified, and protected with anti-rollback protection using hardware fuse mechanisms. For example, security agent 250 can cause separation of traffic between different lanes by programming a root complex, input-output memory management unit (IOMMU), and accelerator input output (IO) stacks to configure access controls to lanes.



FIG. 3 illustrates orchestration to deploy and execute applications and service meshes in trust domains or enclaves. Application and service mesh deployments can be provided at least for CoSPs, CSPs, multi-access edge computing (MEC), edge computing, 5G deployments, 6G deployments, in data centers, among others. For example, tenants A and B can deploy sets of microservices and a service mesh (e.g., Envoy sidecar) on a target platform. The platform is a disaggregated and scalable system and operates as a cloud platform inside a CSP/CoSP environment.


Tenant A can utilize tenant orchestrator-A to specify security policy for applications and service mesh, perform attestation verification of devices (e.g., platform 302-0 to 302-N) and links (e.g., secure input/output (I/O) and links 304-0 to 304-N) that provide communication among applications and between applications and service mesh. Tenant B can utilize tenant orchestrator-B in a similar manner as that of orchestrator-A for applications and service mesh performed for tenant B.


A tenant can utilize a security controller (e.g., virtual network functions (VNF), cloud-native network function (CNF), service mesh) or platform root of trust (PRoT) to attest and authenticate devices and software. A security controller can operate as defined by the ETSI NFV SEC013 specification. A security controller can perform attestation of TDs.


Tenant A can utilize a virtual workload (WL) infrastructure (Infra) manager (VIM) such as Kubernetes to deploy workloads (e.g., microservices) on one or more of platforms 302-0 to 302-N. Similarly, tenant B can utilize a virtualized WL Infra manager to deploy workloads (e.g., microservices) on one or more of platforms 302-0 to 302-N. A VIM can operate in accordance with the ETSI NFV defined Virtual Infra-Manager (VIM) (e.g., ETSI GS NFV-MAN 001 V1.1.1 (2014-12)) and include Kubernetes APIs for attestation and TD instantiation and application or SM deployment within TDs. For example, a secure encrypted WL repository can store encrypted microservices, encrypted VM images, encrypted containers, and so forth, for execution in a TD.


Tenant orchestrator-A and tenant orchestrator-B can communicate with CSP/CoSP orchestrator 310 to convey a set of policies and connectivity preferences or requirements for a tenant. CSP/CoSP orchestrator 310 can deploy those policies on the platforms to provide TDs and connectivity within secure I/O and links 304-0 for tenants A and B. For example, policies can specify a particular TD type to use (e.g., SGX, TDX, MKTME, TrustZone, and so forth) and whether to select an encrypted image for execution in the TD.
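
A hedged illustration of the shape such a policy might take when conveyed to the CSP/CoSP orchestrator; the field names are assumptions for illustration rather than a schema from this description.

    # Hypothetical tenant policy: trust-domain technology, image handling,
    # and connectivity preferences conveyed to the CSP/CoSP orchestrator.
    tenant_policy = {
        "tenant": "A",
        "trust_domain": "TDX",    # e.g., SGX, TDX, MKTME, TrustZone
        "encrypted_image": True,  # select an encrypted image for the TD
        "connectivity": {
            "peers": ["service-mesh-A"],
            "link_protection": "pcie-ide",
        },
    }

    # The orchestrator would translate this into TD creation and secure
    # link configuration on the selected platforms.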


Security orchestrator 320 can perform remote attestation for tenants to assure system security. Attestation primitives could include Intel® SGX DCAP (Data Center Attestation Primitives), Intel® Attestation services on TDX, or Intel® S3M-based attestation. Attestation can be performed by one or more devices rooted in hardware.


CSP/CoSP tenant workload orchestrator 310 can deploy tenant workloads on platforms and set up TDs per tenant policy as well as attest the TDs (e.g., indicate a TD exists). For example, orchestrator 310 and/or orchestrator 300 can set up trust domains on platform devices such as accelerators, CPUs, GPUs, and network interface devices (NIDs) for utilization by microservices and service meshes executed on behalf of tenants A and B. Orchestrator 300 can set up encrypted communications using secure input/output (I/O) and links among devices to provide encrypted communications among devices, among services, and between a service and a service mesh. A mesh of interconnected confidential computing services can be deployed on a platform and securely communicate via hardware protected channels. Multiple microservices can utilize a single service mesh. For instance, a tenant may deploy one instance of a service mesh (e.g., Envoy sidecar, Istio Agent, or Istio Certificate Manager) to be used by more than one set of independent tenant microservices.


Alternatively, the service mesh may be orchestrated and deployed by the CSP/CoSP on NIDs in TD environments and made available to the tenant(s). The service mesh executing on a NID can expose a set of interfaces for tenant microservices to access the service mesh.



FIG. 4 illustrates an example of per-trusted domain resource and application identifiers. For example, identifiers 402-0 to 402-N can identify domain identifiers (Domain ID), application identifiers (Appl. ID), bus identifiers (Bus ID), and device identifiers (DeviceID) for trusted domains environments in processors for tenants A-N. For example, identifiers 404-0 to 404-N can identify domain identifiers (Domain ID), application identifiers (Appl. ID), bus identifiers (Bus ID), and device identifiers (DeviceID) in network interface devices for tenants A-N.


In some examples, an application identifier (Appl. ID) can be a PASID of the application and the PASID can be assigned by an operating system (OS). The PASID can be used to cryptographically bind a trust domain with a specific instance of an application or service.


Domain identifiers can identify microservices running on CPUs and service mesh components running on network interface devices. Identifiers can include hardware-based identifiers (e.g., PCIe Address, bus, interface, etc.) and software-based identifiers (e.g., process address space identifiers (PASIDs)). In some cases, CoSP, CSP, Multi-access edge computing (MEC) or Edge Orchestrator, or security controller can assign identifiers to TDs running on CPUs and network interface devices.


A CoSP, CSP, MEC or edge orchestrator, or security controller can maintain bindings that indicate TDs of microservices and service meshes that are permitted to communicate. An orchestrator can maintain bindings per tenant. For example, a binding can indicate that TD Domain ID FF013 of a CPU can communicate with TD Domain ID FF023 of a network interface device.


Prior to connection of a compute TD with a network interface TD, attestation can be requested by a tenant. An attestation quote that includes the identifiers, signed by the enclave/TD platform keys, can be provided. An orchestrator, security controller, or trusted entity can verify that the identifiers are valid and allocated for use by a pair of a compute TD and a network interface device TD. Based on permitted pairing in CPU and NID interconnect 410, the tenant can be permitted to form a pairing of a compute TD with a network interface TD.
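
A minimal sketch of quote verification and pairing authorization, assuming an Ed25519 platform key and a JSON quote body; real attestation quotes (e.g., SGX or TDX quotes) have richer formats. The FF013/FF023 values mirror the binding example above.

    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    platform_key = Ed25519PrivateKey.generate()  # enclave/TD platform key stand-in
    quote_body = json.dumps({"cpu_td": "FF013", "nid_td": "FF023"}).encode()
    quote_sig = platform_key.sign(quote_body)

    permitted_pairs = {("FF013", "FF023")}  # from CPU and NID interconnect 410

    def authorize_pairing(body, signature, platform_public_key):
        """Verify the quote signature, then check the identifier pair."""
        try:
            platform_public_key.verify(signature, body)
        except InvalidSignature:
            return False
        ids = json.loads(body)
        return (ids["cpu_td"], ids["nid_td"]) in permitted_pairs

    assert authorize_pairing(quote_body, quote_sig, platform_key.public_key())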


Based on permitted pairing of a compute TD with a network interface TD, an orchestrator, security controller, or trusted entity can grant a mutual encryption key for a microservice and service mesh to access shared memory to read and decrypt data from the shared memory and/or encrypt and write data to the shared memory. Private memories can be protected from access by other trusted components. Pairings of a compute TD with a network interface TD can use system memory that stores confidential data accessible by decryption solely to the two communicating components (e.g., microservice and service mesh).


Protocols such as Mutual Transport Layer Security (mTLS), Inter-Process Communication (IPC), Google Remote Procedure Call (gRPC), etc., may be utilized for microservice-to-service mesh communications, in addition to hardware-secured channels.
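
For example, a microservice might open a mutually authenticated gRPC channel to its service mesh as sketched below; the credential file paths and the "service-mesh.local:443" endpoint are placeholder assumptions, and certificate provisioning (e.g., by an orchestrator) is out of scope here.

    import grpc

    # Placeholder credential files provisioned into the microservice's TD.
    with open("ca.pem", "rb") as f:
        trusted_ca = f.read()
    with open("client.key", "rb") as f:
        client_key = f.read()
    with open("client.pem", "rb") as f:
        client_chain = f.read()

    credentials = grpc.ssl_channel_credentials(
        root_certificates=trusted_ca,    # authenticate the service mesh
        private_key=client_key,          # present the microservice identity
        certificate_chain=client_chain,  # (mutual authentication)
    )
    channel = grpc.secure_channel("service-mesh.local:443", credentials)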


The 5G Service Based Architecture (SBA) allows 5G control and data plane software to communicate over a Service Based Interface (SBI). For example, 5G control and data plane components can utilize a cloud native service mesh for communication and be deployed using Kubernetes on the CPU. When 5G control plane elements (e.g., Session Management Function (SMF), etc.) run inside trusted domains/enclaves and use a service mesh running on the IPU, confidential computing security is provided for their workloads. FIG. 5 depicts an example deployment: a 5G/6G deployment SM architecture with CC on control and data plane functions. 5G, 6G, Secure Edge, Secure Access Service Edge (SASE), Zero Trust, and other modern security systems are based on a paradigm that allows applications to be securely filtered, identified, and connected with authorized services. Today those services run as monolithic VMs or application containers; this is already changing, with microservices becoming the core of the application and underlying networking and security tasks (e.g., mutual TLS, load balancing, scale out/up, etc.) managed by the service mesh, which acts as a common software substrate for all microservices.


Service meshes and 5G/6G control plane functions can be performed in TDs or enclaves. For example, one or more of the following can be executed as applications or microservices in TDs or enclaves: network resource management (e.g., Network Slice Selection Function (NSSF), Network Function Repository Function (NRF), network data analytics function (NWDAF)), signaling (e.g., Service Communication Proxy (SCP), Binding Support Function (BSF), Security Edge Protection Proxy (SEPP)), Application Function (AF), Network Exposure Function (NEF), policy (e.g., Charging Function (CHF), Policy Control Function (PCF)), packet controller (e.g., Core Access and Mobility Management Function (AMF), Short Message Service Function (SMSF), Session Management Function (SMF), UE radio Capability Management Function (UCMF)), location services (e.g., Gateway Mobile Location Center (GMLC), Location Management Function (LMF)), or subscriber management (e.g., 5G Equipment Identity Register (5G-EIR), Authentication Server Function (AUSF), Unified Data Management (UDM), Home Subscriber Server (HSS)). In addition, service meshes can be executed in TDs or enclaves. The service meshes can provide communicative coupling between microservices executing in TDs.



FIG. 6 depicts an example process. The process can be performed by an orchestrator, in some examples. At 602, a trust domain or confidential computing environment can be created in one or more devices. For example, a device can include a CPU, GPU, accelerator, network interface device, interconnect, storage, memory, memory pool, and so forth. Various examples of manners of forming a trust domain and encrypted communications between trust domains are described herein.


At 604, before a service can be executed in a trust domain, attestation of the trust domain can occur. For example, attestation of the trust domain can include identifying that an orchestrator created a trust domain in accordance with policies of a tenant. For example, a security controller can perform attestation of a trust domain.


At 606, based on attestation of a trust domain, a service or service mesh can be deployed for execution within the trust domain on the device. Thereafter, the service or service mesh can communicate with another service or service mesh in another trust domain using a secure communication such as encrypted communications.
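
The process of FIG. 6 can be summarized in a short sketch; the Device, TrustDomain, and attest stand-ins below are illustrative stubs keyed to steps 602-606, not an orchestration API from this disclosure.

    class TrustDomain:
        """Illustrative stub for a created trust domain."""
        def quote(self):
            return b"evidence"  # would be a signed attestation quote
        def launch(self, workload):
            print("launching", workload, "in trust domain")

    class Device:
        """Illustrative stub for a CPU, GPU, accelerator, or NID."""
        def create_trust_domain(self):
            return TrustDomain()

    def attest(evidence):
        # A security controller checks the TD against tenant policy.
        return evidence == b"evidence"

    def deploy_in_trust_domain(device, workload):
        td = device.create_trust_domain()  # 602: create the trust domain
        if not attest(td.quote()):         # 604: attest before execution
            raise RuntimeError("trust domain failed attestation")
        td.launch(workload)                # 606: deploy the service or mesh

    deploy_in_trust_domain(Device(), "service-mesh-A")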



FIG. 7 depicts an example computing system. Components of system 700 (e.g., processor 710, accelerators 742, network interface 750, memory subsystem 720, and so forth) can be configured to provide a trust domain or confidential computing environment for execution of a service or service mesh, as described herein. System 700 includes processor 710, which provides processing, operation management, and execution of instructions for system 700. Processor 710 can include any type of microprocessor, central processing unit (CPU), graphics processing unit (GPU), processing core, or other processing hardware to provide processing for system 700, or a combination of processors. Processor 710 controls the overall operation of system 700, and can be or include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.


In one example, system 700 includes interface 712 coupled to processor 710, which can represent a higher speed interface or a high throughput interface for system components that need higher bandwidth connections, such as memory subsystem 720 or graphics interface components 740, or accelerators 742. Interface 712 represents an interface circuit, which can be a standalone component or integrated onto a processor die. Where present, graphics interface 740 interfaces to graphics components for providing a visual display to a user of system 700. In one example, graphics interface 740 can drive a high definition (HD) display that provides an output to a user. High definition can refer to a display having a pixel density of approximately 100 PPI (pixels per inch) or greater and can include formats such as full HD (e.g., 1080p), retina displays, 4K (ultra-high definition or UHD), or others. In one example, the display can include a touchscreen display. In one example, graphics interface 740 generates a display based on data stored in memory 730 or based on operations executed by processor 710 or both.


Accelerators 742 can be a fixed function or programmable offload engine that can be accessed or used by a processor 710. For example, an accelerator among accelerators 742 can provide compression (DC) capability, cryptography services such as public key encryption (PKE), cipher, hash/authentication capabilities, decryption, or other capabilities or services. In some embodiments, in addition or alternatively, an accelerator among accelerators 742 provides field select controller capabilities as described herein. In some cases, accelerators 742 can be integrated into a CPU socket (e.g., a connector to a motherboard or circuit board that includes a CPU and provides an electrical interface with the CPU). For example, accelerators 742 can include a single or multi-core processor, graphics processing unit, logical execution units, single or multi-level cache, functional units usable to independently execute programs or threads, application specific integrated circuits (ASICs), neural network processors (NNPs), programmable control logic, and programmable processing elements such as field programmable gate arrays (FPGAs) or programmable logic devices (PLDs). Accelerators 742 can provide multiple neural networks, CPUs, processor cores, general purpose graphics processing units, or graphics processing units for use by artificial intelligence (AI) or machine learning (ML) models. For example, an AI model can use or include one or more of: a reinforcement learning scheme, Q-learning scheme, deep-Q learning, Asynchronous Advantage Actor-Critic (A3C), combinatorial neural network, recurrent combinatorial neural network, or other AI or ML model.


Memory subsystem 720 represents the main memory of system 700 and provides storage for code to be executed by processor 710, or data values to be used in executing a routine. Memory subsystem 720 can include one or more memory devices 730 such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM) such as DRAM, or other memory devices, or a combination of such devices. Memory 730 stores and hosts, among other things, operating system (OS) 732 to provide a software platform for execution of instructions in system 700. Additionally, applications 734 can execute on the software platform of OS 732 from memory 730. Applications 734 represent programs that have their own operational logic to perform execution of one or more functions. Processes 736 represent agents or routines that provide auxiliary functions to OS 732 or one or more applications 734 or a combination. OS 732, applications 734, and processes 736 provide software logic to provide functions for system 700. In one example, memory subsystem 720 includes memory controller 722, which is a memory controller to generate and issue commands to memory 730. It will be understood that memory controller 722 could be a physical part of processor 710 or a physical part of interface 712. For example, memory controller 722 can be an integrated memory controller, integrated onto a circuit with processor 710.


While not specifically illustrated, it will be understood that system 700 can include one or more buses or bus systems between devices, such as a memory bus, a graphics bus, interface buses, or others. Buses or other signal lines can communicatively or electrically couple components together, or both communicatively and electrically couple the components. Buses can include physical communication lines, point-to-point connections, bridges, adapters, controllers, or other circuitry or a combination. Buses can include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a Hyper Transport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (Firewire).


In one example, system 700 includes interface 714, which can be coupled to interface 712. In one example, interface 714 represents an interface circuit, which can include standalone components and integrated circuitry. In one example, multiple user interface components or peripheral components, or both, couple to interface 714. Network interface 750 provides system 700 the ability to communicate with remote devices (e.g., servers or other computing devices) over one or more networks. Network interface 750 can include an Ethernet adapter, wireless interconnection components, cellular network interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces. Network interface 750 can transmit data to a device that is in the same data center or rack or a remote device, which can include sending data stored in memory.


Network interface 750 can include one or more of: a network interface controller (NIC), a remote direct memory access (RDMA)-enabled NIC, SmartNIC, router, switch, or network-attached appliance. Some examples of network interface 750 are part of an Infrastructure Processing Unit (IPU) or data processing unit (DPU) or utilized by an IPU or DPU. An xPU can refer at least to an IPU, DPU, GPU, GPGPU, or other processing units (e.g., accelerator devices). An IPU or DPU can include a network interface with one or more programmable pipelines or fixed function processors to perform offload of operations that could have been performed by a CPU.


In one example, system 700 includes one or more input/output (I/O) interface(s) 760. I/O interface 760 can include one or more interface components through which a user interacts with system 700 (e.g., audio, alphanumeric, tactile/touch, or other interfacing). Peripheral interface 770 can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system 700. A dependent connection is one where system 700 provides the software platform or hardware platform or both on which operation executes, and with which a user interacts.


In one example, system 700 includes storage subsystem 780 to store data in a nonvolatile manner. In one example, in certain system implementations, at least certain components of storage 780 can overlap with components of memory subsystem 720. Storage subsystem 780 includes storage device(s) 784, which can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid state, or optical based disks, or a combination. Storage 784 holds code or instructions and data 786 in a persistent state (e.g., the value is retained despite interruption of power to system 700). Storage 784 can be generically considered to be a “memory,” although memory 730 is typically the executing or operating memory to provide instructions to processor 710. Whereas storage 784 is nonvolatile, memory 730 can include volatile memory (e.g., the value or state of the data is indeterminate if power is interrupted to system 700). In one example, storage subsystem 780 includes controller 782 to interface with storage 784. In one example controller 782 is a physical part of interface 714 or processor 710 or can include circuits or logic in both processor 710 and interface 714.


A volatile memory is memory whose state (and therefore the data stored in it) is indeterminate if power is interrupted to the device. Dynamic volatile memory requires refreshing the data stored in the device to maintain state. One example of dynamic volatile memory includes DRAM (Dynamic Random Access Memory), or some variant such as Synchronous DRAM (SDRAM). Another example of volatile memory is a cache. A memory subsystem as described herein may be compatible with a number of memory technologies, such as those consistent with specifications from JEDEC (Joint Electronic Device Engineering Council) or others or combinations of memory technologies, and technologies based on derivatives or extensions of such specifications.


A non-volatile memory (NVM) device is a memory whose state is determinate even if power is interrupted to the device. In one embodiment, the NVM device can comprise a block addressable memory device, such as NAND technologies, or more specifically, multi-threshold level NAND flash memory (for example, Single-Level Cell (“SLC”), Multi-Level Cell (“MLC”), Quad-Level Cell (“QLC”), Tri-Level Cell (“TLC”), or some other NAND). A NVM device can also comprise a byte-addressable write-in-place three dimensional cross point memory device, or other byte addressable write-in-place NVM device (also referred to as persistent memory), such as single or multi-level Phase Change Memory (PCM) or phase change memory with a switch (PCMS), Intel® Optane™ memory, NVM devices that use chalcogenide phase change material (for example, chalcogenide glass), a combination of one or more of the above, or other memory.


A power source (not depicted) provides power to the components of system 700. More specifically, the power source typically interfaces to one or multiple power supplies in system 700 to provide power to the components of system 700. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can come from a renewable energy (e.g., solar power) source. In one example, the power source includes a DC power source, such as an external AC to DC converter. In one example, the power source or power supply includes wireless charging hardware to charge via proximity to a charging field. In one example, the power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source.


In an example, system 700 can be implemented using interconnected compute sleds of processors, memories, storages, network interfaces, and other components. High speed interconnects can be used such as: Ethernet (IEEE 802.3), remote direct memory access (RDMA), InfiniBand, Internet Wide Area RDMA Protocol (iWARP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), quick UDP Internet Connections (QUIC), RDMA over Converged Ethernet (RoCE), Peripheral Component Interconnect express (PCIe), Intel QuickPath Interconnect (QPI), Intel Ultra Path Interconnect (UPI), Intel On-Chip System Fabric (IOSF), Omni-Path, Compute Express Link (CXL), Universal Chiplet Interconnect Express (UCIe), HyperTransport, high-speed fabric, NVLink, Advanced Microcontroller Bus Architecture (AMBA) interconnect, OpenCAPI, Gen-Z, Infinity Fabric (IF), Cache Coherent Interconnect for Accelerators (CCIX), 3GPP Long Term Evolution (LTE) (4G), 3GPP 5G, and variations thereof. Data can be copied or stored to virtualized storage nodes or accessed using a protocol such as NVMe over Fabrics (NVMe-oF) (e.g., NVMe-oF specification, version 1.0 (2016) as well as variations, extensions, and derivatives thereof) or NVMe (e.g., Non-Volatile Memory Express (NVMe) Specification, revision 1.3c, published on May 24, 2018 (“NVMe specification”) as well as variations, extensions, and derivatives thereof).


Communications between devices can take place using a network that provides die-to-die communications; chip-to-chip communications; circuit board-to-circuit board communications; and/or package-to-package communications.


Embodiments herein may be implemented in various types of computing devices, smart phones, tablets, personal computers, and networking equipment, such as switches, routers, racks, and blade servers such as those employed in a data center and/or server farm environment. The servers used in data centers and server farms comprise arrayed server configurations such as rack-based servers or blade servers. These servers are interconnected in communication via various network provisions, such as partitioning sets of servers into Local Area Networks (LANs) with appropriate switching and routing facilities between the LANs to form a private Intranet. For example, cloud hosting facilities may typically employ large data centers with a multitude of servers. A blade comprises a separate computing platform that is configured to perform server-type functions, that is, a “server on a card.” Accordingly, each blade includes components common to conventional servers, including a main printed circuit board (main board) providing internal wiring (e.g., buses) for coupling appropriate integrated circuits (ICs) and other components mounted to the board.


In some examples, network interface and other embodiments described herein can be used in connection with a base station (e.g., 3G, 4G, 5G and so forth), macro base station (e.g., 5G networks), picostation (e.g., an IEEE 802.11 compatible access point), nanostation (e.g., for Point-to-MultiPoint (PtMP) applications), micro data center, on-premise data centers, off-premise data centers, edge network elements, fog network elements, and/or hybrid data centers (e.g., data center that use virtualization, cloud and software-defined networking to deliver application workloads across physical data centers and distributed multi-cloud environments).



FIG. 8 depicts an example network interface device. Network interface device 800 manages performance of one or more processes using one or more of processors 806, processors 810, accelerators 820, memory pool 830, or servers 840-0 to 840-N, where N is an integer of 1 or more. In some examples, processors 806 of network interface device 800 can execute one or more processes, applications, VMs, containers, microservices, and so forth that request performance of workloads by one or more of: processors 810, accelerators 820, memory pool 830, and/or servers 840-0 to 840-N. Network interface device 800 can utilize network interface 802 or one or more device interfaces to communicate with processors 810, accelerators 820, memory pool 830, and/or servers 840-0 to 840-N. Network interface device 800 can utilize programmable pipeline 804 to process packets that are to be transmitted from network interface 802 or packets received from network interface 802.


Programmable pipeline 804 and/or processors 806 can be configured or programmed using languages based on one or more of: P4, Software for Open Networking in the Cloud (SONiC), C, Python, Broadcom Network Programming Language (NPL), NVIDIA® CUDA®, NVIDIA® DOCA™, Infrastructure Programmer Development Kit (IPDK), or x86 compatible executable binaries or other executable binaries. Programmable pipeline 804, processors 806, and/or memory pool 830 can be configured to provide a trust domain or confidential computing environment for execution of a service or service mesh, as described herein.


Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, PLDs, DSPs, FPGAs, memory units, logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation. A processor can be one or more combination of a hardware state machine, digital control logic, central processing unit, or any hardware, firmware and/or software elements.


Some examples may be implemented using or as an article of manufacture or at least one computer-readable medium. A computer-readable medium may include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, API, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.


According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that when executed by a machine, computing device or system, cause the machine, computing device or system to perform methods and/or operations in accordance with the described examples. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented according to a predefined computer language, manner, or syntax, for instructing a machine, computing device or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.


One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine-readable medium which represents various logic within the processor, which when read by a machine, computing device or system causes the machine, computing device or system to fabricate logic to perform the techniques described herein. Such representations, known as “IP cores” may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.


The appearances of the phrase “one example” or “an example” are not necessarily all referring to the same example or embodiment. Any aspect described herein can be combined with any other aspect or similar aspect described herein, regardless of whether the aspects are described with respect to the same figure or element. Division, omission, or inclusion of block functions depicted in the accompanying figures does not infer that the hardware components, circuits, software and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments.


Some examples may be described using the expression “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms “connected” and/or “coupled” may indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.


The terms “first,” “second,” and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. The term “asserted” used herein with reference to a signal denote a state of the signal, in which the signal is active, and which can be achieved by applying any logic level either logic 0 or logic 1 to the signal. The terms “follow” or “after” can refer to immediately following or following after some other event or events. Other sequences of steps may also be performed according to alternative embodiments. Furthermore, additional steps may be added or removed depending on the particular applications. Any combination of changes can be used and one of ordinary skill in the art with the benefit of this disclosure would understand the many variations, modifications, and alternative embodiments thereof.


Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Additionally, conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, should also be understood to mean X, Y, Z, or any combination thereof, including “X, Y, and/or Z.”


Illustrative examples of the devices, systems, and methods disclosed herein are provided below. An embodiment of the devices, systems, and methods may include any one or more, and any combination of, the examples described below.


Example 1 includes one or more examples, and includes a non-transitory computer-readable medium, comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: execute a service mesh in a trust domain in a network interface device and execute one or more services in a second trust domain in one or more devices.


Example 2 includes one or more examples, and includes instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: configure the network interface device to determine trust domain capabilities of the network interface device and provide the trust domain capabilities based on a query.


Example 3 includes one or more examples, wherein the network interface device comprises one or more of: a network interface controller (NIC), a remote direct memory access (RDMA)-enabled NIC, SmartNIC, router, switch, forwarding element, infrastructure processing unit (IPU), or data processing unit (DPU).


Example 4 includes one or more examples, wherein the one or more devices comprise one or more of: central processing unit (CPU), graphics processing unit (GPU), XPU, accelerator, storage, or memory.


Example 5 includes one or more examples, wherein the trust domain is to provide data and executable code isolation and data isolation from one or more processes outside of the trust domain.


Example 6 includes one or more examples, and includes instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: provide encrypted communications between the one or more services executing in the second trust domain and the service mesh executing in the trust domain.


Example 7 includes one or more examples, wherein an orchestrator is to create the trust domain and the second trust domain.


Example 8 includes one or more examples, wherein an orchestrator is to deploy execution of the service mesh in the trust domain and the one or more services in the second trust domain.
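A sketch, under assumed names, of the orchestrator responsibilities in Examples 7 and 8: creating both trust domains and then deploying the mesh and the services into them. Orchestrator, CreateTrustDomain, and Deploy are illustrative stand-ins, not a real control-plane API.

```go
package main

import "fmt"

// Orchestrator is a hypothetical control-plane actor.
type Orchestrator struct{}

// CreateTrustDomain stands in for invoking platform security primitives
// on the named device.
func (Orchestrator) CreateTrustDomain(device, name string) string {
	fmt.Printf("created trust domain %q on %s\n", name, device)
	return name
}

// Deploy places a workload into an existing trust domain.
func (Orchestrator) Deploy(domain, workload string) {
	fmt.Printf("deployed %q into %q\n", workload, domain)
}

func main() {
	var o Orchestrator

	// Example 7: the orchestrator creates both trust domains.
	meshTD := o.CreateTrustDomain("ipu0", "td-mesh")
	svcTD := o.CreateTrustDomain("cpu0", "td-services")

	// Example 8: it then deploys the service mesh and the services.
	o.Deploy(meshTD, "service-mesh-proxy")
	o.Deploy(svcTD, "microservice-a")
}
```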


Example 9 includes one or more examples, and includes instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: attest the trust domain prior to execution of the service mesh in the trust domain and attest the second trust domain prior to execution of the one or more services in the second trust domain.
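Example 9's attest-before-launch ordering can be sketched as below; comparing a reported measurement against a reference value stands in for a full quote-verification protocol, and all names and values are illustrative.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// attested is a stand-in for verifying a trust domain's attestation
// report: the measurement it reports must match the reference value the
// workload owner expects.
func attested(reported, reference []byte) bool {
	return bytes.Equal(reported, reference)
}

func main() {
	reference := sha256.Sum256([]byte("service-mesh-image-v1"))
	reported := sha256.Sum256([]byte("service-mesh-image-v1")) // from the device's report

	// Attest the trust domain before executing the service mesh in it;
	// the second trust domain would be attested the same way before the
	// services execute.
	if !attested(reported[:], reference[:]) {
		fmt.Println("attestation failed: not launching the service mesh")
		return
	}
	fmt.Println("trust domain attested: launching service mesh")
}
```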


Example 10 includes one or more examples, and includes a method comprising: executing a service mesh in a trust domain in a network interface device and executing one or more services in a second trust domain in one or more devices.


Example 11 includes one or more examples, wherein the network interface device comprises one or more of: a network interface controller (NIC), a remote direct memory access (RDMA)-enabled NIC, SmartNIC, router, switch, forwarding element, infrastructure processing unit (IPU), or data processing unit (DPU).


Example 12 includes one or more examples, wherein the one or more devices comprise one or more of: central processing unit (CPU), graphics processing unit (GPU), accelerator, storage, or memory.


Example 13 includes one or more examples, wherein the trust domain is to provide executable code isolation and data isolation from one or more processes outside of the trust domain.


Example 14 includes one or more examples, and includes providing encrypted communications between the one or more services executing in the second trust domain and the service mesh executing in the trust domain.


Example 15 includes one or more examples, and includes an orchestrator creating the trust domain and the second trust domain.


Example 16 includes one or more examples, and includes an orchestrator attesting the trust domain prior to execution of the service mesh in the trust domain and an orchestrator attesting the second trust domain prior to execution of the one or more services in the second trust domain.


Example 17 includes one or more examples, and includes an apparatus comprising: a disaggregated composite compute node comprising: a network interface device to execute a service mesh in a trust domain and one or more devices to execute one or more services in a second trust domain.


Example 18 includes one or more examples, wherein the network interface device comprises one or more of: a network interface controller (NIC), a remote direct memory access (RDMA)-enabled NIC, SmartNIC, router, switch, forwarding element, infrastructure processing unit (IPU), or data processing unit (DPU).


Example 19 includes one or more examples, wherein the one or more devices comprise one or more of: central processing unit (CPU), graphics processing unit (GPU), XPU, accelerator, storage, or memory.


Example 20 includes one or more examples, wherein the trust domain is to provide executable code isolation and data isolation from one or more processes outside of the trust domain.


Example 21 includes one or more examples, and includes an interconnect to provide encrypted communications between the one or more services executing in the second trust domain and the service mesh executing in the trust domain.

Claims
  • 1. A non-transitory computer-readable medium, comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: execute a service mesh in a trust domain in a network interface device and execute one or more services in a second trust domain in one or more devices.
  • 2. The computer-readable medium of claim 1, comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: configure the network interface device to determine trust domain capabilities of the network interface device and provide the trust domain capabilities based on a query.
  • 3. The computer-readable medium of claim 1, wherein the network interface device comprises one or more of: a network interface controller (NIC), a remote direct memory access (RDMA)-enabled NIC, SmartNIC, router, switch, forwarding element, infrastructure processing unit (IPU), or data processing unit (DPU).
  • 4. The computer-readable medium of claim 1, wherein the one or more devices comprise one or more of: central processing unit (CPU), graphics processing unit (GPU), XPU, accelerator, storage, or memory.
  • 5. The computer-readable medium of claim 1, wherein the trust domain is to provide executable code isolation and data isolation from one or more processes outside of the trust domain.
  • 6. The computer-readable medium of claim 1, comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: provide encrypted communications between the one or more services executing in the second trust domain and the service mesh executing in the trust domain.
  • 7. The computer-readable medium of claim 1, wherein an orchestrator is to create the trust domain and the second trust domain.
  • 8. The computer-readable medium of claim 1, wherein an orchestrator is to deploy execution of the service mesh in the trust domain and the one or more services in the second trust domain.
  • 9. The computer-readable medium of claim 1, comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: attest the trust domain prior to execution of the service mesh in the trust domain and attest the second trust domain prior to execution of the one or more services in the second trust domain.
  • 10. A method comprising: executing a service mesh in a trust domain in a network interface device and executing one or more services in a second trust domain in one or more devices.
  • 11. The method of claim 10, wherein the network interface device comprises one or more of: a network interface controller (NIC), a remote direct memory access (RDMA)-enabled NIC, SmartNIC, router, switch, forwarding element, infrastructure processing unit (IPU), or data processing unit (DPU).
  • 12. The method of claim 10, wherein the one or more devices comprise one or more of: central processing unit (CPU), graphics processing unit (GPU), accelerator, storage, or memory.
  • 13. The method of claim 10, wherein the trust domain is to provide executable code isolation and data isolation from one or more processes outside of the trust domain.
  • 14. The method of claim 10, comprising: providing encrypted communications between the one or more services executing in the second trust domain and the service mesh executing in the trust domain.
  • 15. The method of claim 10, comprising: an orchestrator creating the trust domain and the second trust domain.
  • 16. The method of claim 10, comprising: an orchestrator attesting the trust domain prior to execution of the service mesh in the trust domain and an orchestrator attesting the second trust domain prior to execution of the one or more services in the second trust domain.
  • 17. An apparatus comprising: a disaggregated composite compute node comprising: a network interface device to execute a service mesh in a trust domain and one or more devices to execute one or more services in a second trust domain.
  • 18. The apparatus of claim 17, wherein the network interface device comprises one or more of: a network interface controller (NIC), a remote direct memory access (RDMA)-enabled NIC, SmartNIC, router, switch, forwarding element, infrastructure processing unit (IPU), or data processing unit (DPU).
  • 19. The apparatus of claim 17, wherein the one or more devices comprise one or more of: central processing unit (CPU), graphics processing unit (GPU), XPU, accelerator, storage, or memory.
  • 20. The apparatus of claim 17, wherein the trust domain is to provide executable code isolation and data isolation from one or more processes outside of the trust domain.
  • 21. The apparatus of claim 17, comprising an interconnect to provide encrypted communications between the one or more services executing in the second trust domain and the service mesh executing in the trust domain.