The present invention relates to the field of computing systems providing automated deployment, scaling, and management of containerized applications across multiple clusters (e.g., Kubernetes clusters) in a hybrid-cloud, multi-cloud, or multi-datacenter environment.
A distributed computing system has interconnected clusters with compute nodes executing a set of microservices in containers organized into multi-container pods. The system includes application slice components distributed among the clusters to define and operate a plurality of application slices providing application slice services for respective sets of pods distributed among the clusters. The clusters are configured in a multi-tenancy in which distinct tenants each include a respective distinct set of the application slices and are each configured according to respective per-tenant configuration data.
The foregoing and other objects, features and advantages will be apparent from the following description of embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views.
The content of U.S. Application No. 63/183,244 filed on May 3, 2021, entitled “Smart Application Framework”, is hereby incorporated by reference in its entirety.
Overview
The disclosure is directed to a container-based service deployment system having Pod/Node/Cluster architecture and corresponding management and operational functions, which in one embodiment may be realized using Kubernetes® components. In one major aspect the disclosure is directed to “multi-tenancy” in an application environment, which provides an ability to run applications from different customers, teams, or other administrative units (“tenants”) simultaneously sharing cluster resources while providing compute resource, network, security, and policy isolation among the tenants.
Existing Kubernetes practices do not provide multi-tenancy as a first-class construct for users and resources. When teams deploy applications on one or more Kubernetes clusters, it leads to operational challenges in managing the namespaces and associated shared resources across all the deployed applications. In some cases, this can lead to security concerns and resource contention due to resource-intensive applications. In addition, with multi-cluster deployments, admins face tedious operational management challenges in extending normalized resource quota management, namespace sameness, and configuration drift management. They lack a normalized way to support multi-tenancy related configuration and features such as a secure overlay network for network traffic isolation, application namespace association, namespace sameness, resource quota management and isolation based on container and overlay network policies, zero-trust security related features, and slice optimization specific to customer/tenant applications across one or more clusters.
The present disclosure is directed to methods and apparatus that address the above shortcomings using a construct called Application Slice, which can exhibit some or all of the following:
The Mesh platform (also known as “Mesh” or “KubeSlice”) combines network, application, Kubernetes, and deployment services in a framework to accelerate application deployment in a multi-cluster, multi-tenant environment. KubeSlice achieves this by creating logical application slice boundaries that allow pods and services to communicate seamlessly across clusters, clouds, edges, and data centers. As enterprises expand application architectures to span multiple clusters located in data centers or cloud provider regions, or across cloud providers, Kubernetes clusters need the ability to fully integrate connectivity and pod-to-pod communications with namespace propagation across clusters. The Smart Application Framework makes it easier to scale and operate a cloud business. It infuses intelligence and automation on top of the existing infrastructure to make application infrastructure smarter and grow efficiently while improving quality. The framework includes: (1) the Smart Application Mesh (KubeSlice/Mesh Platform); (2) the Application Slice; and (3) Smart Applications such as an AIOps-driven load balancer or workload placement.
The platform enables creating multiple logical slices in a single cluster or group of clusters regardless of their physical location. Existing intra-cluster communication remains local to the cluster, utilizing the CNI interface. An application slice provides isolation of network traffic between clusters by creating an overlay network for inter-cluster communication. Clusters are interconnected using secure gateways. One or more clusters may be attached to the slice. Each slice has its own separate L3 domain address space, i.e., a separate subnet. Each cluster that is part of the slice has a part of the slice subnet. Application Pods are connected to a slice and can connect to each other on the slice subnet, creating an overlay L3 network using slice routers across the slice. The overlay L3 network is a collection of virtual wires (vWires), and the connectivity is driven by the network service names (namespace-driven) associating workloads/applications to a slice. Applications/Pods that are attached to a slice have an IP interface to the slice-specific L3 address space. Each slice may include a global namespace that is normalized across the slice, i.e., in all the clusters that are attached to the slice. All the services that are attached to the slice (across one or more clusters) are visible to each other via slice-wide service discovery. Services can be exported from one attached cluster in the slice to all the clusters that are attached to the slice. Exported services are only visible to the applications/services attached to the slice.
The platform architecture consists of several components that interact with each other to manage the lifecycle of the slice components and its overlay network. The Mesh platform enables creation of a collection of microservices and/or virtual machines, irrespective of location (in a data center or across multiple clouds), to form a domain. This domain acts as a micro-segmentation with respect to the rest of the workloads. A slice has the capability of spanning across clusters and geographical boundaries. An application slice is an overlay on an existing service mesh or hybrid footprint. The platform enables zero-trust security across all workloads/microservices. The system federates security for service-to-service communication. A security controller works as a typical Kubernetes-native application with Custom Resources and Controllers, with no additional infrastructure or custom configuration formats.
The platform enables customers to extend compute resources to the Edge. A small footprint enables workloads to scale out to edge compute and appear as a cloud extension to the rest of the services.
The system can apply Reinforcement Learning (RL) to load balancing of service-to-service communication. RL-based load balancing of service-to-service communication improves utilization of resources and has a strong positive impact on customer experience. RL-based load balancing also helps to proactively identify bottlenecks in service-to-service communication.
The Smart Application Overlay works in a multi-cluster environment with slices. In a multi-cluster environment, service discovery, security, and namespaces are normalized to create a surface area with fine-grained traffic control and security posture.
The Mesh provides a seamless way to manage, connect, secure, and observe applications that need to run workloads at the edge as well as in the public cloud.
The disclosed system addresses an opportunity that has arisen from the development of the ‘Service Mesh’ (like Istio™) and ‘Network Service Mesh (NSM)’ constructs originating from the development of Kubernetes, microservices, and other technologies under the umbrella of ‘Cloud Native Computing.’ These technologies have enabled multi-cloud distributed applications with Kubernetes microservices clusters deployed across multiple public clouds, edge clouds and customer premise private clouds. It is now possible to create an application overlay infrastructure that interconnects distributed application clusters/Pods across domains. These application specific overlays can now provide a tight binding between an application and its overlay network. Applications can now specify the exact connectivity and QOS requirements required for the application. This allows application developers to build and deploy application overlay networks that support application driven traffic engineering/steering with network-level QOS on the underlying infrastructure.
In accordance with certain embodiments, disclosed herein is an “Application Slice”—a key feature of the Mesh Platform. The platform allows operators to build application slices—application overlays—that are a way of grouping application pods based on one or more organizing principles such as velocity of deployment, security, governance, teams, deployment environments like production/development/pre-production, etc.
The Mesh provides mechanisms to create and manage slices: create an overlay network, apply network policy and service discovery across the slice, continuously monitor slices, and observe slice telemetry, service-to-service relationships, and traffic prioritization and management.
In some embodiments, the Mesh supports combinations of the following:
Embodiments
Also shown in
One important aspect of the disclosed technique is its use of a specialized construct referred to as an “application slice.” In general, the application slice construct can be used across distinct clusters C to ease the deployment and management of services (specifically, to provide an application-oriented view and organization as opposed to the more structural pod/cluster view and organization, which can have drawbacks as mentioned below). In some embodiments, as described in examples herein, an application slice can include a respective overlay network service that provides several communications-related functionalities. More generally, an application slice may minimally include application namespace bindings to the slice and associated resource quota management and namespace-based isolation. Application slices can also be used in conjunction with multi-tenancy as described further below.
Thus, in this embodiment an application slice is an application overlay infrastructure that includes network services/components distributed across multiple clusters C to provide a surface area with its own layer-3 (L3) domain and IP address space. Application slices may extend over multiple clusters C that are deployed in one or more public/private clouds 10 or data centers/edges. The application slice mechanism provides a framework for scalable secure segmentation of pods 28 that allows traffic prioritization, security isolation, service discovery for service-to-service communication across the slice, granular governance, and failover containment. In addition, this mechanism enables granular application performance management using artificial intelligence/machine learning (AI/ML) algorithms and AI-driven AppNetOps (AIOps). Finally, an application slice is considered an “overlay” because it can work with existing cloud-service infrastructure (such as Kubernetes) and may not require significant changes to existing code. For example, a Pod 28 may be included in an application slice by simple addition of an annotation to a Pod specification in the Kubernetes system.
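For instance, attaching a Pod 28 to a slice might look like the following sketch, in which the annotation key kubeslice.io/slice and the slice name blue-slice are illustrative assumptions rather than the platform's exact syntax:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: checkout
  namespace: ecommerce              # a namespace associated with the slice
  annotations:
    kubeslice.io/slice: blue-slice  # hypothetical annotation requesting onboarding onto the slice
spec:
  containers:
    - name: checkout
      image: example.com/checkout:1.0
```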
Referring again to
Application Slice Features
Discovery and Orchestration of Application Slices
During application deployment, network services are discovered using the slice network namespace, and inter-domain secure overlay links (VPNs, etc.) are established to build a distributed-application-specific application overlay network slice.
Slices can use service export/import functions to export/import Kubernetes services and Istio virtual services for slice-wide service discovery. In addition, a Slice Ingress gateway can be used to export services and a Slice Egress gateway can be used to import services. One or more application namespaces can be associated with these slices. Slice isolation can be enabled by implementing network policies for these namespaces. Slices are defined across clusters C, but in some deployments it may be beneficial to use slices that exist within a single cluster.
Slice Namespace
The slice namespace is an association of the application slice-wide L3 network namespace and one or more cluster namespaces with the slice. The slice namespace provides slice-specific namespace associations for all the services on the application slice. All the services that are deployed on the slice, across all the clusters, are associated with the slice namespace and are discovered across the slice. The services that are registered with the application slice namespace can be looked up by any of the services on the application slice. The Slice Operators (Slice Controllers) 52 in all the slice-associated clusters C coordinate to normalize the slice namespace across those clusters. They also monitor and enforce the slice namespace associations within the slice. Any application/service to be deployed on the slice must be in one of the associated namespaces of the slice. These services are not visible or accessible outside of the slice (unless exception rules are applied). The slice namespace thus provides isolation of services to the application slice. Slice network policies can be associated with namespaces that are associated with the slice namespace. These slice network policies provide isolation of traffic and traffic control within the slice and between the slice and other cluster resources.
Federated Security
The Application Slice offers an important feature, federated security, which automates the creation of Secure Overlay Links (SOLs) such as VPNs/VPCs or other wide-area secure interconnection technologies, applies global security policies, removes the burden of security management from the operational staff, and further improves the overall security of the network through automation.
AIOps on Application Slice
During application runtime, an AIOps component ingests telemetry from the overlay network services and feeds it to ML/RL agents. The RL agents assist in tuning the overlay network service parameters to optimize the performance of the distributed application.
Mesh system components include the network service mesh control plane and data plane components to create and manage the Application Slice L3 overlay network. These components include the network service manager, network service data plane daemons, network service registry, forwarders, and webhook management functions. The network service mesh control plane automates the orchestration of slice connectivity between the slice network service clients (Application Pods 28) and slice network services/components 54 such as Slice Routers 60.
Application Mesh Controller (“Backend,” “KubeSlice Controller”) 18
The Backend 18 provides management, visualization, and dashboard functions and APIs to manage the life cycle of the slice and slice policy deployment across multiple clusters. In one embodiment the Backend can be implemented using Cloud services and, in another embodiment, as the “KubeSlice/Mesh Controller,” can be implemented using Kubernetes-native constructs and custom resource definitions (CRDs).
The Backend/KubeSlice Controller is installed in one of the clusters and provides a central configuration management system for slices across multiple clusters. The KubeSlice Controller can be installed in one of the worker clusters or in a separate cluster.
The Backend/KubeSlice Controller 18 provides:
Slice Operator 52
In accordance with certain embodiments, the Slice Operator 52 may be a Kubernetes Operator component that manages the life cycle of application slice-related custom resource definitions (CRDs). It helps to manage application slices with declarative management support for GitOps-based workflows. A SliceCtl tool may be used to manage the Slice CRD resources. Application Slice CRDs can be managed using the Cluster Controller 32 as well.
SliceCtl
In accordance with certain embodiments, SliceCtl is a CLI tool to interact with the Slice Operator 52 and manage slices and slice-related resources on the cluster. SliceCtl commands include login, cluster registration, slice attach/detach, slice deletion, service import/export, etc.
Slice Overlay Network
In an embodiment such as that of
Slice VPN Gateway 62
Slice VPN Gateway 62 is a slice network service component that provides a secure VPN link connection endpoint for the slice on a cluster C. A pair of Slice VPN Gateways 62 is deployed to connect every pair of clusters C attached to a slice. A VPN Gateway 62 connects to a remote VPN Gateway 62 in a remote cluster C. The Slice Operator 52 manages the life cycle of the Slice VPN Gateways 62: it deploys and manages the configuration and keys/certificates for their operation, interacts with the Backend to get the slice configuration, and auto-inserts the slice components such as VPN Gateways 62 and Slice Routers 60 for the slice. The Slice Operator 52 constantly interacts with the Slice VPN Gateways 62 for status, keys/certificates, and configuration changes. The Backend manages the VPN gateway pairs for slice-attached clusters and creates the keys and configuration for their operation.
Slice Traffic Control
Slice VPN Gateways 62 are the exit/entry points for all the traffic to/from the Application Pods 28 on the slice to remote cluster Slice VPN Gateways 62. Slice VPN Gateways 62 are configured with Traffic Control (TC) policies (with a QOS profile) to manage the traffic shaping for the slice. Slice TC on VPN Gateways 62 supports marking the packets with DSCP/COS code points to provide prioritization of the slice traffic.
Slice Router 60
Slice Router 60 is a slice network service (vL3 NSE) component that provides a virtual L3 IP switching functionality for the slice. Each slice in a cluster C has one Slice Router 60, with the possibility of a redundant pair option. The Slice Operator 52 manages the life cycle of the Slice Router 60, which includes deploying, configuring, and continuously monitoring/managing the Slice Router 60 for the slice. All the Application Pods 28 of the cluster C on the slice connect to the Slice Router 60 of the slice. The Slice Router 60 provides the connectivity to the rest of the slice components, which are Applications distributed across the clusters C.
When an Application Pod 28 connects to the slice (as a network service client NSC) on a cluster C, the Slice Router 60 manages the establishment of the Slice Interface (NSM interface) on the Application Pod 28—done automatically via injection into the Pod 28. The Application Pods 28 use this Slice Interface to communicate with the other Applications/Network Services (local or remote) on the slice. Slice Router 60 manages the IPAM/routes for the slice cluster applications/components.
NetOps
Each slice in a cluster is associated with a QoS profile. The QoS profile is applied on the tunnel interface of the VPN gateways 62. In addition, on the Gateway nodes 50 the NetOps Pods enforce the QoS profiles for all the slices, using Linux TC (Traffic Control) to apply Hierarchical Token Bucket (HTB) queuing, priority, and DSCP values for slice traffic classification.
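As an illustration, a per-slice QoS profile capturing the HTB and DSCP parameters described above might look like the following sketch (field names follow KubeSlice-style conventions and are assumptions, not a guaranteed schema):

```yaml
qosProfileDetails:
  queueType: HTB                   # Hierarchical Token Bucket queuing
  priority: 1                      # relative priority of this slice's traffic
  tcType: BANDWIDTH_CONTROL        # enforced via Linux TC on the Gateway nodes
  bandwidthCeilingKbps: 30000      # maximum bandwidth for the slice tunnel
  bandwidthGuaranteedKbps: 10000   # guaranteed bandwidth for the slice tunnel
  dscpClass: AF11                  # DSCP code point used to classify slice traffic
```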
Mesh DNS (KubeSlice DNS)
Mesh DNS is a core DNS server that is used to resolve service names exposed on application slices. The Slice Operator 52 manages the DNS entries for all the services running on the Slice overlay network(s). When a service is exported on the slice by installing a ServiceExport object, the Slice Operator 52 creates a DNS entry for the service in the Mesh DNS and a similar entry is created in the other clusters that are a part of the slice.
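By way of example, exporting a service onto a slice with a ServiceExport object might look like the following sketch (the API group networking.kubeslice.io/v1beta1 and the field names are assumptions based on KubeSlice conventions):

```yaml
apiVersion: networking.kubeslice.io/v1beta1  # assumed API group/version
kind: ServiceExport
metadata:
  name: checkout
  namespace: ecommerce          # slice-associated application namespace
spec:
  slice: blue-slice             # slice on which the service is exported
  selector:
    matchLabels:
      app: checkout             # selects the backing pods
  ports:
    - name: http
      containerPort: 8080
      protocol: TCP
```

Once applied, the Slice Operator 52 would create the corresponding Mesh DNS entry locally and in the other clusters attached to the slice, as described above.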
Slice Istio Components
The application mesh works with Istio service mesh components in a cluster. If Istio is deployed on a cluster, the mesh uses Istio ingress/egress gateway resources to create Slice Ingress/Egress Gateways. These Slice Ingress/Egress Gateways can be manually deployed or auto-deployed as part of the slice, and can be deployed for east-west (E/W) traffic.
Slice Egress/Ingress Gateways can be used to export/import slice-connected application services across the slice clusters. A Slice Ingress Gateway can be used to export the services from a slice cluster. A Slice Egress Gateway can be used to import the slice services from remote slice clusters. Slice Service Discovery uses the Slice Ingress/Egress Gateways to export/import the application services across the slice clusters. Deployment of the Slice Ingress/Egress Gateways on a slice is optional.
User Interface (UI)
The UI (KubeSlice Manager) is a web interface to manage the mesh network across multiple clusters C. The UI can be used for Slice and Slice Policy management. It allows users to register clusters, create slices, and connect clusters. Slice dashboards provide observability into the slice operations: slice network services and application services deployed on the slice across multiple clusters. The UI allows users to view and explore the slice services topology, slice service discovery data, traffic, latency, and real-time health status.
Deploying Application Slice across Multiple Clusters
The mesh allows users to create and manage application slices across multiple clusters C. Based on role-based permissions (RBP), a user can be a Cluster Admin, Slice Admin, Application TL, Developer, etc. The Mesh allows multiple ways to create and deploy slices: the UI, Helm charts/GitOps, and Backend APIs.
In some embodiments, the following tasks are performed in preparation for deploying a slice on a cluster:
1. Create worker clusters C, and configure and deploy Istio and other system components
2. Deploy the Mesh/KubeSlice System and Slice Operator 52 components
3. Identify and label a node 20 in a cluster C as a Gateway Node 50 and open appropriate ports (UDP/TCP) for communication
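For step 3, the Gateway Node label might be applied so that the node's metadata looks like the following fragment (the label key kubeslice.io/node-type is an assumption based on KubeSlice conventions):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-node-1
  labels:
    kubeslice.io/node-type: gateway  # assumed label marking this node as a Gateway Node 50
```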
Registering Clusters
Once the KubeSlice/Mesh system components and Operators are installed, users can register the worker clusters C with the Controller 18. The user can use Helm charts or the UI (KubeSlice Manager) to register the clusters. Once clusters are registered, users can create slices.
Installing Slice
There are multiple ways a slice can be created with worker clusters C:
1. Helm chart: Users can specify the slice parameters as values and apply a slice helm chart to the Backend/Controller 18 (see the values sketch below). The Slice Controller creates appropriate SliceConfig resources (CRDs) for the configuration. The Slice Operator 52 interacts with the Controller 18 to get the SliceConfig and uses these parameters to create and deploy the slice components on the worker cluster.
2. UI: Users can use the UI to register clusters and create slices. The UI interacts with the Slice Controller 18 using Controller APIs to create SliceConfig resources (CRDs). The Slice Operator 52 interacts with the Controller 18 to get the SliceConfig and uses these parameters to create and deploy the slice components on the worker cluster.
Once the slice components are deployed, the Slice VPN gateways in the worker clusters connect to each other to form full mesh connectivity.
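For the Helm path in option 1, the slice parameters could be supplied as chart values along the following lines (a hypothetical values file; the key names are illustrative, not the actual chart schema):

```yaml
# values.yaml for a hypothetical slice helm chart
slice:
  name: blue-slice
  clusters:                  # registered worker clusters to attach to the slice
    - worker-1
    - worker-2
  sliceSubnet: 10.1.0.0/16   # slice-wide L3 overlay address space
  qosProfile: standard       # per-slice QoS profile to apply
```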
Deploying Applications Over Application Slice
Users can deploy the Application Services (App Pods 28) onto the slice on a cluster C to access other Application Services that are deployed on the slice in other attached clusters. The slice provides the network connectivity and service discovery to enable service-to-service communication. Users can deploy an Application Service onto a slice in multiple ways.
Users can update the service deployment specifications with slice-related annotations to onboard the service and related replicas onto the slice.
Users can also associate namespaces with a slice. In auto-onboarding mode, all the services that are deployed on the associated namespaces are onboarded onto the slice by the Slice Operator 52, which updates the deployment specs of the services.
Users can also use the UI to onboard the applications/services onto a slice. Users can select and associate namespaces with a slice. The SliceConfig will be updated with the selected namespace associations, and the Slice Operator 52 onboards the services that belong to those namespaces.
In one embodiment, onboarding a service onto the slice results in adding an overlay network interface (NSM interface) to the Pod, attaching the Pod to the slice overlay network. This allows that service/Pod to communicate with all the other Pods/services that are attached (onboarded) to the slice overlay network using IP, TCP, UDP, HTTP, gRPC, and other protocols.
Multi-Tenancy with Application Slices
As described above with reference to
Enterprise customers with multiple teams/departments/environments can share one or more cluster resources using one or more application slices per team/department/environments. Each team/department/environment can be a separate customer in the platform. The teams and departments can further have team members or sub-departments.
A service provider with multiple customers from different enterprises or individuals can share one or more cluster resources using one or more application slices per customer.
Application Slices provide features to support multi-tenancy deployment models, and the platform provides mechanisms to support multi-tenant application deployments using application slices.
The slice controller 82 allows administrative users (Admins) to create separate customers/tenants 80 in the platform. Each customer/tenant data is kept in isolation in the controller database 86. For each tenant 80, Admins can create one or more slices 70 on which to deploy their applications.
Admins can also configure tenant-wide settings that are applied to all the slices 70 that are created for the tenant across all the clusters C. The controller 82 may reside in a separate controller cluster, with multi-tenant configuration and resource data kept separate from the registered slice worker clusters C. Access control to customer data is controlled using service accounts and appropriate RBAC policies.
Each slice configuration has information about the multi-tenancy requirements for the customer and the slice. The platform controller uses that information to orchestrate the slice for the customer across one or more clusters. The slice operator in each cluster implements the orchestration of the slice on its cluster. The slice operator implements and enforces the application slice multi-tenancy requirements. The slice operator constantly monitors the slice metrics and configuration to enforce the multi-tenancy requirements.
The controller maintains the customer/slice associations and configuration details, including multi-tenancy related configuration. The controller provides APIs, Helm charts/YAMLs, and GitOps mechanisms to orchestrate customers/tenants and slices.
The following is an example slice configuration description:
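A minimal sketch of such a description, using KubeSlice-style resource and field names (the API group, fields, and values here are illustrative assumptions, not the exact schema):

```yaml
apiVersion: controller.kubeslice.io/v1alpha1  # assumed API group/version
kind: SliceConfig
metadata:
  name: blue-slice
  namespace: kubeslice-acme       # hypothetical per-tenant project namespace
spec:
  sliceSubnet: 10.1.0.0/16        # slice-wide L3 overlay address space
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN     # encrypted inter-cluster tunnels
    sliceCaType: Local
  clusters:                       # worker clusters attached to the slice
    - worker-1
    - worker-2
  qosProfileDetails:              # per-tenant traffic control (see NetOps above)
    queueType: HTB
    priority: 1
    bandwidthCeilingKbps: 30000
    bandwidthGuaranteedKbps: 10000
    dscpClass: AF11
  namespaceIsolationProfile:
    isolationEnabled: true        # apply network policy to associated namespaces
    applicationNamespaces:
      - namespace: ecommerce
        clusters: ["*"]           # associate the namespace in all attached clusters
    allowedNamespaces:
      - namespace: kube-system
        clusters: ["*"]           # namespaces allowed to communicate with the slice
```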
It will be appreciated that in an embodiment such as
The following describes various aspects of the configuration and use of slices in additional detail:
Multiple clusters: Each slice can be deployed across one or more clusters. The registered clusters can be associated with the slice. The platform allows customer configurations related to resource quotas, service mesh, overlay network configuration, etc. to support multi-tenancy with application slices across the clusters.
Isolation: To provide isolation of application namespaces and their associated network traffic, the platform supports namespace association with network policies and the overlay network.
Global namespace/namespaces association: Each slice has a global namespace associated with it. This global namespace is the root namespace that would be present in every cluster that is associated with the slice. The global namespace root can be a K8S hierarchical namespace root. Admins can create sub-namespaces under this root namespace and attach them to the slice, as shown in the sketch below. In addition, each slice can be associated with one or more K8S native namespaces. The Slice configuration carries the application namespaces associated with the slice. In addition, each slice can be configured with a list of allowed namespaces that are allowed to communicate with the associated application namespaces.
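For instance, with the Kubernetes Hierarchical Namespace Controller (HNC), a sub-namespace under the slice's global root namespace could be created with a SubnamespaceAnchor (namespace names here are illustrative):

```yaml
apiVersion: hnc.x-k8s.io/v1alpha2  # Hierarchical Namespace Controller API
kind: SubnamespaceAnchor
metadata:
  name: team-a             # sub-namespace to create and attach to the slice
  namespace: acme-root     # the slice's global root namespace (illustrative)
```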
Network policy: Each slice is configured to apply network policy for the application namespaces associated with the slice. The network policy allows communication among all the associated application namespaces and the listed allowed namespaces, and blocks communication with other namespaces/applications/Pods. The slice operator implements the namespace association and network policy for the slice. It creates and applies appropriate K8S native container networking and overlay networking resources to provide isolation for both north-south ingress and east-west ingress/egress traffic.
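A minimal sketch of such a policy, rendered as a native Kubernetes NetworkPolicy (namespace names are illustrative, and the actual resources generated by the slice operator may differ):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: slice-isolation
  namespace: ecommerce       # an application namespace associated with the slice
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes: ["Ingress", "Egress"]
  ingress:
    - from:
        - namespaceSelector:
            matchExpressions:
              - key: kubernetes.io/metadata.name
                operator: In
                values: ["ecommerce", "kube-system"]  # associated + allowed namespaces
  egress:
    - to:
        - namespaceSelector:
            matchExpressions:
              - key: kubernetes.io/metadata.name
                operator: In
                values: ["ecommerce", "kube-system"]
```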
Secure Overlay Network: Each slice can be configured with its own secure overlay network that spans one or more clusters. The secure overlay network provides the micro-segmentation, traffic isolation, and security needed for multi-tenancy. The secure overlay is created using networking services such as VPN gateways, slice IP routers, and layer 2 overlay data plane and control plane network services. The secure overlay also integrates with the service mesh control and data plane to provide service discovery across the clusters, and with north-south ingress and east-west ingress/egress gateways. These services are all specific to each slice. The communication between these services can be purely on the overlay network or a combination of the overlay and container networks. The overlay network is a collection of point-to-point virtual wire (vWire) tunnels. The overlay supports different tunnel protocols such as GRE, IP-in-IP, etc. To provide isolation for multi-tenancy, the traffic in these tunnels can be encrypted; the VPN tunnel between the clusters can be encrypted with different techniques such as OpenVPN, L2TP/IPSEC, WireGuard, PPTP, IKEv2, etc. The controller manages the orchestration of the tunnels by generating configuration and associated keys/certificates and other parameters. In addition, the communication between services inside the cluster and across the clusters can use mTLS authentication.
The secure overlay network, in addition to namespace-based network policies and authentication and authorization, provides zero-trust security with the application slices, which is essential for multi-tenancy.
RBAC and access control: On both controller clusters and slice clusters, service accounts (and other identity management solution-based tokens) and appropriate RBAC policies are used to provide access control to the customer/slice resources. The allowed service accounts are able to configure and manage the customer/slice resources, while access to the resources is blocked for others.
Resource quotas and optimized utilization: Admins can configure appropriate resource quotas for customers/tenants/slices. The controller passes these resource quota requirements to all the clusters during slice orchestration. The slice operator in each cluster implements the resource requirements across all the associated namespaces and overlay network services and other slice services. The slice operator also monitors the resource usage and takes appropriate actions like generating alerts and events that can trigger corrective actions by the controller.
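Concretely, the per-namespace portion of such a quota could be realized with a native ResourceQuota object (names and values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: slice-quota
  namespace: ecommerce   # an application namespace associated with the slice
spec:
  hard:
    requests.cpu: "8"        # aggregate CPU requests allowed in this namespace
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"               # cap on pod count for the tenant's namespace
```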
QOS profiles and traffic control: The platform allows Admins to create separate QOS profiles for each customer/slice. This allows the platform to support multi-tenancy with applications/slices with different traffic control/priorities for each tenant. Different tenants can have different QOS profiles, e.g., high priority slices, medium priority slices, and low priority slices. Admins are thus able to support and enforce multi-tenancy with different traffic control requirements for each customer/tenant.
Slice monitoring for multi-tenancy: Controller and slice operators in all the clusters work together to ingest telemetry from multi-tenancy related resources like namespaces, network policy, overlay network services and other slice services. The configuration drift and other violations are detected, and appropriate alerts and events are generated to the controller so appropriate corrective actions can be taken.
Slice optimization: The platform allows Admins to configure and implement different RL-driven slice optimization policies. Some of the RL policies are (1) load-balancer optimization for efficient traffic control and distribution, (2) workload placement for cost and resource optimization, and (3) a slice-wide auto-scaler policy to optimize the cost and resources used for auto-scaling the application/services deployment.
Service discovery and isolation: Each slice has its own slice-specific service discovery, and services discovered across the slice are isolated from other customers and slices. Each slice can have a dedicated slice DNS to provide isolation across the tenants. Since each overlay has its own L3 IP domain, services in one slice cannot resolve or access services in other slices over the overlay network. In addition, network policies at the namespace/pod level provide access control as well.
Multi-tenancy with application slices in a single cluster: The platform allows Admins to provide multi-tenancy with application slices for application deployments for multiple customers/tenants within a single cluster. The application slice features discussed above apply to single cluster deployments as well.
While various embodiments of the invention have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention as defined by the appended claims.
Related Application: U.S. Provisional Application No. 63/183,244, filed May 2021 (US).