The field relates generally to information processing systems, and more particularly to management of information processing systems comprising edge computing networks.
A distributed machine-to-machine computing network such as, for example, an Internet of Things (IoT) computing environment, can be part of an information processing system. The IoT computing environment typically comprises a plurality of smart devices connected via a communication network in which a large amount of data is transmitted to and from the smart devices.
Edge devices are examples of smart devices with computing power located at or near the end user. Edge computing is a strategy for computing at the location where data is collected or used. This strategy allows the data to be processed at the edge of the computing environment rather than being sent back to a centralized datacenter or cloud computing platform, which can also be part of the information processing system.
Thus, edge computing takes place at or near the physical location of the user or the source of the data. The edge devices serve as network entry and/or exit points, and are deployed in part to benefit from enhanced local physical security. However, edge computing networks are responsible for connecting local area networks to external networks and thus can be vulnerable to cyberattacks.
Illustrative embodiments provide techniques for secure edge computing network management in information processing systems.
For example, in one illustrative embodiment, a processing platform comprises at least one processor coupled to at least one memory and is configured to determine that a given edge node has joined an edge computing network comprising a plurality of edge nodes. The processing platform is further configured to determine that security data associated with at least one of the plurality of edge nodes is suitable for the given edge node. The processing platform is further configured to cause a transfer of the security data from the at least one of the plurality of edge nodes, determined to be suitable for the given edge node, to the given edge node.
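By way of non-limiting illustration, the following Python sketch outlines how a processing platform might realize these three operations, i.e., detecting that a given edge node has joined, determining a suitable source of security data, and causing the transfer. The class and method names (e.g., EdgeManagementPlatform, find_suitable_source) are hypothetical and are not drawn from any particular product or standard, and the resource-matching rule is likewise only an assumed placeholder.

```python
# Hypothetical sketch of the three operations performed by the processing
# platform: detect a newly joined edge node, determine an existing node whose
# security data suits it, and cause a transfer of that security data.
from dataclasses import dataclass


@dataclass
class EdgeNode:
    node_id: str
    resources: dict                 # e.g., {"cpu": "arm64", "protocols": ("MQTT",)}
    security_data: bytes = b""      # serialized pre-trained security model


class EdgeManagementPlatform:
    def __init__(self) -> None:
        self.nodes: dict[str, EdgeNode] = {}

    def on_node_joined(self, new_node: EdgeNode) -> None:
        """Invoked when a given edge node joins the edge computing network."""
        source = self.find_suitable_source(new_node)
        if source is not None:
            self.transfer_security_data(source, new_node)
        self.nodes[new_node.node_id] = new_node

    def find_suitable_source(self, new_node: EdgeNode) -> EdgeNode | None:
        """Determine which existing edge node holds security data suitable for
        the new node; here suitability is an exact resource match (placeholder)."""
        for node in self.nodes.values():
            if node.security_data and node.resources == new_node.resources:
                return node
        return None

    def transfer_security_data(self, source: EdgeNode, target: EdgeNode) -> None:
        """Cause a transfer of the security data to the given edge node."""
        target.security_data = source.security_data
```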
Further illustrative embodiments are provided in the form of a non-transitory computer-readable storage medium having embodied therein executable program code that when executed by at least one processor causes the at least one processor to perform the above-mentioned operations. Still further illustrative embodiments comprise methodologies performed by a processing platform comprising at least one processor coupled to at least one memory.
Advantageously, illustrative embodiments provide, inter alia, automatic deployment of edge nodes installed with a suitable secure-ready model.
These and other features and advantages of embodiments described herein will become more apparent from the accompanying drawings and the following detailed description.
Illustrative embodiments will be described herein with reference to exemplary information processing systems and associated host devices, storage devices, network devices and other processing devices. It is to be appreciated, however, that these and other embodiments are not restricted to the particular illustrative system and device configurations shown. Accordingly, the term “information processing system” as used herein is intended to be broadly construed, so as to encompass, for example, processing systems comprising edge and cloud computing environments, as well as other types of processing systems comprising various combinations of physical and virtual processing resources. An information processing system may therefore comprise, for example, at least one data center or other cloud-based system that includes one or more clouds, e.g., multi-cloud computing network, hosting multiple tenants that share cloud resources, as well as one or more edge computing networks as will be further explained. Numerous different types of enterprise computing and storage systems are also encompassed by the term “information processing system” as that term is broadly used herein.
As mentioned above, edge computing networks are typically vulnerable to cyberattacks that target one or more edge devices. One reason is that an edge computing network is typically slow to respond to an attack. Illustrative embodiments address this and other technical challenges with respect to edge computing networks by providing secure edge computing management functionalities that comprise a framework to securely deploy edge devices with a pre-trained security model, e.g., an intrusion detection system (IDS) model, an intrusion prevention system (IPS) model, and/or the like. More particularly, the framework learns the environment, identifies neighborhood edge devices, and stores metadata information regarding resources and edge devices. Since edge devices are more exposed to attacks, the framework learns from the attacks and stores the attack information. When any new edge device is deployed, the framework determines the most suitable existing edge device and transfers the model trained from the learned information to the newly deployed edge device (note that an edge device may also illustratively be referred to as an edge node).
Prior to describing further details of secure edge computing network management functionalities according to illustrative embodiments, some exemplary information processing system environments with which such functionalities can be implemented will be described.
By way of example only, an information processing system environment 100 comprises a set of cloud computing sites 102-1, . . . 102-M (collectively, cloud computing sites 102) that collectively comprise a multi-cloud computing network 103. Information processing system environment 100 also comprises a set of edge computing sites 104-1, . . . 104-N (collectively, edge computing sites 104, also referred to as edge computing nodes or edge servers) that collectively comprise at least a portion of an edge computing network 105. The cloud computing sites 102, also referred to as cloud data centers, are assumed to comprise a plurality of cloud devices or cloud nodes (not shown).
Information processing system environment 100 also includes a plurality of edge devices that are coupled to each of the edge computing sites 104 as part of edge computing network 105. A set of edge devices 106-1, . . . 106-P are coupled to edge computing site 104-1, and a set of edge devices 106-P+1, . . . 106-Q are coupled to edge computing site 104-N. The edge devices 106-1, . . . 106-Q are collectively referred to as edge devices 106. Edge devices 106 may comprise, for example, physical computing devices such as Internet of Things (IoT) devices, smart devices, sensor devices (e.g., for telemetry measurements, videos, images, etc.), mobile telephones, laptop computers, tablet computers, desktop computers or other types of devices utilized by members of an enterprise, in any combination. Such devices are examples of what are more generally referred to herein as “processing devices.” Some of these processing devices are also generally referred to herein as “computers.” The edge devices 106 may also or alternately comprise virtualized computing resources, such as virtual machines (VMs), containers, etc. In this illustration, the edge devices 106 may be tightly coupled or loosely coupled with other devices, such as one or more input sensors and/or output instruments (not shown). Couplings can take many forms, including but not limited to using intermediate networks, interfacing equipment, connections, etc.
Edge devices 106 in some embodiments comprise respective processing devices associated with a particular company, organization or other enterprise. In addition, at least portions of information processing system environment 100 may also be referred to herein as collectively comprising an “enterprise.” Numerous other operating scenarios involving a wide variety of different types and arrangements of processing nodes are possible, as will be appreciated by those skilled in the art.
Note that the number of different components referred to herein (e.g., the particular numbers of cloud computing sites 102, edge computing sites 104, and edge devices 106) is arbitrary and presented by way of example only; other embodiments may include more or fewer of such components.
As shown, the cloud computing sites 102 host cloud-hosted applications 108 and the edge computing sites 104 host edge-hosted applications 110.
In some embodiments, one or more of cloud computing sites 102 and one or more of edge computing sites 104 collectively provide at least a portion of an information technology (IT) infrastructure operated by an enterprise, where edge devices 106 are operated by users of the enterprise. The IT infrastructure comprising cloud computing sites 102 and edge computing sites 104 may therefore be referred to as an enterprise system. As used herein, the term “enterprise system” is intended to be construed broadly to include any group of systems or other computing devices. In some embodiments, an enterprise system includes cloud infrastructure comprising one or more clouds (e.g., one or more public clouds, one or more private clouds, one or more hybrid clouds, combinations thereof, etc.). The cloud infrastructure may host at least a portion of one or more of cloud computing sites 102 and/or one or more of the edge computing sites 104. A given enterprise system may host assets that are associated with multiple enterprises (e.g., two or more different businesses, organizations or other entities). In another example embodiment, one or more of the edge computing sites 104 may be operated by enterprises that are separate from, but communicate with, enterprises which operate one or more cloud computing sites 102.
As noted above, cloud computing sites 102 host cloud-hosted applications 108 and edge computing sites 104 host edge-hosted applications 110. Edge devices 106 may exchange information with cloud-hosted applications 108 and/or edge-hosted applications 110. For example, edge devices 106 or edge-hosted applications 110 may send information to cloud-hosted applications 108. Edge devices 106 or edge-hosted applications 110 may also receive information (e.g., such as instructions) from cloud-hosted applications 108.
It should be noted that, in some embodiments, requests and responses or other information may be routed through multiple edge computing sites.
It is to be appreciated that multi-cloud computing network 103, edge computing network 105, and edge devices 106 may be collectively and illustratively referred to herein as a “multi-cloud edge platform.” In some embodiments, edge computing network 105 and edge devices 106 are considered a “distributed edge system.”
In some embodiments, edge data from edge devices 106 may be stored in a database or other data store (not shown), either locally at edge computing sites 104 and/or in a processed or transformed format at different endpoints (e.g., cloud computing sites 102, edge computing sites 104, other ones of edge devices 106, etc.). The database or other data store may be implemented using one or more storage systems that are part of or otherwise associated with one or more of cloud computing sites 102, edge computing sites 104, and edge devices 106. By way of example only, the storage systems may comprise a scale-out all-flash content addressable storage array or other type of storage array. The term “storage system” as used herein is therefore intended to be broadly construed, and should not be viewed as being limited to content addressable storage systems or flash-based storage systems. A given storage system as the term is broadly used herein can comprise, for example, network-attached storage (NAS), storage area networks (SANs), direct-attached storage (DAS) and distributed DAS, as well as combinations of these and other storage types, including software-defined storage. Other particular types of storage products that can be used in implementing storage systems in illustrative embodiments include all-flash and hybrid flash storage arrays, software-defined storage products, cloud storage products, object-based storage products, and scale-out NAS clusters. Combinations of multiple ones of these and other storage products can also be used in implementing a given storage system in an illustrative embodiment.
Cloud computing sites 102, edge computing sites 104, and edge devices 106 in the illustrative embodiment are assumed to communicate with one another over one or more communications networks 112.
It is to be appreciated that the particular arrangement of cloud computing sites 102, edge computing sites 104, edge devices 106, cloud-hosted applications 108, edge-hosted applications 110, and communications networks 112 described above is presented by way of example only, and alternative arrangements can be used in other embodiments.
It is to be understood that the particular set of components described is likewise presented for purposes of illustration, and that additional or alternative components may be used in other embodiments.
Cloud computing sites 102, edge computing sites 104, edge devices 106, and other components of the information processing system environment 100 are assumed to be implemented using at least one processing platform comprising one or more processing devices, each having a processor coupled to a memory.
Cloud computing sites 102, edge computing sites 104, edge devices 106, or components thereof, may be implemented on respective distinct processing platforms, although numerous other arrangements are possible. For example, in some embodiments at least portions of edge devices 106 and edge computing sites 104 may be implemented on the same processing platform. One or more of edge devices 106 can therefore be implemented at least in part within at least one processing platform that implements at least a portion of edge computing sites 104. In other embodiments, one or more of edge devices 106 may be separated from but coupled to one or more of edge computing sites 104. Various other component coupling arrangements are contemplated herein.
The term “processing platform” as used herein is intended to be broadly construed so as to encompass, by way of illustration and without limitation, multiple sets of processing devices and associated storage systems that are configured to communicate over one or more networks. For example, distributed implementations of information processing system environment 100 are possible, in which certain components of the system reside in one data center in a first geographic location while other components of the system reside in one or more other data centers in one or more other geographic locations that are potentially remote from the first geographic location. Thus, it is possible in some implementations of the system for cloud computing sites 102, edge computing sites 104, and edge devices 106, or portions or components thereof, to reside in different data networks. Distribution as used herein may also refer to functional or logical distribution rather than to only geographic or physical distribution. Numerous other distributed implementations are possible.
In some embodiments, information processing system environment 100 may be implemented in part or in whole using a Kubernetes container orchestration system. Kubernetes is an open-source system for automating application deployment, scaling, and management within a container-based information processing system comprised of components referred to as pods, nodes and clusters. Types of containers that may be implemented or otherwise adapted within the Kubernetes system include, but are not limited to, Docker containers or other types of Linux containers (LXCs) or Windows containers.
In general, for a Kubernetes environment, one or more containers are part of a pod. Thus, the environment may be referred to, more generally, as a pod-based system, a pod-based container system, a pod-based container orchestration system, a pod-based container management system, or the like. As mentioned above, the containers can be any type of container, e.g., Docker container, etc. Furthermore, a pod is typically considered the smallest execution unit in the Kubernetes container orchestration environment. A pod encapsulates one or more containers. One or more pods are executed on a worker node. Multiple worker nodes form a cluster. A Kubernetes cluster is managed by at least one manager node. A Kubernetes environment may include multiple clusters respectively managed by multiple manager nodes. Furthermore, pods typically represent the respective processes running on a cluster. A pod may be configured as a single process wherein one or more containers execute one or more functions that operate together to implement the process. Pods may each have a unique Internet Protocol (IP) address enabling pods to communicate with one another, and for other system components to communicate with each pod. Still further, pods may each have persistent storage volumes associated therewith. Configuration information (configuration objects) indicating how a container executes can be specified for each pod. It is to be appreciated, however, that embodiments are not limited to Kubernetes container orchestration techniques or the like.
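As a brief, hedged illustration of the pod abstraction, the following sketch uses the official kubernetes Python client to create a single-container pod programmatically; it assumes a reachable cluster, a valid kubeconfig, and placeholder pod, container, and image names.

```python
# Minimal sketch: create a single-container pod with the Kubernetes Python
# client. Assumes a reachable cluster and a valid kubeconfig; the pod,
# container, and image names below are placeholders.
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() inside a pod

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="edge-ids-pod", labels={"app": "edge-ids"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="ids-container",
                image="registry.example.com/edge-ids:latest",   # placeholder image
                ports=[client.V1ContainerPort(container_port=8080)],
            )
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```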
Kubernetes has become the prevalent container orchestration system for managing containerized workloads and has been adopted by many enterprise-based IT organizations to deploy their application programs (applications) and edge computing networks. While the Kubernetes container orchestration system is mentioned as an illustrative implementation, it is to be understood that alternative deployment systems, as well as information processing systems other than container-based systems, can be utilized.
It is to be further appreciated that illustrative embodiments are not limited to the particular system environments and configurations described above.
Referring now to another illustrative embodiment, an information processing system environment 200 comprises a first set (level-0) of edge nodes 202-1, 202-2, and 202-3 (collectively referred to as level-0 edge nodes 202) operatively coupled to a second set (level-1) of edge nodes 204-1, 204-2, 204-3, and 204-4 (collectively referred to as level-1 edge nodes 204). The level-1 edge nodes 204 are operatively coupled to one or more edge datacenters 206, which are operatively coupled to one or more cloud datacenters 208.
In some illustrative embodiments, some or all of the one or more edge datacenters 206 can be considered to correspond to some or all of the edge computing sites 104 described above.
Given the above-described illustrative embodiments of information processing system environments with edge computing networks, a workflow 300 for secure edge computing network management, involving an existing edge node 302 and a newly deployed edge node 304, will now be described.
As shown, in step 310 of workflow 300, existing edge node 302 stores metadata that is descriptive of the existing topology of the edge computing network in which it is deployed. This data can be collected by existing edge node 302 at the time it is deployed and updated as appropriate as existing edge node 302 operates in the edge computing network. Such metadata may comprise information about some or all of the other edge nodes in the edge computing network as well as resources (e.g., hardware configuration, software configuration, etc.) available at each of the edge nodes.
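As one hedged illustration of step 310, the topology and resource metadata stored by existing edge node 302 might be represented as follows; the field names are assumptions chosen for readability rather than a prescribed schema.

```python
# Illustrative representation of the metadata an existing edge node stores
# about the topology of its edge computing network (step 310). All field
# names are assumed for illustration only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class NodeResources:
    hardware: Dict[str, str]    # e.g., {"cpu": "arm64", "memory_gb": "4"}
    software: Dict[str, str]    # e.g., {"os": "linux", "runtime": "containerd"}
    protocols: List[str]        # e.g., ["MQTT", "CoAP"]


@dataclass
class TopologyMetadata:
    node_id: str
    ip_address: str
    port: int
    neighbors: Dict[str, NodeResources] = field(default_factory=dict)

    def update_neighbor(self, neighbor_id: str, resources: NodeResources) -> None:
        """Add or refresh metadata for a neighboring edge node as the topology changes."""
        self.neighbors[neighbor_id] = resources
```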
In step 312, existing edge node 302 stores data learned from one or more previous cyberattacks. Since existing edge node 302 has been deployed and operating in the edge computing network for some period of time, it is assumed that it has either directly experienced one or more cyberattacks, and thus obtained information regarding the one or more cyberattacks, or has been updated by another edge node or other source (not shown) in the edge computing network with information about the one or more cyberattacks.
In step 314, existing edge node 302 trains a security model using the data learned from the one or more previous cyberattacks. As mentioned above, such a model may comprise an IPS model, an IDS model, and/or any other appropriate type of security model which can be trained by data descriptive of one or more previous cyberattacks in a typical manner. Existing edge node 302 may alternatively obtain the security model, pre-trained using data learned from the one or more previous cyberattacks, from another source such as, but not limited to, another edge node (not shown).
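One possible realization of step 314, sketched below under the assumption that the learned attack data has already been converted into labeled feature vectors, trains a small Keras classifier as an IDS-style security model; the synthetic data, architecture, and hyperparameters are illustrative placeholders rather than a recommended configuration.

```python
# Illustrative sketch of step 314: train a small IDS-style security model on
# labeled data learned from previous cyberattacks. The random data below is a
# placeholder standing in for real traffic features and attack labels.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X_train = rng.random((1000, 20), dtype=np.float32)   # placeholder traffic features
y_train = rng.integers(0, 2, size=1000)              # placeholder labels: 0=benign, 1=attack

security_model = keras.Sequential([
    layers.Input(shape=(X_train.shape[1],)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),            # probability that traffic is malicious
])

security_model.compile(optimizer="adam",
                       loss="binary_crossentropy",
                       metrics=["accuracy"])

security_model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2)
security_model.save("security_model.keras")           # persisted for later transfer (step 318)
```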
In step 316, existing edge node 302 and new edge node 304 establish communication. It can be assumed that, in some illustrative embodiments, this step is performed as new edge node 304 is being deployed in the edge computing network. Note that, as part of step 316, new edge node 304 also provides existing edge node 302 with its resource information (e.g., hardware configuration, software configuration, etc.) to enable existing edge node 302 to determine whether or not the security model it trained or otherwise obtained in step 314 is suitable for new edge node 304. In some illustrative embodiments, the determination of model suitability comprises existing edge node 302 comparing resources on new edge node 304 with resources on edge nodes (i.e., itself and/or other edge nodes) from which cyberattack data used to train the security model was learned. When the resources match or substantially match, it is determined that the security model is suitable to protect new edge node 304.
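A minimal sketch of this suitability check, assuming resource information is exchanged as simple key/value dictionaries, might look like the following; the 0.8 threshold for a "substantial" match is an arbitrary illustrative value, not a prescribed parameter.

```python
# Hypothetical suitability check (step 316): compare the new edge node's
# resource information against the resources of the node(s) whose attack data
# trained the security model. The 0.8 threshold is an arbitrary example.
def resources_match(model_resources: dict, new_node_resources: dict,
                    threshold: float = 0.8) -> bool:
    keys = set(model_resources) | set(new_node_resources)
    if not keys:
        return False
    matching = sum(1 for k in keys
                   if model_resources.get(k) == new_node_resources.get(k))
    return matching / len(keys) >= threshold


# Example usage with illustrative resource descriptors.
trained_on = {"cpu": "arm64", "os": "linux", "protocol": "MQTT", "memory_gb": 4}
similar    = {"cpu": "arm64", "os": "linux", "protocol": "MQTT", "memory_gb": 2}

print(resources_match(trained_on, dict(trained_on)))   # True: exact match
print(resources_match(trained_on, similar))            # False: 3 of 4 keys match (0.75 < 0.8)
```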
In step 318, based on a suitable match determined in step 316, existing edge node 302 transfers the pre-trained security model and/or other learned data to new edge node 304. Also, some or all of the metadata obtained/stored by existing edge node 302 in step 310 can be provided to new edge node 304 in step 318, although new edge node 304 can also collect similar metadata from any one or more other neighborhood edge nodes.
If it is determined that the pre-trained security model at existing edge node 302 is not a suitable match for new edge node 304, existing edge node 302 can notify new edge node 304 (as part of step 316), and new edge node 304 can establish communication with another existing edge node until it finds a suitable security model matching its resources.
It is to be appreciated that once new edge node 304 is deployed and operating in the edge computing network, it too can serve the same or similar functions as existing edge node 302 (i.e., perform steps 310-318) with respect to another newly deployed edge node. As such, in one or more illustrative embodiments, each edge node in an edge computing network can be configured with a similar architecture for collecting metadata, training security models, comparing resources, and sharing the trained model.
An example architecture 400 of such an edge node comprises a metadata component 410, an edge-to-edge communication component 412, and a learning component 414. As explained above, each edge node in an edge computing network is configured with these components to identify neighborhood edge nodes and determine whether there are any newly deployed edge nodes. The edge node can then determine the best-suited pre-trained security model and perform a learning transfer to the newly deployed edge node.
Metadata component 410 is configured, for example, to store, or otherwise manage, data relevant to secure edge computing network management for a given edge node. As mentioned, such data may comprise information about some or all of the other edge nodes in the edge computing network as well as resources (e.g., hardware configuration, software configuration, etc.) available at each of the edge nodes. Such data may also comprise data learned from one or more previous cyberattacks. Such data may be obtained by metadata component 410 at the time the given edge node joins the edge computing network and/or during operation of the given edge node in the edge computing network.
Edge-to-edge communication component 412 manages, in conjunction with metadata component 410, communication between edge nodes in the edge computing network. Further, edge-to-edge communication component 412 is configured to compute and maintain a topological structure of the edge computing network, as well as cause exchange with other edge nodes of information such as, but not limited to, edge node identifiers (IDs), Internet Protocol (IP) addresses, and port numbers that are determined through node lookups.
Thus, by way of example, each edge node obtains (i.e., via edge-to-edge communication component 412) and stores (i.e., via metadata component 410) the information locally exchanged with other edge nodes, and updates it when any new edge node joins the edge computing network. In some illustrative embodiments, information relevant to secure edge computing network management is merged into a single document which is stored in metadata component 410. By way of example only, the document may be in a JavaScript Object Notation (JSON) file format with the node ID as the filename. Edge-to-edge communication component 412 can then obtain the documents of the active edge nodes in the edge computing network and update its local store, i.e., in metadata component 410.
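A hedged example of such a document, using an assumed set of fields and the placeholder node ID edge-node-17, could be produced and read back as follows.

```python
# Illustrative sketch: merge locally held management information into a single
# JSON document whose filename is the node ID, then read a document back to
# update the local store. Field values and the node ID are placeholders.
import json
from pathlib import Path

node_id = "edge-node-17"
document = {
    "node_id": node_id,
    "ip_address": "192.0.2.17",
    "port": 8468,
    "resources": {"cpu": "arm64", "os": "linux", "protocols": ["MQTT"]},
    "attack_history": ["example-attack-signature-01"],   # learned cyberattack identifiers
}

# Store the merged document locally, using the node ID as the filename.
Path(f"{node_id}.json").write_text(json.dumps(document, indent=2))

# A document obtained from another active edge node can be loaded the same
# way and merged into the local metadata store.
peer_document = json.loads(Path(f"{node_id}.json").read_text())
print(peer_document["resources"]["protocols"])
```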
In one non-limiting example, edge-to-edge communication component 412 can comprise an edge communication protocol such as Kademlia. Kademlia is a distributed hash table (DHT) for decentralized peer-to-peer computer networks. A Kademlia network consists of nodes, each of which has a unique 160-bit ID as an identifier. Nodes in the Kademlia network communicate using the User Datagram Protocol (UDP), and the participating nodes exchange their information through node lookups. An overlay network is formed in which each node is identified by its own node ID. Besides the unique ID, each node maintains a routing table and a DHT. The routing table maintains a list for each bit of the node ID and is divided into lists known as buckets, wherein each bucket contains contact information and the distance from the current node. The contact information in a bucket comprises the node ID, IP address, and port number of another node. Buckets in the routing table are updated every time a new node joins the network. In addition, new edge nodes can be bootstrapped by knowing basic contact information of any other reachable DHT node in the network. The DHT segment stores key/value pairs, where the key is the name of the public metadata document and the value is the network location of the document. As such, Kademlia or the like can be used as the edge communication protocol due to its automatic spreading of contact information, and the process of finding nodes and resource descriptions in this type of network is fast and efficient. Kademlia, or whichever communication protocol is used, can more generally be considered part of edge-to-edge communication component 412.
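By way of a hedged sketch, one possible realization of this exchange uses the third-party kademlia Python package; the bootstrap address, port, and key/value contents below are placeholders, and a reachable peer node is assumed.

```python
# Sketch of publishing a metadata document location over a Kademlia DHT using
# the third-party `kademlia` package (asyncio-based). The bootstrap address,
# port, and key/value contents are placeholders; a reachable peer is assumed.
import asyncio
from kademlia.network import Server


async def publish_metadata_location() -> None:
    node = Server()
    await node.listen(8468)                            # local UDP port
    await node.bootstrap([("192.0.2.10", 8468)])       # any reachable DHT node

    # Key: name of the public metadata document; value: its network location.
    await node.set("edge-node-17.json",
                   "http://192.0.2.17:8080/edge-node-17.json")

    # A peer can later resolve the document location through a node lookup.
    location = await node.get("edge-node-17.json")
    print("metadata document available at:", location)

    node.stop()


asyncio.run(publish_metadata_location())
```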
Learning component 414 is configured to monitor and analyze network security at the edge using an intrusion detection system (IDS), an intrusion prevention system (IPS), and/or the like. The IDS monitors, analyzes, and reports on network events to detect anomalies and possible malicious activities (i.e., cyberattacks and the like). The IPS also performs detection but additionally acts against cyberattacks. One or more security models are trained using data obtained from the IDS, the IPS, and/or some other source having data relevant to one or more previous cyberattacks.
Given a security model trained on learned data regarding one or more previous cyberattacks as explained above, a transfer learning process 500 for securing a newly deployed edge node will now be described.
With the edge-to-edge communication (e.g., edge-to-edge communication component 412), transfer learning process 500 determines that a new edge node has been deployed (step 502). Based on a determination (step 504) of the neighborhood of nearest edge nodes (e.g., edge nodes 1, 4 and 5) with respect to the new edge node, transfer learning process 500 then determines which edge node of the neighborhood edge nodes has the most suitable pre-trained security model or other learned data (more generally, pre-trained security model or other learned data can be considered security data) that can be used for training the newly deployed edge node, and causes transfer (step 506) of the security model and/or other learned data from that edge node to the new edge node (e.g., from edge node 5 to the new edge node). In some illustrative embodiments, transfer learning can be performed using a Keras trainable application programming interface (API). The new edge node then implements the security model/learned data resulting in a secure-ready new edge node (step 508) which is now trained to prevent or otherwise mitigate any subsequent similar cyberattacks.
In some illustrative embodiments, parameters that can be used to determine the edge node with the most suitable security model may include, but are not limited to: (i) category of resources (e.g., IoT devices) that are communicating in the edge computing network; (ii) protocol(s) that resources (edge nodes) use to communicate; and/or (iii) frequency of network traffic.
Suitability can be determined by comparing one or more of these parameters for the new edge node against one or more existing edge nodes in the edge computing network. In illustrative embodiments where transfer learning process 500 is implemented in an edge node itself, the edge node can compare its own parameters against those of the new edge node to determine whether or not its security model is suitable for the new edge node; otherwise, the edge node can determine other nearest neighbors of the new edge node and either select one based on the parameter comparison or have one of the neighboring edge nodes perform the parameter comparison itself. When a separate edge computing network component performs transfer learning process 500, that component determines the nearest neighbors of the new edge node and selects one based on the parameter comparison.
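For instance, a simple scoring scheme over the parameters listed above might be sketched as follows; the weights, parameter encodings, and candidate values are assumptions made purely for illustration.

```python
# Hypothetical scoring of candidate edge nodes against a new edge node using
# the parameters listed above: resource category, communication protocols, and
# network traffic frequency. The weights are arbitrary illustrative values.
def suitability_score(candidate: dict, new_node: dict) -> float:
    score = 0.0
    if candidate["resource_category"] == new_node["resource_category"]:
        score += 0.4
    shared = set(candidate["protocols"]) & set(new_node["protocols"])
    combined = set(candidate["protocols"]) | set(new_node["protocols"])
    if combined:
        score += 0.4 * len(shared) / len(combined)
    # Closer traffic frequency (e.g., requests per minute) scores higher.
    diff = abs(candidate["traffic_freq"] - new_node["traffic_freq"])
    score += 0.2 / (1.0 + diff / 100.0)
    return score


candidates = [
    {"node_id": "edge-1", "resource_category": "iot_sensor",
     "protocols": ["MQTT"], "traffic_freq": 120},
    {"node_id": "edge-5", "resource_category": "iot_camera",
     "protocols": ["MQTT", "RTSP"], "traffic_freq": 600},
]
new_node = {"resource_category": "iot_camera",
            "protocols": ["RTSP"], "traffic_freq": 550}

best = max(candidates, key=lambda c: suitability_score(c, new_node))
print("most suitable existing edge node:", best["node_id"])   # edge-5 in this example
```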
Note that, in some illustrative embodiments, nearest neighbor edge nodes or edge nodes in a neighborhood of a given edge node can be edge nodes that are geographically close to the given edge node in terms of ease of data routing. Additionally or alternatively, neighboring edge nodes can be edge nodes that have some other common attribute or parameter with the given edge node that would constitute a neighborhood (e.g., similar vendors, similar resources, similar functionalities, etc.).
In one illustrative example, transfer learning process 500 can obtain layers from a previously trained model, freeze those layers, and add new and trainable layers on top of the frozen layers. These new layers learn to turn the old features into predictions on a new data set. The new layers are then trained on the new data set. The resulting model can be fine-tuned if required/desired.
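A hedged sketch of this procedure using the Keras trainable API is shown below; it assumes a previously trained security model has been saved as security_model.keras (e.g., as in the earlier training sketch) and uses synthetic placeholder data to stand in for observations at the new edge node.

```python
# Sketch of the transfer-learning step described above: load a previously
# trained security model, freeze its learned layers, add a new trainable
# classification head, train on the new node's data, then optionally fine-tune.
# File names, data shapes, and hyperparameters are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

base_model = keras.models.load_model("security_model.keras")

# Reuse everything except the original output layer as a frozen feature extractor.
feature_extractor = keras.Model(inputs=base_model.input,
                                outputs=base_model.layers[-2].output)
feature_extractor.trainable = False

inputs = keras.Input(shape=base_model.input_shape[1:])
features = feature_extractor(inputs, training=False)
outputs = layers.Dense(1, activation="sigmoid")(features)   # new trainable head
transferred = keras.Model(inputs, outputs)

transferred.compile(optimizer=keras.optimizers.Adam(1e-3),
                    loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data standing in for traffic observed at the new edge node.
rng = np.random.default_rng(1)
X_new = rng.random((200, base_model.input_shape[1]), dtype=np.float32)
y_new = rng.integers(0, 2, size=200)

transferred.fit(X_new, y_new, epochs=5, batch_size=32)

# Optional fine-tuning: unfreeze the base layers and retrain at a low learning rate.
feature_extractor.trainable = True
transferred.compile(optimizer=keras.optimizers.Adam(1e-5),
                    loss="binary_crossentropy", metrics=["accuracy"])
transferred.fit(X_new, y_new, epochs=2, batch_size=32)
```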
Advantageously, as explained herein, illustrative embodiments provide: (i) automatic deployment of edge devices installed with the best-suited secure-ready model; (ii) training of newly deployed edge nodes based on the most suitable learning component of the edge computing environment; and/or (iii) learning-component-driven edge deployment. Exemplary benefits include, but are not limited to, enhanced edge security through a secure-ready edge deployment with trained intrusion detection system and intrusion prevention system functionalities, thereby enabling an optimized and appropriate edge deployment that is resilient to learned security attacks.
The processing platform 700 in this embodiment comprises a plurality of processing devices, denoted 702-1, 702-2, 702-3, . . . 702-N, which communicate with one another over network(s) 704. It is to be appreciated that the methodologies described herein may be executed in one such processing device 702, or executed in a distributed manner across two or more such processing devices 702. It is to be further appreciated that a server, a client device, a computing device or any other processing platform element may be viewed as an example of what is more generally referred to herein as a “processing device.”
The processing device 702-1 in the processing platform 700 comprises a processor 710 coupled to a memory 712. The processor 710 may comprise a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other type of processing circuitry, as well as portions or combinations of such circuitry elements. Components of systems as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device such as processor 710. Memory 712 (or other storage device) having such program code embodied therein is an example of what is more generally referred to herein as a processor-readable storage medium. Articles of manufacture comprising such computer-readable or processor-readable storage media are considered embodiments of the invention. A given such article of manufacture may comprise, for example, a storage device such as a storage disk, a storage array or an integrated circuit containing memory. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals.
Furthermore, memory 712 may comprise electronic memory such as random-access memory (RAM), read-only memory (ROM) or other types of memory, in any combination. The one or more software programs, when executed by a processing device such as the processing device 702-1, cause the device to perform functions associated with one or more of the components/steps of the systems/methodologies described herein.
Processing device 702-1 also includes network interface circuitry 714, which is used to interface the device with the networks 704 and other system components. Such circuitry may comprise conventional transceivers of a type well known in the art.
The other processing devices 702 (702-2, 702-3, . . . 702-N) of the processing platform 700 are assumed to be configured in a manner similar to that shown for processing device 702-1 in the figure.
The processing platform 700 is presented by way of example only, and a given processing platform in other embodiments may comprise additional or alternative components.
Also, numerous other arrangements of servers, clients, computers, storage devices or other components are possible in processing platform 700. Such components can communicate with other elements of the processing platform 700 over any type of network, such as a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, or various portions or combinations of these and other types of networks.
Furthermore, it is to be appreciated that the processing platform 700 can comprise virtual machines (VMs) implemented using a hypervisor.
As is known, virtual machines are logical processing elements that may be instantiated on one or more physical processing elements (e.g., servers, computers, processing devices). That is, a “virtual machine” generally refers to a software implementation of a machine (i.e., a computer) that executes programs like a physical machine. Thus, different virtual machines can run different operating systems and multiple applications on the same physical computer. Virtualization is implemented by the hypervisor which is directly inserted on top of the computer hardware in order to allocate hardware resources of the physical computer dynamically and transparently. The hypervisor affords the ability for multiple operating systems to run concurrently on a single physical computer and share hardware resources with each other.
It was noted above that portions of the computing environment may be implemented using one or more processing platforms. A given such processing platform comprises at least one processing device comprising a processor coupled to a memory, and the processing device may be implemented at least in part utilizing one or more virtual machines, containers or other virtualization infrastructure. By way of example, such containers may be Docker containers or other types of containers.
The particular processing operations and other system functionality described above are presented by way of illustrative example only, and should not be construed as limiting the scope of the disclosure in any way.
It should again be emphasized that the above-described embodiments of the invention are presented for purposes of illustration only. Many variations may be made in the particular arrangements shown. For example, although described in the context of particular system and device configurations, the techniques are applicable to a wide variety of other types of data processing systems, processing devices and distributed virtual infrastructure arrangements. In addition, any simplifying assumptions made above in the course of describing the illustrative embodiments should also be viewed as exemplary rather than as requirements or limitations of the invention.