The instant patent application is related to and claims priority from the co-pending India provisional patent application entitled, “ORCHESTRATION IN CLOUD INFRASTRUCTURE”, Serial No.: 202241000379, Filed: 4 Jan. 2022, naming as inventors Sriramoju et al, attorney docket number: NTNX-331-INPR, which is incorporated in its entirety herewith.
The instant patent application is related to and claims priority from the co-pending India non-provisional patent application entitled, “ORCHESTRATION OF TASKS IN TENANT CLOUDS SPANNING MULTIPLE CLOUD INFRASTRUCTURES”, Serial No.: 202241000379, Filed: 9 Feb. 2022, naming as inventors Sriramoju et al, attorney docket number: NTNX-331-IN, which is incorporated in its entirety herewith.
The present disclosure relates to cloud infrastructures, and more specifically to orchestration of tasks in tenant clouds spanning multiple cloud infrastructures.
Cloud infrastructure refers to a collection of physical processing nodes, connectivity infrastructure, data storages, administration systems, etc., which are engineered to together provide a virtual computing infrastructure for various customers, with the scale of such virtual computing being specified often on demand. Examples of cloud infrastructures include Amazon Web Services (AWS) Cloud Infrastructure available from Amazon.com, Inc., Google Cloud Platform (GCP) available from Google LLC, etc., as is well known in the relevant arts.
The virtual computing infrastructure provided to each customer is normally referred to as a “cloud”. The virtual infrastructure contains computing resources (e.g., virtual machines, operating systems), storage resources (e.g., database servers, file systems) and other required resources such as networking resources (e.g., connection pools, etc.). A customer/owner (also known as tenant) of a cloud may deploy desired user applications/data services on the resources provided as a part of their cloud(s), with the services capable of processing user requests received from end user systems. A cloud provisioned for a tenant is thus referred to as a “tenant cloud”.
A tenant cloud is often provisioned to span multiple cloud infrastructures. Spanning multiple cloud infrastructures implies that resources of the tenant cloud would be present in each of the multiple cloud infrastructures. Thus, some VMs of a tenant cloud can be present in AWS infrastructure while some others can be present in GCP infrastructure. Typically, tenants opt for such provisioning for reasons such as cost, performance, scalability, availability, etc.
There is often a need to execute various tasks on the resources provided in a tenant cloud. For example, a tenant may wish to perform tasks such as powering down/up a VM, upgrading the software, taking backup of data, restoring data, etc. In general, it is desirable that tenants be able to easily and accurately control execution of (i.e., orchestrate) such tasks on any desired resources. Aspects of the present disclosure are directed to such orchestration.
Example embodiments of the present disclosure will be described with reference to the accompanying drawings briefly described below.
In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
Aspects of the present disclosure are directed to orchestration of tasks in tenant clouds. In an embodiment, an orchestrator receives a task-group and a condition, with the task-group specifying multiple tasks. The orchestrator selects a group of resources satisfying the condition, with the group of resources being of a tenant cloud spanning multiple cloud infrastructures. The orchestrator invokes the task-group on the group of resources to cause each of the multiple tasks to be executed on each of the group of resources.
Another aspect of the present disclosure facilitates a user to conveniently specify the condition at a desired level of precision. The user may first specify a tentative condition, and the orchestrator immediately displays a set of resources satisfying the tentative condition, thereby enabling the user to refine the tentative condition until the condition is formulated to the user's satisfaction.
According to yet another aspect, prior to receipt of the (tentative) condition, the orchestrator retrieves attribute-value pairs characterizing each of multiple resources in a first cloud infrastructure and multiple resources in a second cloud infrastructure. The retrieved attribute-value pairs are examined to select the group of resources satisfying the condition. The group of resources satisfying the condition may either be selected dynamically at the time of each invocation or determined statically when the condition is received.
In an embodiment, the orchestrator retrieves the configuration information at a sequence of pre-determined time points and stores the attribute-value pairs locally. The examination of the configuration information is accordingly based on the locally stored attribute-value pairs.
According to yet another aspect, the attribute-value pairs are retrieved and stored according to conventions of respective cloud infrastructures. However, conditions are specified according to a common convention such that the user interface is convenient for the users.
According to one more aspect, an additional group of resources may be specified in association with an individual task of the task-group. Accordingly, the individual task is executed only on the resources of the additional group, and not on the group of resources associated with the task-group.
Several aspects of the present disclosure are described below with reference to examples for illustration. However, one skilled in the relevant art will recognize that the disclosure can be practiced without one or more of the specific details or with other methods, components, materials and so forth. In other instances, well-known structures, materials, or operations are not shown in detail to avoid obscuring the features of the disclosure. Furthermore, the features/aspects described can be practiced in various combinations, though only some of the combinations are described herein for conciseness.
Merely for illustration, only representative number/type of systems is shown in
Each of computing infrastructures 130, 160 and 180 is a collection of physical processing nodes (140, 170 and 190), connectivity infrastructure, data storages, administration systems, etc., which are engineered to together provide a virtual computing infrastructure for various customers, with the scale of such virtual computing being specified often on demand. Computing infrastructure 130/160/180 may correspond to a public cloud infrastructure such as Amazon Web Services (AWS) Cloud available from Amazon.com, Inc., Google Cloud Platform (GCP) available from Google LLC, Azure cloud available from Microsoft, Inc., Xi cloud available from Nutanix etc. Computing infrastructure 130/160/180 may also correspond to one of the On-Premises (On-Prem) enterprise systems owned by corresponding customers.
In one embodiment, computing infrastructure 130 is an On-Prem (on-premises) enterprise system owned by the corresponding customer, while computing infrastructures 180 and 160 are two different public cloud infrastructures (such as AWS/GCP noted above) provided by corresponding cloud infrastructure providers. Accordingly, in the following description, the terms on-prem system 130 and cloud infrastructures 160/180 are used interchangeably with computing infrastructures 130/160/180. However, aspects of the present disclosure can be implemented in other environments as well, such as when 130/160/180 are all public cloud infrastructures, as will be apparent to one skilled in the relevant arts by reading the disclosure herein.
All the systems of each computing infrastructure 130/160/180 are assumed to be connected via an intranet. Internet 115 extends the connectivity of these (and other systems of the computing infrastructures) with external systems such as user systems 110 and orchestrator 150. Each of the intranets and Internet 115 may be implemented using protocols such as Transmission Control Protocol (TCP) and/or Internet Protocol (IP), well known in the relevant arts.
In general, in TCP/IP environments, a TCP/IP packet is used as a basic unit of transport, with the source address being set to the TCP/IP address assigned to the source system from which the packet originates and the destination address set to the TCP/IP address of the target system to which the packet is to be eventually delivered. An IP packet is said to be directed to a target system when the destination IP address of the packet is set to the IP address of the target system, such that the packet is eventually delivered to the target system by Internet 115 and intranets. When the packet contains content such as port numbers, which specifies a target application, the packet may be said to be directed to such application as well.
Each of user systems 110 represents a system such as a personal computer, workstation, mobile device, computing tablet etc., used by users to generate (user) requests directed to enterprise/user applications executing in computing infrastructures 130/160/180. The user requests may be generated using appropriate user interfaces (e.g., web pages provided by a user application executing in a node, a native user interface provided by a portion of a user application downloaded from a node, etc.).
In general, a user system requests a user application for performing desired tasks and receives the corresponding responses (e.g., web pages) containing the results of performance of the requested tasks. The web pages/responses may then be presented to the user by local applications such as the browser. Each user request is sent in the form of an IP packet directed to the desired system or user application, with the IP packet including data identifying the desired tasks in the payload portion.
Some of nodes 140/170/190 may be implemented as corresponding data stores. Each data store represents a non-volatile (persistent) storage facilitating storage and retrieval of data by applications executing in the other systems/nodes of computing infrastructures 130/160/180. Each data store may be implemented as a corresponding database server using relational database technologies and accordingly provide storage and retrieval of data using structured queries such as SQL (Structured Query Language). Alternatively, each data store may be implemented as a corresponding file server providing storage and retrieval of data in the form of files organized as one or more directories, as is well known in the relevant arts.
Some of the nodes 140/170/190 may be implemented as corresponding server systems. Each server system represents a server, such as a web/application server, executing enterprise applications (examples of user applications) capable of performing tasks requested by users using user systems 110. A server system receives a user request from a user system and performs the tasks requested in the user request. A server system may use data stored internally (for example, in a non-volatile storage/hard disk within the server system), external data (e.g., maintained in a data store/node) and/or data received from external sources (e.g., from the user) in performing the requested tasks. The server system then sends the result of performance of the tasks to the requesting user system (one of 110) as a corresponding response to the user request. The results may be accompanied by specific user interfaces (e.g., web pages) for displaying the results to the requesting user.
In one embodiment, each customer/tenant is provided with a corresponding virtual computing infrastructure (referred to as a “tenant cloud”) provisioned on the nodes of computing infrastructures 130, 160 and 180. Thus, tenant cloud administrators may manage the provisioned resources of their clouds using one of user systems 110. Also, cloud infrastructure administrators may manage infrastructure resources using one of user systems 110. A tenant cloud is often provisioned to span multiple cloud infrastructures, as described below with examples.
In one embodiment, virtual machines (VMs) form the basis for deployment of various user/enterprise applications in the nodes of computing infrastructures 130/160/180. As is well known, a virtual machine may be viewed as a container in which other execution entities are executed. A node/server system can typically host multiple virtual machines, and the virtual machines provide a view of a complete machine (computer system) to the user applications executing in the virtual machine. Although the description provided herein is with respect to VM resources, aspects of the present disclosure may be implemented with other types of resources (such as containers, disks, network devices, databases, etc.) in tenant clouds, as will be apparent to a skilled practitioner.
Tenant cloud 230 is shown containing VMs 230-1 through 230-M (M representing any natural number) that may be provisioned on nodes 140 of on-prem system 130 and nodes 190 of cloud infrastructure 180. As is well known, a cloud containing a mixture of VMs provisioned in an on-prem system and VMs provisioned in a cloud infrastructure is referred to as a “hybrid” cloud. Hybrid clouds are distinguished from other clouds that operate based on VMs provisioned on one or more cloud infrastructures. Tenant cloud 230 is accordingly a hybrid cloud provisioned across the nodes of computing infrastructures 130 and 180. Specifically, groups 230A and 230B respectively represent the sets of VMs provisioned in on-prem system 130 and cloud infrastructure 180.
Similarly, tenant cloud 240 is another tenant cloud containing VMs 240-1 through 240-N (N representing any natural number) that may be provisioned on nodes 190 of cloud infrastructure 180 and nodes 170 of cloud infrastructure 160. Specifically, groups 240A and 240B respectively represent the sets of VMs provisioned in cloud infrastructure 180 and cloud infrastructure 160. For illustration, it is assumed that each tenant cloud (230 and 240) is owned by a corresponding customer/tenant.
A tenant of a cloud (e.g., 230) may wish to perform various tasks on corresponding resources. Examples of such tasks include, but are not limited to, powering down/up a VM, upgrading the software on VMs, taking backup of data, restoring data, disaster recovery, VM network configuration, storage volume attachment/detachment, etc.
It may be appreciated that each cloud infrastructure may have a corresponding convention of characterizing the configuration information (including identity) of resources provisioned in that cloud infrastructure. Also, the volume of such disparate/diverse ‘metadata’ (data that characterizes the configuration of the resources) grows with the number of resources (which may be of the order of thousands of VMs in a tenant cloud, for example).
Thus, one of the challenges in performing the tasks (noted above) in tenant clouds spanning multiple cloud infrastructures is to be able to easily and accurately control execution of (i.e., orchestrate) the tasks on any desired resources. Selecting appropriate resources for execution of tasks requires knowledge of metadata supported by the corresponding cloud infrastructure.
Orchestrator 150, provided according to several aspects of the present disclosure, facilitates orchestration of tasks in tenant clouds spanning multiple cloud infrastructures. In an embodiment, orchestrator 150 is provided as a centralized component (as depicted in
The manner in which orchestration of tasks may be provided in tenant clouds is described below with examples.
In addition, some of the steps may be performed in a different sequence than that depicted below, as suited to the specific environment, as will be apparent to one skilled in the relevant arts. Many such implementations are contemplated to be covered by several aspects of the present invention. The flow chart begins in step 301, in which control immediately passes to step 310.
In step 310, orchestrator 150 provisions a tenant cloud spanning multiple cloud infrastructures. Such provisioning may entail the tenant specifying (via one of user systems 110) the desired number of resources (e.g., VMs) to be part of each cloud infrastructure. In addition, the tenant may also specify the configuration parameters of each resource, such as compute capacity/storage/OS type, etc. In one embodiment, orchestrator 150 provisions (creates) a tenant cloud for the tenant by allocating the desired number of resources hosted on nodes 140/170/190 in cloud infrastructures 130/160/180. The tenant may thereafter deploy desired user applications for execution in his/her tenant cloud.
In step 330, orchestrator 150 retrieves attribute-value pairs (metadata) characterizing resources of the tenant cloud. According to an aspect, orchestrator 150 retrieves the attribute-value pairs of each resource in each cloud infrastructure over which the tenant cloud spans before the user specifies a condition in step 350. The attribute-value pairs may be retrieved periodically to ensure the data is reasonably current for any given time instance.
Thus, for tenant cloud 230 (of
In step 340, orchestrator 150 stores the retrieved attribute-value pairs. In one embodiment, the attribute-value pairs are stored in a local data store. It may be appreciated that the stored attribute-value pairs may be used subsequently to enable users to easily and accurately specify groups of resources, as described in detail below.
In step 350, a user specifies a task-group and a condition using one of user systems 110. A task-group is defined to contain a set of tasks that needs to be performed on a group of resources. A condition is a constraint specified by the user that forms the basis of selection of the group of resources. The user may specify the condition independently of specifying the task-group. Thus, the user may specify the condition at a first time instance, and specify the task-group at a second time instance (subsequent to the first time instance).
In one embodiment, a condition contains a condition-key specifying an identifier of an attribute of a resource, and a condition-value indicating the desired value of the attribute. For example, a user may specify a condition to indicate VMs of a particular operating system (OS) type (e.g., Linux). Here, OS type is the condition-key, and the specified OS type (Linux) is the condition-value. A condition is deemed to be satisfied if the attribute-identifier of an attribute-value pair matches the condition-key and the value of the attribute-value pair matches the condition-value. It may be appreciated that a condition may contain one or more sub-conditions. In such cases, a condition is deemed to be satisfied only when the requisite sub-conditions present in the condition are met. For example, a user may specify a condition to indicate VMs of a particular OS type (sub-condition 1) and VMs whose name starts with ‘abc’ (sub-condition 2).
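The condition-matching semantics described above can be sketched as follows. This is an illustrative sketch only, not the implementation of the disclosure; the function name, the operator strings (drawn from the operators discussed later, such as 'EQUALS' and 'STARTS WITH'), and the sample VMs are hypothetical.

```python
# A condition is modeled as a list of sub-conditions, each a
# (condition-key, operator, condition-value) triple. A resource,
# characterized by attribute-value pairs, satisfies the condition
# only when every sub-condition is met.

def satisfies(resource, condition):
    """Return True only if every sub-condition matches the resource."""
    for key, op, value in condition:
        actual = resource.get(key)
        if op == "EQUALS" and actual != value:
            return False
        if op == "STARTS WITH" and not str(actual).startswith(value):
            return False
    return True

# Example: VMs of a particular OS type (sub-condition 1) whose
# name starts with 'abc' (sub-condition 2).
condition = [("OSType", "EQUALS", "Linux"),
             ("name", "STARTS WITH", "abc")]
vm1 = {"name": "abc-vm-01", "OSType": "Linux"}
vm2 = {"name": "xyz-vm-02", "OSType": "Linux"}
print(satisfies(vm1, condition))  # True
print(satisfies(vm2, condition))  # False
```

As in the example of the text, both sub-conditions must hold for the condition to be deemed satisfied.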
A task-group may contain one or more tasks that the user wishes to execute on each of the resources in the group of resources. The group of resources on which a task-group is desired to be invoked may be referred to as an endpoint. Accordingly, each task-group is associated with a corresponding endpoint. Each task in the task-group may be specific to the type of resource in the endpoint. In other words, the user may specify a task-group to contain tasks that are supported to be performed on each resource of the endpoint. The user may specify the task-group to be invoked periodically (at pre-determined time(s) of day/week, etc.). Alternatively, or in addition, the task-group may be invoked by the user on-demand at desired time instances.
In step 360, orchestrator 150 selects a group of resources satisfying the condition. Orchestrator 150 examines the stored attribute-value pairs of resources of the tenant cloud, and selects only those resources that satisfy the condition specified by the user. Orchestrator 150 may perform the selection either upon the receipt of the condition or at the time of each invocation of the task-group.
In step 370, orchestrator 150 executes each task of the task-group on each resource of the group of resources. In one embodiment, orchestrator 150 may execute the tasks concurrently on sub-groups of resources contained in the endpoint. It may be appreciated that certain tasks of the task-group may be defined to update a value of an attribute-value pair. In such cases, execution of the task also entails updating the corresponding attribute-value pair in the data store. Subsequent tasks in the task-group operate on the updated value. Also, subsequent selections of groups of resources operate on the updated value. The flowchart ends in step 399.
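Steps 360 and 370 can be sketched together as follows. This is a hedged illustration under assumed names (the selection predicate, task function, and attribute values are hypothetical): resources satisfying the condition are selected, each task of the task-group is executed on each selected resource, and a task that updates an attribute-value pair leaves the updated value visible to subsequent tasks and selections.

```python
# Select the group of resources satisfying the condition, then
# invoke every task of the task-group on every selected resource.

def select_resources(resources, predicate):
    return [r for r in resources if predicate(r)]

def invoke_task_group(tasks, resources):
    for resource in resources:
        for task in tasks:
            task(resource)  # a task may update attribute-value pairs

resources = [
    {"name": "vm-1", "OSType": "Linux", "power_state": "OFF"},
    {"name": "vm-2", "OSType": "Windows", "power_state": "OFF"},
]

def power_on(vm):
    # Execution also entails updating the stored attribute-value pair,
    # so subsequent tasks and selections operate on the updated value.
    vm["power_state"] = "ON"

linux_vms = select_resources(resources, lambda r: r["OSType"] == "Linux")
invoke_task_group([power_on], linux_vms)
print([r["power_state"] for r in resources])  # ['ON', 'OFF']
```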
Thus, the flow-chart of
According to another aspect described below, for the user to conveniently specify the condition in step 350, the user may first specify a tentative condition, and orchestrator 150 immediately displays matching resources based on the locally stored attribute-value pairs. The user may accordingly refine the tentative condition (while examining the corresponding matching resources upon each refinement) until a final condition is specified as the (final) condition of step 350.
The manner in which orchestrator 150 operates in accordance with the steps of
Data store 420 represents a non-volatile (persistent) storage facilitating storage and retrieval of data by other components of orchestrator 150, and can be implemented external to orchestrator 150 also. Data store 420 may be implemented as a database server using relational database technologies and accordingly provide storage and retrieval of data using structured queries such as SQL (Structured Query Language). Alternatively or in addition, data store 420 may be implemented as a file server providing storage and retrieval of data in the form of files organized as one or more directories, as is well-known in the relevant arts. Data store 420 stores data associated with tenant clouds such as the corresponding cloud infrastructures over which each tenant cloud spans, tenant account information (e.g., account id, credentials, etc.) associated with each cloud infrastructure, metadata specifying configuration of resources in tenant clouds, endpoints and task-groups data. In one embodiment, metadata characterizing resources is stored as corresponding attribute-value pairs in data store 420.
Query manager 440 provides an interface to query metadata from data store 420. As noted above, each cloud infrastructure may have a respective manner of specifying metadata. For example, an attribute specifying a certain characteristic of a resource may be identified as ‘attribute-1’ in cloud infrastructure 130, while the same characteristic may be identified as ‘attribute-2’ in cloud infrastructure 160. However, when a user specifies a condition for selection of resources, the user may specify the characteristic using ‘attribute-common’. Query manager 440 operates to convert to/from cloud infrastructure-specific attributes, as will be described below in detail. In an embodiment, query manager 440 employs a FIQL (Feed Item Query Language) parser in order to efficiently retrieve metadata satisfying specified conditions.
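The attribute conversion performed by query manager 440 can be illustrated with a minimal sketch. The mapping tables and infrastructure identifiers below are hypothetical; only the principle (a common user-facing attribute translated to each infrastructure's convention) follows the description above.

```python
# Hypothetical per-infrastructure mappings from common attribute
# identifiers to infrastructure-specific ones.
ATTRIBUTE_MAPS = {
    "infra-130": {"type": "OSType", "ram": "memory_size_bytes"},
    "infra-160": {"type": "os_family", "ram": "ram_bytes"},
}

def to_infra_specific(common_condition, infra):
    """Translate a condition in the common convention to the
    convention of the given cloud infrastructure."""
    mapping = ATTRIBUTE_MAPS[infra]
    return {mapping.get(key, key): value
            for key, value in common_condition.items()}

# One common condition yields infrastructure-specific queries.
common = {"type": "Linux"}
print(to_infra_specific(common, "infra-130"))  # {'OSType': 'Linux'}
print(to_infra_specific(common, "infra-160"))  # {'os_family': 'Linux'}
```

The user thus sees a single convenient convention while the stored metadata retains each infrastructure's own attribute identifiers.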
Synchronization service 450 operates to retrieve metadata of resources in various cloud infrastructures over which each tenant cloud spans. Synchronization service 450 may retrieve the metadata at a sequence of pre-determined time points, as configured by the user. In one embodiment, synchronization service 450 retrieves metadata every 20 minutes. Synchronization service 450 may send requests to each resource, and receive the metadata as responses to the request over corresponding paths 135/165/185. Alternatively, each resource in each cloud infrastructure may be configured to push/send the metadata to synchronization service 450 at pre-determined time points. Synchronization service 450 stores the retrieved information in data store 420. As the attribute-value pairs are stored as received from respective cloud infrastructures, the attribute-identifiers in data store 420 are according to the convention in the respective cloud infrastructure.
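The pull model of synchronization service 450 can be sketched as below. The fetch stub and store layout are hypothetical stand-ins for the per-infrastructure API calls over paths 135/165/185 and for data store 420; the point illustrated is that metadata is stored as received, preserving each infrastructure's attribute conventions.

```python
data_store = {}  # stand-in for data store 420

def fetch_metadata(infra):
    # Hypothetical stub; a real implementation would call the
    # corresponding cloud infrastructure's API.
    if infra == "infra-130":
        return {"vm-1": {"power_state": "ON"}}
    return {}

def synchronize(infrastructures):
    for infra in infrastructures:
        # Stored as received: attribute-identifiers keep the
        # convention of the respective cloud infrastructure.
        data_store[infra] = fetch_metadata(infra)

# In one embodiment this runs every 20 minutes, e.g. rescheduled
# via a timer such as threading.Timer(20 * 60, run_sync).
synchronize(["infra-130", "infra-160"])
print(data_store["infra-130"]["vm-1"]["power_state"])  # ON
```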
Endpoint manager 460 facilitates storage and retrieval of data associated with endpoints in tenant clouds. Endpoint manager 460 receives user requests (via user systems 110) associated with endpoints (such as creating/modifying/deleting endpoints), sends the requests to query manager 440, receives respective responses from query manager 440 and forwards the responses to users.
The group of resources in an endpoint may be determined statically or selected dynamically, as described below.
Static determination—Endpoint manager 460 may determine the group of resources to be included in an endpoint based on a condition received from the user at the time of defining the endpoint. Endpoint manager 460 employs query manager 440 to determine the group of resources satisfying the condition, and stores the list of resources along with the corresponding endpoint. Thereafter, each invocation of task-group(s) associated with the endpoint operates on the fixed (static) group of resources.
Dynamic selection—Endpoint manager 460 stores the condition received from the user at the time of defining the endpoint. The group of resources is selected at the time of each invocation of task-group associated with the endpoint and thus the group of resources varies at different instances of invocation.
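The contrast between the two modes can be sketched as follows; the inventory and evaluation function are hypothetical. Statically, the list is resolved once when the endpoint is defined; dynamically, the stored condition is re-evaluated at each invocation, so the group may differ between invocations.

```python
inventory = [{"name": "vm-1", "OSType": "Linux"}]

def evaluate(condition):
    """Resolve a condition against the current inventory."""
    return [r["name"] for r in inventory
            if r["OSType"] == condition["OSType"]]

# Static: resolve now and store the fixed list with the endpoint.
static_endpoint = {"resources": evaluate({"OSType": "Linux"})}

# Dynamic: store only the condition; resolve at invocation time.
dynamic_endpoint = {"condition": {"OSType": "Linux"}}

inventory.append({"name": "vm-2", "OSType": "Linux"})  # new VM appears

print(static_endpoint["resources"])             # ['vm-1']
print(evaluate(dynamic_endpoint["condition"]))  # ['vm-1', 'vm-2']
```

A VM provisioned after the endpoint was defined is picked up only by the dynamic endpoint.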
Task-group manager 470 facilitates storage and retrieval of data associated with task-groups in tenant clouds. Task-group manager 470 receives user requests (via user systems 110) associated with task-groups (such as creating/modifying/deleting), performs the desired operations on task-groups stored in data store 420, and forwards the responses to users. During each invocation of a task-group, task-group manager 470 retrieves the set of tasks in the task-group and the corresponding endpoint information associated with the task-group from data store 420.
In cases where the group of resources is static (as noted above), task-group manager 470 retrieves the group of resources associated with the endpoint directly from data store 420, and executes each task of the task-group on each resource of the endpoint. In the case of dynamic selection, task-group manager 470 retrieves the condition stored with the endpoint, and selects the group of resources for each invocation. In such cases, task-group manager 470 uses query manager 440 to map the attributes to select the group of resources satisfying the condition. Task-group manager 470 then executes each task of the task-group on each selected resource.
The description is continued to illustrate some of the above noted features with respect to sample metadata.
Metadata 500 depicts a portion of attribute-value pairs characterizing a VM (e.g., 230-2) provisioned in cloud infrastructure 130 as part of tenant cloud 230. As noted below, the portion depicts attribute-identifier and the corresponding value of each attribute-value pair. Specifically, metadata 500 is shown containing, among other configuration data:
“_id” (502) that uniquely identifies the VM in tenant cloud 230,
“name” (504) that specifies the name of VM 230-2,
“categories” (506) that depicts multiple values (a list) associated with the same key,
“power_state” (505) indicating whether VM 230-2 is powered ON or OFF at the time of retrieval of metadata,
“OSType” (507) that specifies the OS type of VM 230-2,
“nic_list” (508) that specifies the list of network interface cards (NICs) available on VM 230-2, and
“memory_size_bytes” (509) that specifies the memory (RAM) allocated to VM 230-2.
Thus, VM 230-2 is shown to be configured with name “vm-220124-084537” (504) and OS type “Linux” (507).
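The attribute-value pairs enumerated above can be illustrated as a single record. The name and OS type values are those stated above (504 and 507); the remaining values are hypothetical placeholders consistent with the attribute descriptions.

```python
# Illustrative metadata record for a VM such as 230-2; values other
# than 'name' and 'OSType' are placeholders, not from the disclosure.
vm_metadata = {
    "_id": "uuid-0000",                     # 502: unique id in the tenant cloud
    "name": "vm-220124-084537",             # 504: name of the VM
    "categories": ["dept:eng", "env:dev"],  # 506: list of values for one key
    "power_state": "ON",                    # 505: ON/OFF at retrieval time
    "OSType": "Linux",                      # 507: OS type of the VM
    "nic_list": [{"nic_id": "nic-0"}],      # 508: NICs available on the VM
    "memory_size_bytes": 4294967296,        # 509: RAM allocated (4 GiB here)
}

print(vm_metadata["name"], vm_metadata["OSType"])
```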
It may be appreciated that data characterizing custom configuration information of a VM is stored using attribute-identifier “categories” (506 in
It may be appreciated that data characterizing type of OS of a VM is stored using attribute-identifier “OSType” (507 in
The manner in which orchestrator 150 facilitates users to specify endpoints according to aspects of the present disclosure is described next.
Each of
The user may specify a name (601) and description for the endpoint, and select one of the pre-configured projects (such as 602) in which the endpoint will be created. The user may select ‘Type’ (such as 603) of the endpoint, indicating the communication protocol to be used for accessing resources in the endpoint. In an embodiment, the endpoint type can be one of ‘Linux’, ‘Windows’ or ‘HTTP’. For type ‘HTTP’, a user may need to additionally specify a base URL (not shown) to connect to the resource.
The user may select ‘Target Type’ (such as 604) for the endpoint. In an embodiment, the endpoint target type can be one of ‘IP Addresses’ or ‘VMs’. The endpoint may thus be viewed as a collection of IP addresses or a collection of VMs. For target type ‘IP Addresses’, the user may additionally specify the list of IP addresses and connection parameters (such as connection protocol (e.g., HTTP/HTTPs) and port number). The user may specify IP addresses across multiple cloud infrastructures (over which the tenant cloud spans) for the endpoint. For target type ‘VMs’, the user may specify the account (such as 605), indicating the cloud infrastructure in which the VMs are provisioned. ‘Account’ dropdown (605) shown in area 600 lists all the accounts corresponding to cloud infrastructures over which the tenant cloud spans. Thus, for users of tenant cloud 230, ‘Account’ dropdown (605) lists accounts associated with cloud infrastructures 130 and 180, while for users of tenant cloud 240, ‘Account’ dropdown (605) lists accounts associated with cloud infrastructures 180 and 160. The user is assumed to have selected account associated with cloud infrastructure 130 in this illustration.
Toggle button 607 specifies whether the current end-point specification filters are to be evaluated statically (607 turned OFF) or dynamically (607 turned ON). In one embodiment, toggle button 607 is OFF by default (as shown in
Dropdown 606-1 lists the attributes (such as ‘Name’, ‘Power State’, ‘RAM’, etc.) that the user can specify in a sub-condition. Dropdown 606-2 lists the operators (such as ‘NOT IN’, ‘EQUALS’, ‘STARTS WITH’, ‘LESS THAN’, etc.) that the user can specify in the sub-condition. The user can specify a desired value for the attribute selected (in dropdown 606-1) in text area 606-3. Clicking ‘Add’ button 606-4 enables the user to specify additional sub-conditions. Thus, a condition specified by the user may contain one or more sub-conditions.
When toggle button 607 is OFF and the user selects desired values (601, 602, 603, 604, 605, etc.), and specifies respective filter attributes (using elements 606-1, 606-2, 606-3, 606-4) endpoint manager 460 retrieves the list of resources (from data store 420) satisfying the tentative condition specified by the user. Specifically, endpoint manager 460 sends the tentative condition specified by the user to query manager 440. Query manager 440 maps the attributes received from endpoint manager 460 to cloud infrastructure-specific attributes stored in data store 420.
Thus, in the illustration, query manager 440 receives the following tentative condition from endpoint manager 460: “Type=Linux” and “Target Type=VMs” and “Account=NTNX_INFRA”. Query manager 440 examines ‘account’ attribute and determines, in a known way, that the corresponding cloud infrastructure is cloud infrastructure 130. Query manager 440 maps attribute ‘type’ to ‘OSType’ (specific to cloud infrastructure 130), and retrieves VMs (target type) provisioned in cloud infrastructure 130 that are part of tenant cloud 230, and having OS type ‘Linux’. Endpoint manager 460 receives the results from query manager 440 and displays the list of VMs in display area 600 of
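The query flow of this illustration can be sketched as follows. The stored records, the account-to-infrastructure resolution, and the function names are hypothetical; the sketch only mirrors the described steps of resolving the account, mapping ‘Type’ to the infrastructure-specific ‘OSType’, and filtering the stored VMs.

```python
# Hypothetical stored metadata for VMs of tenant cloud 230.
stored_vms = [
    {"name": "dev-vm-1", "OSType": "Linux", "infra": "infra-130"},
    {"name": "dev-vm-2", "OSType": "Windows", "infra": "infra-130"},
]

def resolve_infra(account):
    # Determine, in a known way, the infrastructure for the account.
    return {"NTNX_INFRA": "infra-130"}[account]

def run_query(condition):
    infra = resolve_infra(condition["Account"])
    # 'Type' is mapped to the infrastructure-specific 'OSType'.
    os_type = condition["Type"]
    return [v["name"] for v in stored_vms
            if v["infra"] == infra and v["OSType"] == os_type]

result = run_query({"Type": "Linux", "Target Type": "VMs",
                    "Account": "NTNX_INFRA"})
print(result)  # ['dev-vm-1']
```

The resulting list is what endpoint manager 460 would display to the user for refining the tentative condition.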
It may be appreciated that the user may examine the displayed results, and thereafter further refine the tentative condition, e.g., select type as ‘Windows’ in dropdown 603 of
As shown in
When toggle button 607 is ON and the user specifies the final condition, endpoint manager 460 stores the condition specified by the user along with the endpoint information in data store 420. However, in case of button 607 being OFF, endpoint manager 460 stores the list of resources along with the endpoint information in data store 420.
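The two persistence behaviors of toggle button 607 may be sketched as below. The `save_endpoint` helper and record layout are hypothetical, illustrating only the described distinction: a dynamic endpoint stores the condition, while a static endpoint stores the materialized resource list.

```python
def save_endpoint(store, name, condition, resources, dynamic):
    """Persist either the condition (dynamic) or the materialized
    resource list (static) alongside the endpoint record."""
    record = {"name": name}
    if dynamic:          # toggle 607 ON: condition is re-evaluated later
        record["condition"] = dict(condition)
    else:                # toggle 607 OFF: freeze the current resource list
        record["resources"] = list(resources)
    store[name] = record
    return record

store = {}
save_endpoint(store, "ep_dyn", {"Type": "Linux"}, ["vm-1"], dynamic=True)
save_endpoint(store, "ep_stat", {"Type": "Linux"}, ["vm-1"], dynamic=False)
```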
The endpoints thus specified by the user are thereafter available to be associated with task-groups, as described next.
Referring to
The user may specify a name (702) and description for the task-group, and select one of the pre-configured projects (such as 703) in which the task-group will be created. The user may select a default endpoint for the task-group from dropdown 704. In
When the user clicks ‘Proceed’ button (705), task-group manager 470 creates a task-group named ‘sample_rb1’ (702) in data store 420. Task-group manager 470 also associates endpoint (here, ‘dev-vms_collection’) with the task-group. The user can then specify tasks and various other configuration parameters for task-group ‘sample_rb1’ as described below with respect to
Display area 720 (of
Task name (722) enables the user to specify the name of the task. Dropdown task type (723) allows the user to specify the type of the task. In one embodiment, the task type is one of ‘Execute’, ‘Set Variable’, ‘Delay’, ‘HTTP’, ‘Decision’, ‘VM Power Off’, ‘VM Power On’ or ‘VM Restart’. Thus, the user may select VM-related tasks (such as ‘VM Restart’) for endpoints of target type ‘VM’.
Dropdown script type (724) lets the user specify the type of script associated with the task. In one embodiment, script type is one of ‘Shell’, ‘PowerShell’ or ‘eScript’. ‘eScript’ may be implemented as Python script with a selected sub-set of supported Python modules. As an example, the user may select script type ‘eScript’ for endpoints having target type ‘VMs’.
Dropdown endpoint (725) allows the user to specify an endpoint on which the task may execute. If no endpoint is specified at the task-level, the task is executed on each of the resources of the endpoint specified at the task-group level.
However, if respective endpoints are specified at both the task level (e.g., for a first task) and the task-group level, then the task-level endpoint takes priority. That is, each task for which a corresponding endpoint is not specified at the task level is executed on each resource of the endpoint associated with the task-group, while the first task is executed on each resource of the endpoint specified at the task level.
For example, assume that a task-group containing 3 tasks (t1, t2 and t3, in that sequential order of execution) has been specified with an endpoint (ep1) associated at the task-group level, and additionally, endpoint (ep2) has been specified explicitly for task t2. Tasks t1 and t3 do not have corresponding endpoints specified at the task level. Then, upon invoking the task-group, task t1 is executed on each resource of ep1, task t2 is executed only on each resource of ep2 (instead of those of ep1) and task t3 is executed on each resource of ep1, in that order. Thus, a convention is provided to conveniently specify a task-group with a desired flow, and yet control endpoints with respect to specific tasks in the flow, as may be desirable in corresponding situations.
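The endpoint-resolution convention of the preceding example can be sketched in a few lines. The `effective_endpoint` helper and the dictionary layout are hypothetical; the point shown is only the override rule: a task-level endpoint, when present, takes priority over the task-group endpoint.

```python
def effective_endpoint(task, group):
    """A task-level endpoint, when present, overrides the group endpoint."""
    return task.get("endpoint") or group["endpoint"]

# The t1/t2/t3 example: ep1 at the group level, ep2 only on t2.
group = {"endpoint": "ep1",
         "tasks": [{"name": "t1"},
                   {"name": "t2", "endpoint": "ep2"},
                   {"name": "t3"}]}
plan = [(t["name"], effective_endpoint(t, group)) for t in group["tasks"]]
```

Here `plan` pairs each task with the endpoint it would actually run on, in execution order.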
In one embodiment, the list of endpoints populated in dropdown 725 is based on the task type and the script type specified by the user. For example, if the user has specified task type ‘VM Power On’, then only resources of target type ‘VM’ (and not of type ‘IP Address’) are populated in dropdown 725. Similarly, if the user has specified script type ‘Shell’, then only Linux VMs are populated in dropdown 725.
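A filtering rule of the kind described above may be sketched as follows. The function name, the endpoint fields and the exact matching rules are assumptions for illustration; the disclosure only states that candidates are narrowed by task type and script type.

```python
def eligible_endpoints(endpoints, task_type, script_type):
    """Filter candidate endpoints for dropdown 725 by task type and
    script type (hypothetical rules for illustration)."""
    result = []
    for ep in endpoints:
        if task_type.startswith("VM") and ep.get("target_type") != "VM":
            continue  # VM tasks need VM endpoints, not IP Address endpoints
        if script_type == "Shell" and ep.get("os") != "Linux":
            continue  # Shell scripts run only on Linux VMs
        result.append(ep["name"])
    return result

endpoints = [
    {"name": "linux_vms", "target_type": "VM", "os": "Linux"},
    {"name": "win_vms", "target_type": "VM", "os": "Windows"},
    {"name": "ips", "target_type": "IP Address"},
]
```

For task type ‘VM Power On’ with script type ‘Shell’, only the Linux VM endpoint would be offered in this sketch.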
It may be appreciated that enabling the user to specify an endpoint at a task level facilitates the user to conveniently create task-groups containing tasks that need to be executed on groups of resources spanning multiple cloud infrastructures. For example, a user of tenant cloud 230 may wish to restart all Linux VMs in his/her tenant cloud. The user may thus conveniently specify a task-group with two tasks: a first task specifying ‘Restart VMs’ with an endpoint (specifying Linux VMs) in cloud infrastructure 130 and a second task specifying ‘Restart VMs’ with an endpoint (specifying Linux VMs) in cloud infrastructure 160. Upon execution of the example task-group, the Linux VMs contained in both cloud infrastructures 130 and 160 are restarted.
Dropdown credential (726) allows the user to select a credential for executing the task. Text area script (727) allows the user to enter (type) or upload a script. The script specifies the sequence of operations to be performed as part of executing the task. Task graph 728 provides a visual representation of the sequence of execution of tasks as and when a task is added to the task-group. For example, if a task of type ‘Decision’ is added to the task-group, task graph 728 may display two branches, each indicating a possible outcome (Yes/No) of the decision. Under each branch, the user may add the corresponding task based on the outcome. The user can add/delete tasks using field 729.
Referring to
Referring to
Referring to
Referring to
Task-group ‘sample_rb1’ is thus shown containing 4 tasks to be executed in a particular sequence as depicted in
A user may invoke a task-group on-demand via a user interface (not shown). Alternatively, or in addition, a user may specify that the task-group be invoked periodically (at pre-determined time(s) of day/week, etc.). The description is continued to illustrate the manner in which the tasks in task-group ‘sample_rb1’ are executed.
Upon invocation of task-group ‘sample_rb1’, task-group manager 470 retrieves the task-group information from data store 420. Task-group manager 470 determines that task-group ‘sample_rb1’ is associated with endpoint ‘dev_vms_collection’. Task-group manager 470 also determines that the group of resources in endpoint ‘dev_vms_collection’ is static, and consists of 4 VMs (namely ‘NTNX-frieza01-2-CVM’, ‘vm-220124-084537’, ‘vm-0-220124-090028’ and ‘vm-220127-044109’, as specified by the user using the user interface of
Thus, for each VM, task-group manager 470 first executes task ‘Print Network Configuration’ to print network configuration parameters (noted above with respect to
As part of execution of task ‘Restart VMs’ on a VM, the IP address of the VM may change. Task-group manager 470 notes the new IP address of the VM and updates metadata of the VM in data store 420. Any subsequent selection of group of resources is accordingly based on the new IP address.
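The execution loop with the IP-address update described above may be sketched as follows. The `run_task_group` driver, the executor callback and the metadata layout are hypothetical; the sketch shows only the described behavior of running each task across the resources and recording a changed IP address in the metadata for subsequent use.

```python
def run_task_group(tasks, resources, metadata, execute):
    """Run each task on every resource in sequence; if a task (e.g. a
    restart) changes a VM's IP address, record the new address so any
    subsequent selection of resources uses it."""
    for task in tasks:
        for vm in resources:
            result = execute(task, vm, metadata[vm])
            new_ip = result.get("new_ip")
            if new_ip:                      # e.g. after 'Restart VMs'
                metadata[vm]["ip"] = new_ip

def fake_execute(task, vm, meta):
    # Hypothetical executor: a restart hands back a changed IP address.
    if task == "Restart VMs":
        return {"new_ip": meta["ip"].rsplit(".", 1)[0] + ".200"}
    return {}

metadata = {"vm-1": {"ip": "10.0.0.5"}}
run_task_group(["Print Network Configuration", "Restart VMs"], ["vm-1"],
               metadata, fake_execute)
```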
Upon failure of any task in the task-group, task-group manager 470 may retry execution of the task and/or exit the current invocation. Task-group manager 470 may also log the pass/fail details of each task.
According to an aspect, task-group manager 470 may execute tasks in a task-group concurrently on sub-groups of resources in the endpoint associated with the task-group. Thus, in the illustration, task-group manager 470 may execute the four tasks on sub-groups of 2 VMs each to reduce latency of each invocation.
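Concurrent execution on sub-groups, as in the aspect above, may be sketched with a thread pool. The batching helper and batch size are illustrative assumptions; the disclosure states only that tasks may be executed concurrently on sub-groups of the endpoint's resources.

```python
from concurrent.futures import ThreadPoolExecutor

def run_on_subgroups(task, resources, batch_size, execute):
    """Execute a task concurrently on sub-groups of the endpoint's
    resources to reduce the latency of an invocation."""
    batches = [resources[i:i + batch_size]
               for i in range(0, len(resources), batch_size)]
    with ThreadPoolExecutor(max_workers=len(batches)) as pool:
        # One worker per sub-group; each worker runs the task on its
        # sub-group sequentially.
        futures = [pool.submit(lambda b=b: [execute(task, vm) for vm in b])
                   for b in batches]
        return [vm for f in futures for vm in f.result()]

# Four VMs in sub-groups of 2, as in the illustration.
done = run_on_subgroups("Restart VMs", ["vm-1", "vm-2", "vm-3", "vm-4"], 2,
                        lambda task, vm: vm)
```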
It may be appreciated that since endpoint ‘dev_vms_collection’ contains a static list of 4 VMs, each invocation of task-group ‘sample_rb1’ will execute the tasks on those 4 VMs only.
In an alternative scenario, task-group ‘sample_rb1’ may be associated with endpoint ‘dev_vms_collection2’ (endpoint specified by user in
Thus, the number of VMs may vary for each invocation of task-group ‘sample_rb1’. For example, assuming that the list of VMs in cloud infrastructure 130 is as depicted in
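The contrast between static and dynamic endpoints at invocation time may be sketched as follows. The helper name and record layout are hypothetical; the sketch only illustrates that a dynamic endpoint re-evaluates its stored condition against the current pre-fetched metadata, so the resource count may differ between invocations.

```python
def resources_for_invocation(endpoint, store):
    """Static endpoints reuse the frozen resource list; dynamic endpoints
    re-evaluate the stored condition against current pre-fetched metadata."""
    if "resources" in endpoint:
        return endpoint["resources"]
    cond = endpoint["condition"]
    return [r["name"] for r in store
            if all(r.get(k) == v for k, v in cond.items())]

endpoint = {"condition": {"OSType": "Linux"}}   # dynamic endpoint
store = [{"name": "vm-1", "OSType": "Linux"},
         {"name": "vm-2", "OSType": "Windows"}]
first = resources_for_invocation(endpoint, store)

store.append({"name": "vm-5", "OSType": "Linux"})  # VM provisioned later
second = resources_for_invocation(endpoint, store)
```

The second invocation picks up the newly provisioned Linux VM without any change to the endpoint definition.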
It may be appreciated that orchestrator 150 (specifically, task-group manager 470) eliminates the need to retrieve metadata of resources from corresponding cloud infrastructures for dynamic selection of resources at the time of invocation of task-groups. Rather, orchestrator 150 uses pre-fetched metadata stored in data store 420 to select groups of resources, thus reducing latency in task-group invocations.
Thus, orchestrator 150, provided according to several aspects of the present disclosure, facilitates orchestration of tasks in tenant clouds spanning multiple cloud infrastructures.
It should be appreciated that the features described above can be implemented in various embodiments as a desired combination of one or more of hardware, software, and firmware. The description is continued with respect to an embodiment in which various features are operative when the software instructions described above are executed.
Digital processing system 800 may contain one or more processors such as a central processing unit (CPU) 810, random access memory (RAM) 820, secondary memory 830, graphics controller 860, display unit 870, network interface 880, and input interface 890. All the components except display unit 870 may communicate with each other over communication path 850, which may contain several buses as is well known in the relevant arts. The components of
CPU 810 may execute instructions stored in RAM 820 to provide several features of the present disclosure. CPU 810 may contain multiple processing units, with each processing unit potentially being designed for a specific task. Alternatively, CPU 810 may contain only a single general-purpose processing unit.
RAM 820 may receive instructions from secondary memory 830 using communication path 850. RAM 820 is shown currently containing software instructions constituting shared environment 825 and/or other user programs 826 (such as other applications, DBMS, etc.). In addition to shared environment 825, RAM 820 may contain other software programs such as device drivers, virtual machines, etc., which provide a (common) run time environment for execution of other/user programs.
Graphics controller 860 generates display signals (e.g., in RGB format) to display unit 870 based on data/instructions received from CPU 810. Display unit 870 contains a display screen to display the images defined by the display signals (for example, portions of the user interface shown in
Secondary memory 830 may contain hard drive 835, flash memory 836, and removable storage drive 837. Secondary memory 830 may store the data (for example, data portions shown in
Some or all of the data and instructions may be provided on removable storage unit 840, and the data and instructions may be read and provided by removable storage drive 837 to CPU 810. Removable storage unit 840 may be implemented using medium and storage format compatible with removable storage drive 837 such that removable storage drive 837 can read the data and instructions. Thus, removable storage unit 840 includes a computer readable (storage) medium having stored therein computer software and/or data. However, the computer (or machine, in general) readable medium can be in other forms (e.g., non-removable, random access, etc.).
In this document, the term “computer program product” is used to generally refer to removable storage unit 840 or hard disk installed in hard drive 835. These computer program products are means for providing software to digital processing system 800. CPU 810 may retrieve the software instructions, and execute the instructions to provide various features of the present disclosure described above.
The term “storage media/medium” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical disks, magnetic disks, or solid-state drives, such as secondary memory 830. Volatile media includes dynamic memory, such as RAM 820. Common forms of storage media include, for example, a floppy disk, a flexible disk, a hard disk, a solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.
Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 850. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Reference throughout this specification to “one embodiment”, “an embodiment”, or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases “in one embodiment”, “in an embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Furthermore, the described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the above description, numerous specific details are provided such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the disclosure.
While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
It should be understood that the figures and/or screen shots illustrated in the attachments highlighting the functionality and advantages of the present disclosure are presented for example purposes only. The present disclosure is sufficiently flexible and configurable, such that it may be utilized in ways other than that shown in the accompanying figures.
Further, the purpose of the following Abstract is to enable the Patent Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract is not intended to be limiting as to the scope of the present disclosure in any way.
Number | Date | Country | Kind |
---|---|---|---|
202241000379 | Jan 2022 | IN | national |
202241000379 | Feb 2022 | IN | national