Serverless computing and task scheduling

Information

  • Patent Grant
  • 10884807
  • Patent Number
    10,884,807
  • Date Filed
    Wednesday, April 12, 2017
  • Date Issued
    Tuesday, January 5, 2021
Abstract
In one embodiment, a method for serverless computing comprises: receiving a task definition, wherein the task definition comprises a first task and a second task chained to the first task; adding the first task and the second task to a task queue; executing the first task from the task queue using hardware computing resources in a first serverless environment associated with a first serverless environment provider; and executing the second task from the task queue using hardware computing resources in a second serverless environment selected based on a condition on an output of the first task.
Description
TECHNICAL FIELD

This disclosure relates in general to the field of computing and, more particularly, to serverless computing and task scheduling.


BACKGROUND

Cloud computing aggregates physical and virtual compute, storage, and network resources in the “cloud” and offers users many ways to utilize the resources. One kind of product leveraging cloud computing is serverless computing. Serverless computing offers a high level of compute abstraction with a great deal of scalability. Developers no longer need to worry about the underlying physical or even virtual infrastructure in the cloud. Often, serverless computing frameworks are offered as a service, e.g., Amazon Web Services (AWS) Lambda, a compute service that runs code in response to events (making serverless computing an event-driven framework) and automatically manages the compute resources required by the code. Developers pay for the compute time consumed. Code can be uploaded to the serverless computing framework, and the serverless computing framework handles the rest.





BRIEF DESCRIPTION OF THE DRAWINGS

To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:



FIG. 1 illustrates an exemplary serverless computing system, according to some embodiments of the disclosure;



FIG. 2 illustrates a method for serverless computing, according to some embodiments of the disclosure;



FIG. 3 shows an exemplary task definition for a first task, according to some embodiments of the disclosure;



FIG. 4 shows an exemplary task definition for a second task chained to the first task, according to some embodiments of the disclosure;



FIG. 5 shows an exemplary rule defined for the second task based on an output of the first task, according to some embodiments of the disclosure;



FIG. 6 shows an exemplary method for scheduling a task chain, according to some embodiments of the disclosure;



FIG. 7 illustrates a task chain and dependencies of each task in the task chain, according to some embodiments of the disclosure;



FIG. 8 shows exemplary serverless computing environments and workers, according to some embodiments of the disclosure; and



FIG. 9 illustrates an exemplary data processing system, according to some embodiments of the disclosure.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
Overview

One aspect of the disclosure relates to, among other things, a method for serverless computing comprising: receiving a task definition, wherein the task definition comprises a first task and a second task chained to the first task; adding the first task and the second task to a task queue; executing the first task from the task queue using hardware computing resources in a first serverless environment associated with a first serverless environment provider; and executing the second task from the task queue using hardware computing resources in a second serverless environment selected based on a condition on an output of the first task.


In other aspects, apparatuses comprising means for carrying out one or more of the method steps are envisioned by the disclosure. As will be appreciated by one skilled in the art, aspects of the disclosure, in particular the functionality associated with serverless computing and task scheduling described herein, may be embodied as a system, a method or a computer program product. Accordingly, aspects of the disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Functions described in this disclosure may be implemented as an algorithm executed by a processor, e.g., a microprocessor, of a computer. Furthermore, aspects of the disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied, e.g., stored, thereon.


EXAMPLE EMBODIMENTS

Understanding Serverless Computing


Serverless computing works by having developers or users upload a piece of code to a serverless computing environment (or serverless computing platform or serverless environment), and the serverless computing environment runs the code without having to burden the developer or user with the setup of workers (i.e., networked hardware resources in the cloud, including compute, storage, and network resources) to execute the code. The serverless computing environment can be event driven, meaning that the code can execute or perform some kind of computations triggered by events. The triggering can be dictated by rules defined in the serverless computing environment. In some cases, the code is executed on demand, or according to a predetermined schedule. To use a serverless computing environment, a developer or user can upload a piece of code to be executed. The developer or user is abstracted from the setup and execution of the code in the networked hardware resources in the cloud.


There are many different flavors of serverless computing environments (in some cases, the serverless computing environments are virtualized computing environments having an application programming interface which abstracts the user from the implementation and configuration of a service or application in the cloud). Some serverless computing environments may have restrictions on the kind of code that they can run, i.e., some serverless computing environments can only execute code written in a number of supported programming languages. Some serverless computing environments may differ in the operating systems on which the code is executed. Some serverless computing environments may differ in their support for dependency management (libraries for executing code). If the serverless computing environments are part of a public cloud or are services offered by a company, the serverless computing environments may have different associated costs. Some serverless computing environments may be part of a private cloud, where the networked hardware resources are on-premise and managed by the developer. Scalability, availability, and resource limits may differ from one serverless computing environment to another. Some serverless computing environments may have different limits on the maximum number of functions, concurrent executions, maximum execution duration, etc. Some serverless computing environments may only support certain specific event subscribers or monitoring platforms. Some serverless computing environments may only support certain kinds of notifications, logging, etc. Serverless computing environments can differ in many respects.


Serverless computing aims to provide a higher level of compute abstraction which allows developers and users to not have to worry about the underlying physical or even virtual infrastructure. While it is easy for a developer to use the public service offerings of serverless computing environments, it is not so trivial for a developer to extend serverless computing to a private cloud or a hybrid cloud environment.


It would be advantageous to build an improved serverless computing system or infrastructure that can leverage offerings from the different serverless computing environments as well as other cloud computing environments. When serverless computing environments are so different from each other and when it is desirable to integrate both public and private clouds (hybrid cloud), designing the improved serverless computing system can be challenging.


Integrating Heterogeneous Serverless Computing Environments into One Unified Serverless Computing System or Infrastructure



FIG. 1 illustrates an exemplary serverless computing system 100, according to some embodiments of the disclosure. The system 100 includes an interface 102, task queue 104, task scheduler 106, and networked hardware resources 160 having workers 110_1, 110_2, . . . 110_N.


The interface 102 allows a developer or user (machine) to interact with the serverless computing system 100 via a predefined application programming interface (API). Via the interface 102, a user can provide a task definition to create an action (associated with some piece of code or script) for the serverless computing system 100 to execute. The interface 102 can include a command line and/or a graphical user interface to facilitate the user interactions, such as inputting and specifying the task definition. The interface 102 is an abstraction layer which would allow a developer or user to use different serverless computing environments deployed in the public cloud(s) and/or private cloud(s).


As an illustration, the following are exemplary actions available via the API:

url                        request type    remarks

/actions                   GET             list actions
/actions                   POST            create an action
/actions/:id               GET             get an action
/actions/:id               PUT             update an action
/actions/:id               DELETE          delete an action
/actions/:id/tag           POST            snapshot a current action
/actions/:id/execute       POST            execute an action
/executions                GET             list executions
/executions/:id            GET             get an execution
/executions/:id/retry      POST            retry an execution
/executions/:id/cancel     DELETE          cancel scheduled execution
/executions/:id/logs       GET             get execution logs
/rules                     GET             list rules
/rules                     POST            create rule
/rules/:id                 GET             get a rule
/rules/:id                 PUT             update a rule
/rules/:id                 DELETE          delete a rule
/subscribers               GET             list subscribers
/subscribers               POST            create a subscriber
/subscribers/:id           PUT             update a subscriber
/subscribers/:id           DELETE          delete a subscriber
/subscribers/:id/start     PUT             start a subscriber daemon
/subscribers/:id/stop      PUT             stop a subscriber daemon
/notifiers                 GET             list notifiers
/notifiers                 POST            create a notifier
/notifiers/:id             PUT             update a notifier
/notifiers/:id             DELETE          delete a notifier
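
As an illustration only, the snippet below sketches how a client might call such an API from Python using the requests library to create and then execute an action; the base URL, payload fields, and authentication header are assumptions for this sketch and are not mandated by the API above.

    # Hypothetical Python client for the unified serverless API sketched above.
    # The base URL, payload fields, and authentication header are assumptions.
    import requests

    BASE_URL = "https://serverless.example.com/api"    # assumed endpoint
    HEADERS = {"Authorization": "Bearer <token>"}       # assumed auth scheme

    # Create an action (POST /actions) from a small piece of code.
    action = {
        "name": "list_users",
        "language": "python",
        "code": "def handler(event):\n    return {'users': ['alice', 'bob']}",
    }
    resp = requests.post(f"{BASE_URL}/actions", json=action, headers=HEADERS)
    resp.raise_for_status()
    action_id = resp.json()["id"]

    # Execute the action (POST /actions/:id/execute) and list executions.
    requests.post(f"{BASE_URL}/actions/{action_id}/execute", headers=HEADERS)
    print(requests.get(f"{BASE_URL}/executions", headers=HEADERS).json())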









Task queue 104 can include one or more data structures which store tasks to be executed by the serverless computing system 100. The tasks stored in the task queue 104 can come from a plurality of sources, including from a developer/user via the interface 102. A task can be considered an execution unit or an action, which can include a set of binary codes and a shell script. Via interface 102, developers/users can push tasks to the task queue 104.


Task scheduler 106 is configured to schedule and decide how to execute the tasks in the task queue 104. The task scheduler 106 can be responsible for assigning tasks to any one of the workers 110_1, 110_2, . . . 110_N. In some embodiments, the task scheduler 106 can optimize the assignment of tasks from the task queue. In some embodiments, the task scheduler 106 may assign tasks from the task queue according to a suitable assignment scheme, e.g., an assignment scheme which assigns tasks to random workers, etc.


One unique aspect of the serverless computing system 100 is that the networked hardware resources 160 having workers 110_1, 110_2, . . . 110_N can include different serverless computing environments and/or cloud computing environments with heterogeneous characteristics. For instance, the networked hardware resources 160 having workers 110_1, 110_2, . . . 110_N can be implemented in different environments, including but not limited to, AWS Lambda, IBM OpenWhisk, Google Cloud Functions, Windows Azure Functions, OpenStack, a local Docker environment (e.g., a private cloud with support for implementing containers), a local environment (e.g., private cloud) with support for virtual machines, and a local environment (e.g., private cloud) with support for microservices. The networked hardware resources 160 can include resources in one or more of the following: one or more public clouds, one or more private clouds, and one or more hybrid clouds (having both public and private clouds).


The interface 102 abstracts the APIs from the different environments and enables the integration of different environments under a unified API. In some embodiments, the interface 102 also exposes the workers 110_1, 110_2, . . . 110_N in a way that enables developers/users to access the environments or define rules based on the environments. The task scheduler can select one of the available workers 110_1, 110_2, . . . 110_N from any suitable serverless computing environment (private cloud, public cloud, local Docker, etc.), since system 100 is implemented on top of many different serverless computing environments and/or cloud computing environments. This aspect provides a great deal of flexibility for the developer to execute tasks. A developer can even deploy functions in both public and private settings (executing tasks in a hybrid setting). This aspect can potentially speed up the development of applications or new cloud-native applications (in fields such as the internet of things, network function virtualization, etc.).


As an event-driven architecture, the serverless computing system 100 can further include rule checker 120, monitoring agent 130 and/or subscriber manager 140, and events 150. The system 100 can include more than one monitoring agent 130. The system 100 can include more than one subscriber agent 140. The system 100 can include more than one event source 150. The serverless computing system 100 can handle both pull-type and push-type event-driven workflows. Rule checker 120 can receive rules (e.g., rule definitions) from a developer/user, and/or have predefined rules. The rules can take the form of event condition action (ECA), which checks one or more events against one or more conditions and performs one or more actions based on the outcome of the check. In some embodiments, a monitoring agent 130 (e.g., a Kafka monitoring agent, a Rabbit monitoring agent, etc.) can poll an external event source, i.e., events 150. The events monitored by the monitoring agent 130 can be checked by rule checker 120 based on the rules therein. If an action is to be performed based on one or more rules, the one or more actions are added to the task queue as one or more tasks. In some embodiments, a subscriber agent 140 can subscribe to an external event source, i.e., events 150. The events subscribed to by the subscriber agent 140 can be checked by rule checker 120 based on the rules therein. If one or more actions are to be performed based on one or more rules, the one or more actions can be added to the task queue as one or more tasks. In some embodiments, any one or more of the workers 110_1, 110_2, . . . 110_N may generate output which can be fed to events 150, which could in turn trigger tasks to be added to the task queue by the rule checker 120 and the monitoring agent 130 and/or the subscriber agent 140. In some embodiments, any one or more of the workers 110_1, 110_2, . . . 110_N may generate output which can be fed to rule checker 120, which could in turn trigger tasks to be added to the task queue.
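
The following is a minimal sketch of the event-condition-action flow described above, in which events are checked against rule conditions and matching rules enqueue their actions as tasks; the Rule structure, event format, and in-memory queue are simplifying assumptions rather than the actual interfaces of rule checker 120, monitoring agent 130, or subscriber agent 140.

    # Simplified event-condition-action (ECA) loop: each incoming event is
    # checked against rule conditions, and matching rules enqueue their
    # actions as tasks. Rule structure, event format, and queue are assumed.
    from collections import deque
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        condition: Callable[[dict], bool]   # predicate over an event
        action: dict                        # task to enqueue when it matches

    task_queue = deque()                    # stand-in for task queue 104

    def check_event(event: dict, rules: list) -> None:
        """Rule-checker sketch: enqueue actions whose conditions match."""
        for rule in rules:
            if rule.condition(event):
                task_queue.append(rule.action)

    # A monitoring or subscriber agent hands a polled/subscribed event to the
    # rule checker, which turns a matching rule's action into a queued task.
    rules = [Rule(condition=lambda e: e.get("type") == "object_created",
                  action={"task_id": 1, "action": "process_new_object"})]
    check_event({"type": "object_created", "key": "photo.jpg"}, rules)
    print(list(task_queue))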


In some embodiments, the serverless computing system 100 can include a notification system. The interface 102 can accept notification definitions which request notifier 108 to output one or more notifications based on one or more outputs from any one or more of the workers 110_1, 110_2, . . . 110_N. For instance, the success/failure/status from an execution of a task can be output to a developer/user by notifier 108. In another instance, the output data, or a derivation of the output data, from the execution of a task by any one or more of the workers 110_1, 110_2, . . . 110_N can be output to a developer/user by notifier 108. Exemplary notifiers 108 include a Hypertext Transfer Protocol (HTTP) notifier, a Kafka notifier, etc.


In some embodiments, any one of the workers 110_1, 110_2, . . . 110_N can also push/add one or more tasks to the task queue 104.


Task Definition and Processing of Task Chains


Different from other serverless computing architectures, the serverless computing system 100 can receive a task definition which can specify a task chain, e.g., describing a work flow or data processing flow. A task chain can link two or more tasks together to be executed in sequence (e.g., one after another). In some cases, a task chain can be a directed acyclic graph. For instance, a first task can generate some output, and a subsequent second task can process the output from the first task. FIG. 2 illustrates a method for serverless computing which receives a task chain, according to some embodiments of the disclosure. In 202, an interface (e.g., interface 102 of FIG. 1) can receive a task definition. The definition can come from a developer/user. In some cases, the task definition can come from other sources such as rule checker 120 and workers 110_1, 110_2, . . . 110_N. The task definition comprises a first task and a second task chained to the first task (i.e., the task chain). In 204, the interface 102 can add the first task and the second task to a task queue, e.g., task queue 104. In 206, the networked hardware resources (e.g., networked hardware resources 160 having workers 110_1, 110_2, . . . 110_N) can execute the first task from the task queue in a first serverless environment associated with a first serverless environment provider. In 208, the networked hardware resources (e.g., networked hardware resources 160 having workers 110_1, 110_2, . . . 110_N) can execute the second task from the task queue in a second serverless environment selected based on a condition on an output of the first task. In some cases, the first and second serverless computing environments are the same. In some cases, the first and second serverless computing environments are not the same. The latter can be particularly advantageous since the execution of some tasks may be more suited to one serverless computing environment than another. The serverless computing system 100 seen in FIG. 1 enables the task chain to be split into tasks (or sub-tasks), where each of the tasks in the task chain can be executed in a different serverless computing environment (e.g., the task scheduler can assign the tasks to different workers from different serverless computing environments).


A task definition can be provided by the developer/user, and the task definition can take the form: T→{task_id, input_data, task_action_function_code, output_data, next_task_id}. An exemplary task definition comprises: a first task identifier identifying the first task “task_id”, a first pointer/name to input data “input_data”, a task action function code “task_action_function_code”, a second pointer/name to output data “output_data”, and a second task identifier identifying the second task “next_task_id”. FIG. 3 shows an exemplary task definition for a first task 300, according to some embodiments of the disclosure. FIG. 4 shows an exemplary task definition for a second task 400 chained to the first task, according to some embodiments of the disclosure. The first task specifies the next_task_id to be “2” so that the first task is chained to the second task. The first task's action finds a list of users from a database and outputs the list of users (“user list”). The second task's action takes the list of users and sends email to the users.
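
For illustration, the two chained task definitions of FIGS. 3 and 4 might be represented as the following dictionaries, patterned on the T→{task_id, input_data, task_action_function_code, output_data, next_task_id} form; the specific field values and script names are assumptions and not the literal contents of the figures.

    # Hypothetical task chain patterned on the form
    # T -> {task_id, input_data, task_action_function_code, output_data, next_task_id}.
    # Field values and script names are illustrative assumptions.
    first_task = {
        "task_id": 1,
        "input_data": "users_db",                      # pointer/name to input data
        "task_action_function_code": "find_users.sh",  # action: find users in a database
        "output_data": "user_list",                    # pointer/name to output data
        "next_task_id": 2,                             # chains the second task
    }

    second_task = {
        "task_id": 2,
        "input_data": "user_list",                     # consumes the first task's output
        "task_action_function_code": "send_emails.sh", # action: email the listed users
        "output_data": "email_report",
        "next_task_id": None,                          # end of the chain
    }

    task_definition = [first_task, second_task]        # both tasks are added to the task queue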


Besides the ability to define a next task as part of the task definition, a developer/user can specify the next serverless environment for the next task based on the output of the current task. Extending the method described in FIG. 2, the method can further include receiving a rule specifying the second serverless environment to be used for the second task if the condition of the output of the first task is met. The rule can further specify a third serverless environment (different from the second serverless environment) to be used for the second task if the condition of the output of the first task is not met. FIG. 5 shows an exemplary rule defined for the second task based on an output of the first task, according to some embodiments of the disclosure. In the exemplary rule, if the length of the output_data of the first task 300 is greater than 1,000,000 (a condition on the output of the first task), then the “serverless_environment_1 worker” is used; otherwise, the “serverless_environment_2 worker” is used (serverless_environment_1 is different from serverless_environment_2). The rule drives the selection of the serverless environment for the next task's execution based on a condition on the output of the previous task. Such a rule can be advantageous if one of the serverless environments is a better (e.g., cheaper, more efficient, etc.) task execution environment for the second task when the length of the output data is large.
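
A minimal sketch of how such a rule might be represented and evaluated is shown below; the threshold and environment names follow the example above, while the rule structure and helper function are illustrative assumptions.

    # Illustrative rule in the spirit of FIG. 5: select the second task's
    # serverless environment based on a condition on the first task's output.
    # The rule structure and helper function are assumptions for this sketch.
    rule = {
        "task_id": 2,                                       # task the rule applies to
        "condition": lambda output_data: len(output_data) > 1_000_000,
        "if_met": "serverless_environment_1",               # environment/worker to use
        "if_not_met": "serverless_environment_2",
    }

    def select_environment(rule: dict, previous_output: bytes) -> str:
        """Return the environment for the next task, given the previous task's output."""
        if rule["condition"](previous_output):
            return rule["if_met"]
        return rule["if_not_met"]

    # A large first-task output routes the second task to serverless_environment_1.
    print(select_environment(rule, previous_output=b"x" * 2_000_000))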


Task Scheduling


The function of assigning tasks to a worker in a particular serverless environment (the function of task scheduler 106 of FIG. 1) is not trivial. Most serverless environments are delivered as a service, but the characteristics and capabilities differ drastically from one service to another. An environment can be part of the public cloud, or on-premise in a local environment. Selecting an optimized serverless environment for the task at hand can be challenging. FIG. 6 shows an exemplary method for scheduling a task chain, according to some embodiments of the disclosure. The method can enable an optimized selection of serverless computing environments and/or virtualized computing environments (i.e., assign a task to an optimal worker in a task execution environment). The method models dependencies or requirements of a task as constraints. The method also models the capabilities of the serverless environments as constraints. Each serverless environment also has a cost associated with it. In 602, a task scheduler determines a set of dependency constraints for each task of a task chain in the task queue. In 604, the task scheduler determines a cost and a set of capabilities associated with each serverless computing environment. For example, a set of dependency constraints can include the programming language of the code for the task. A set of capabilities can include a programming language that a serverless environment can execute. In some embodiments, a set of dependency constraints for a given task comprises a data locality compliance rule. In some embodiments, a set of dependency constraints for a given task comprises one or more requirements specified in a task definition associated with the given task. In some embodiments, a set of dependency constraints for a given task comprises a rule specifying a particular serverless environment to be used for the given task if a condition of an output of a task previous to the given task in the task chain is met (e.g., a rule such as the one seen in FIG. 5). Some examples of these features are illustrated in FIGS. 7 and 8.


In 606, the task scheduler selects a serverless computing environment for each task of the task chain based on the sets of one or more dependency constraints, the costs, and the sets of one or more capabilities. Selecting the serverless computing environment can include minimizing sum of costs for all tasks in the task chain and ensuring the set of dependency constraints for each task in the task chain is satisfied by the set of capabilities associated with the serverless environment selected for each task in the task chain.



FIG. 7 illustrates a task chain and dependencies of each task in the task chain, according to some embodiments of the disclosure. Every task T_i has a set of dependency constraints {D1, D2, D3 . . . }. In the serverless computing environment described herein, the task queue may have a task chain to be fulfilled as per a single execution workflow. Consider a chain of tasks T={T_1, T_2, . . . T_i . . . } with corresponding constraint sets {{D1_1, D1_2 . . . }, {D2_1, D2_2, . . . }, {Di_1, Di_2, . . . } . . . }. For simplicity, three such tasks in a task chain are shown as T_1 702, T_2 704, and T_3 706, having corresponding sets of dependencies 710, 712, and 714.


Every serverless environment S_j has a cost factor per task Cj_i and a set of capability constraints {Dj′_1, Dj′_2, Dj′_3 . . . }. Consider the set of all available serverless environments S={S_1, S_2, . . . S_j, . . . S_n}. The task scheduler can select an S_j from S for each T_i in T, such that the sum of costs Cj_i over all tasks T_i in T is minimized or decreased, with the constraints for task T_i, {Di_1, Di_2, . . . }, being satisfied by the capabilities {Dj′_1, Dj′_2, Dj′_3 . . . } of the chosen serverless environment S_j. The resulting solution of this optimization problem provides an optimized set of assignment pairs {T_i, S_j}, meaning that the task T_i will be executed in the serverless environment S_j, with the assignment pairs representing an optimal distribution of tasks across the different serverless environments.
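
The selection described above can be sketched as follows, with dependency constraints and capabilities modeled as simple string sets and the search performed by brute force over all (task, environment) assignments; this is an illustrative simplification, not the scheduler's required optimization method, and the example data are assumptions patterned on FIGS. 7 and 8.

    # Minimal sketch of the constrained, cost-minimizing assignment: each task
    # T_i gets an environment S_j whose capabilities cover the task's dependency
    # constraints, minimizing the summed per-task costs. Brute force and
    # set-valued constraints are illustrative simplifications.
    from itertools import product

    def schedule(tasks, environments):
        """tasks: {task_name: set of dependency constraints}
        environments: {env_name: (cost, set of capabilities)}
        Returns the cheapest feasible {task_name: env_name} assignment, or None."""
        best, best_cost = None, float("inf")
        task_names = list(tasks)
        for combo in product(environments, repeat=len(task_names)):
            assignment = dict(zip(task_names, combo))
            # Every task's constraints must be covered by its environment's capabilities.
            if all(tasks[t] <= environments[e][1] for t, e in assignment.items()):
                cost = sum(environments[e][0] for e in assignment.values())
                if cost < best_cost:
                    best, best_cost = assignment, cost
        return best

    # Toy data in the spirit of FIGS. 7 and 8 (values are assumptions):
    tasks = {
        "T_1": {"locality:DC_1", "numpy/python/ubuntu"},
        "T_2": {"locality:DC_2"},
        "T_3": {"locality:DC_2"},
    }
    environments = {
        "S_1": (5, {"locality:DC_1", "windows"}),
        "S_2": (10, {"locality:DC_1", "numpy/python/ubuntu"}),
        "S_4": (5, {"locality:DC_2", "numpy/python/ubuntu"}),
    }
    print(schedule(tasks, environments))  # {'T_1': 'S_2', 'T_2': 'S_4', 'T_3': 'S_4'}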



FIG. 8 shows exemplary serverless computing environments and workers, according to some embodiments of the disclosure. Consider a task chain that involves a set of three tasks (e.g., as seen in FIG. 7), and a set of available serverless execution platforms S_1 804 (cost=5), S_2 806 (cost=10), S_3 808 (cost=20), S_4 824 (cost=5), S_5 826 (cost=10), and S_6 846 (cost=20), of which S_1 804, S_2 806, and S_3 808 are located in Datacenter DC_1 802, S_4 824 and S_5 826 are in Datacenter DC_2 822, and S_6 846 is in a public cloud serverless environment. Exemplary dependency constraints of each of the tasks, and exemplary capability constraints of each of the serverless environments, are as follows, in reference to both FIGS. 7 and 8.

    • Set of dependencies 710 of T_1 702 can include {D1_1: Data locality compliance rule: Data stays locally on premise in DC_1, D1_2: Needs numpy, python in an Ubuntu environment}
    • Set of dependencies 712 of T_2 704 can include {D2_1: Data locality compliance rule: text documents reside in a storage server in DC_2}
    • Set of dependencies 714 of T_3 706 can include {D3_1: Backup service should be local to storage server location in DC_2}
    • Set of capabilities 812 of S_1 804 can include {D1′_1: local Windows server environment}
    • Set of capabilities 814 of S_2 806 can include {D2′_1: has numpy, python Ubuntu container images}
    • Set of capabilities 816 of S_3 808 can include {D3′_1: has numpy, python Ubuntu container images}
    • Set of capabilities 832 of S_4 824 can include {D4′_1: has numpy, python Ubuntu container images}
    • Set of capabilities of S_5 826 can include {D5′_1: has numpy, python Ubuntu container images}
    • Set of capabilities of S_6 846 can include {D6′_1: has numpy, python Ubuntu container images}


Determining an optimized solution to the optimization problem can result in assignment pairs as follows, which satisfy all of the required constraints and optimize the total cost based on the individual costs of each serverless environment. Tasks in a task chain are assigned to the lowest-cost, best-fitting serverless environment in which the task can be executed (e.g., by workers in the assigned serverless environments):

    • Task T_1 702 Serverless env: S_2 806
    • Task T_2 704 Serverless env: S_4 824
    • Task T_3 706 Serverless env: S_4 824


Data Processing System



FIG. 9 depicts a block diagram illustrating an exemplary data processing system 900 (sometimes referred to herein as a “node”) that may be used to implement the functionality associated with any part of the serverless computing system (100 of FIG. 1) or with users (machines) accessing any part of the serverless computing system (e.g., via interface 102), according to some embodiments of the disclosure. For instance, networked hardware resources having the functionalities implemented thereon may have one or more of the components of the system 900.


As shown in FIG. 9, the data processing system 900 may include at least one processor 902 coupled to memory elements 904 through a system bus 906. As such, the data processing system may store program code within memory elements 904. Further, the processor 902 may execute the program code accessed from the memory elements 904 via a system bus 906. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 900 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this specification.


The memory elements 904 may include one or more physical memory devices such as, for example, local memory 908 and one or more bulk storage devices 910. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 900 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 910 during execution.


Input/output (I/O) devices depicted as an input device 912 and an output device 914 optionally can be coupled to the data processing system. User (machines) accessing the interface 102 would typically have such I/O devices. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers. In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in FIG. 9 with a dashed line surrounding the input device 912 and the output device 914). An example of such a combined device is a touch sensitive display, also sometimes referred to as a “touch screen display” or simply “touch screen”. In such an embodiment, input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.


A network adapter 916 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 900, and a data transmitter for transmitting data from the data processing system 900 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 900.


As pictured in FIG. 9, the memory elements 904 may store an application 918. In various embodiments, the application 918 may be stored in the local memory 908, the one or more bulk storage devices 910, or apart from the local memory and the bulk storage devices. It should be appreciated that the data processing system 900 may further execute an operating system (not shown in FIG. 9) that can facilitate execution of the application 918. The application 918, being implemented in the form of executable program code, can be executed by the data processing system 900, e.g., by the processor 902. Responsive to executing the application, the data processing system 900 may be configured to perform one or more operations or method steps described herein.


Persons skilled in the art will recognize that while the elements 902-918 are shown in FIG. 9 as separate elements, in other embodiments their functionality could be implemented in a lesser number of individual elements or distributed over a larger number of components.


EXAMPLES

Example 1 is a method for serverless computing, comprising: receiving a task definition, wherein the task definition comprises a first task and a second task chained to the first task; adding the first task and the second task to a task queue; executing the first task from the task queue using hardware computing resources in a first serverless environment associated with a first serverless environment provider; and executing the second task from the task queue using hardware computing resources in a second serverless environment selected based on a condition on an output of the first task.


In Example 2, the method of Example 1 can further include receiving a rule specifying the second serverless environment to be used for the second task if the condition of the output of the first task is met.


In Example 3, the method of Example 2 can further include: the rule further specifying a third serverless environment to be used for the second task if the condition of the output of the first task is not met.


In Example 4, the method of any one of Examples 1-3 can further include the task definition comprising: a first task identifier identifying the first task, a first pointer to input data, a task action function code, a second pointer to output data, and a second task identifier identifying the second task.


In Example 5, the method of any one of Examples 1-4 can further include: determining a set of dependency constraints for each task of a task chain in the task queue; determining a cost and a set of capabilities associated with each serverless computing environment; and selecting a serverless computing environment for each task of the task chain based on the sets of one or more dependency constraints, the costs, and the sets of one or more capabilities.


In Example 6, the method of Example 5 can further include: selecting the serverless computing environment comprising minimizing sum of costs for all tasks in the task chain and ensuring the set of dependency constraints for each task in the task chain is satisfied by the set of capabilities associated with the serverless environment selected for each task in the task chain.


In Example 7, the method of Example 5 or 6 can further include: the set of dependency constraints comprising the programming language of code for the task, and the set of capabilities comprises a programming language that a serverless environment can execute.


In Example 8, the method of any one of Examples 5-7 can further include: a set of dependency constraints for a given task comprising a data locality compliance rule.


In Example 9, the method of any one of Examples 5-8 can further include a set of dependency constraints for a given task comprising one or more requirements specified in a task definition associated with the given task.


In Example 10, the method of any one of Examples 5-9 can further include: a set of dependency constraints for a given task comprising a rule specifying a particular serverless environment to be used for the given task if a condition of an output of a task previous to the given task in the task chain is met.


Example 11 is a system comprising: at least one memory element; at least one processor coupled to the at least one memory element; an interface that when executed by the at least one processor is configured to: receive a task definition, wherein the task definition comprises a first task and a second task chained to the first task; and adding the first task and the second task to a task queue; and one or more workers provisioned in networked hardware resources of a serverless computing environment that when executed by the at least one processor is configured to: execute the first task from the task queue using hardware computing resources in a first serverless environment associated with a first serverless environment provider; and execute the second task from the task queue using hardware computing resources in a second serverless environment selected based on a condition on an output of the first task.


In Example 12, the system of Example 11 can further include: the interface that when executed by the at least one processor being further configured to: receive a rule specifying the second serverless environment to be used for the second task if the condition of the output of the first task is met.


In Example 13, the system of Example 12 can further include the rule further specifying a third serverless environment to be used for the second task if the condition of the output of the first task is not met.


In Example 14, the system of any one of Examples 11-13 can further include the task definition comprises a first task identifier identifying the first task, a first pointer to input data, a task action function code, a second pointer to output data, and a second task identifier identifying the second task.


In Example 15, the system of any one of Examples 11-14 can further include a task scheduler that, when executed by the at least one processor, is configured to: determine a set of dependency constraints for each task of a task chain in the task queue; determine a cost and a set of capabilities associated with each serverless computing environment; and select a serverless computing environment for each task of the task chain based on the sets of one or more dependency constraints, the costs, and the sets of one or more capabilities.


In Example 16, the system of Example 15 can further include: the task scheduler that when executed by the at least one processor being configured to select the computing environment comprising minimizing sum of costs for all tasks in the task chain and ensuring the set of dependency constraints for each task in the task chain is satisfied by the set of capabilities associated with the serverless environment selected for each task in the task chain.


In Example 17, the system of Example 15 or 16 can further include the set of dependency constraints comprising the programming language of code for the task, and the set of capabilities comprises a programming language that a serverless environment can execute.


In Example 18, the system of any one of Examples 15-17 can further include a set of dependency constraints for a given task comprising a data locality compliance rule.


In Example 19, the system of any one of Examples 15-18 can further include a set of dependency constraints for a given task comprises one or more requirements specified in a task definition associated with the given task.


In Example 20, the system of any one of Examples 15-19 can further include a set of dependency constraints for a given task comprises a rule specifying a particular serverless environment to be used for the given task if a condition of an output of a task previous to the given task in the task chain is met.


Example 21 includes one or more computer-readable non-transitory media comprising one or more instructions, for serverless computing and task scheduling, that when executed on a processor configure the processor to perform one or more operations comprising: receiving a task definition, wherein the task definition comprises a first task and a second task chained to the first task; adding the first task and the second task to a task queue; executing the first task from the task queue using hardware computing resources in a first serverless environment associated with a first serverless environment provider; and executing the second task from the task queue using hardware computing resources in a second serverless environment selected based on a condition on an output of the first task.


In Example 22, the media of Example 21 can further include the operations further comprising: receiving a rule specifying the second serverless environment to be used for the second task if the condition of the output of the first task is met.


In Example 23, the media of Example 22 can further include the rule further specifying a third serverless environment to be used for the second task if the condition of the output of the first task is not met.


In Example 24, the media of Examples 21-23 can further include the task definition comprising a first task identifier identifying the first task, a first pointer to input data, a task action function code, a second pointer to output data, and a second task identifier identifying the second task.


In Example 25, the media of any one of Examples 21-24 can further include the operations further comprising: determining a set of dependency constraints for each task of a task chain in the task queue; determining a cost and a set of capabilities associated with each serverless computing environment; and selecting a serverless computing environment for each task of the task chain based on the sets of one or more dependency constraints, the costs, and the sets of one or more capabilities.


In Example 26, the media of Example 25 can further include selecting the serverless computing environment comprising minimizing sum of costs for all tasks in the task chain and ensuring the set of dependency constraints for each task in the task chain is satisfied by the set of capabilities associated with the serverless environment selected for each task in the task chain.


In Example 27, the media of Examples 25 or 26 can further include the set of dependency constraints comprising the programming language of code for the task, and the set of capabilities comprises a programming language that a serverless environment can execute.


In Example 28, the media of any one of Examples 25-27 can further include a set of dependency constraints for a given task comprising a data locality compliance rule.


In Example 29, the media of any one of Examples 25-28 can further include a set of dependency constraints for a given task comprising one or more requirements specified in a task definition associated with the given task.


In Example 30, the media of any one of Examples 25-28, can further include a set of dependency constraints for a given task comprising a rule specifying a particular serverless environment to be used for the given task if a condition of an output of a task previous to the given task in the task chain is met.


Example 31 is one or more apparatus comprising means for carrying out any one or more parts of the methods described in Examples 1-10.


As used herein, “a set” of, e.g., dependency constraints, requirements, capabilities, etc., can include just one such element in the set, or more than one element in the set.


Variations and Implementations

Within the context of the disclosure, the cloud includes a network. A network, as used herein, represents a series of points, nodes, or network elements of interconnected communication paths for receiving and transmitting packets of information that propagate through a communication system. A network offers a communicative interface between sources and/or hosts, and may be any local area network (LAN), wireless local area network (WLAN), metropolitan area network (MAN), Intranet, Extranet, Internet, WAN, virtual private network (VPN), or any other appropriate architecture or system that facilitates communications in a network environment depending on the network topology. A network can comprise any number of hardware or software elements coupled to (and in communication with) each other through a communications medium.


As used herein in this Specification, the term ‘network element’ or ‘node’ in the cloud is meant to encompass any of the aforementioned elements, as well as servers (physical or virtually implemented on physical hardware), machines (physical or virtually implemented on physical hardware), end user devices, routers, switches, cable boxes, gateways, bridges, loadbalancers, firewalls, inline service nodes, proxies, processors, modules, or any other suitable device, component, element, proprietary appliance, or object operable to exchange, receive, and transmit information in a network environment. These network elements may include any suitable hardware, software, components, modules, interfaces, or objects that facilitate the disclosed operations. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information.


In one implementation, components seen in FIG. 1 and other components described herein may include software to achieve (or to foster) the functions discussed herein for serverless computing and task scheduling where the software is executed on one or more processors to carry out the functions. This could include the implementation of instances of an optimizer, provisioner, and/or any other suitable element that would foster the activities discussed herein. Additionally, each of these elements can have an internal structure (e.g., a processor, a memory element, etc.) to facilitate some of the operations described herein. Exemplary internal structure includes elements shown in data processing system in FIG. 9. In other embodiments, these functions for serverless computing and task scheduling may be executed externally to these elements, or included in some other network element to achieve the intended functionality. Alternatively, the components seen in FIG. 1 and other components described herein may include software (or reciprocating software) that can coordinate with other network elements in order to achieve the serverless computing and task scheduling functions described herein. In still other embodiments, one or several devices may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof.


In certain example implementations, the functions outlined herein may be implemented by logic encoded in one or more non-transitory, tangible media (e.g., embedded logic provided in an application specific integrated circuit [ASIC], digital signal processor [DSP] instructions, software [potentially inclusive of object code and source code] to be executed by one or more processors, or other similar machine, etc.). In some of these instances, one or more memory elements can store data used for the operations described herein. This includes the memory element being able to store instructions (e.g., software, code, etc.) that are executed to carry out the activities described in this Specification. The memory element is further configured to store information such as task definitions, task queues, rules, dependencies, costs, and capabilities described herein. The processor can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification. In one example, the processor could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by the processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array [FPGA], an erasable programmable read only memory (EPROM), an electrically erasable programmable ROM (EEPROM)) or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof.


Any of these elements (e.g., the network elements, etc.) can include memory elements for storing information to be used in achieving the optimization functions, as outlined herein. Additionally, each of these devices may include a processor that can execute software or an algorithm to perform the optimization activities as discussed in this Specification. These devices may further keep information in any suitable memory element [random access memory (RAM), ROM, EPROM, EEPROM, ASIC, etc.], software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Any of the memory items discussed herein should be construed as being encompassed within the broad term ‘memory element.’ Similarly, any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term ‘processor.’ Each of the network elements can also include suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment.


Additionally, it should be noted that with the examples provided above, interaction may be described in terms of two, three, or four network elements. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of network elements. It should be appreciated that the systems described herein are readily scalable and, further, can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad techniques of serverless computing and task scheduling, as potentially applied to a myriad of other architectures.


It is also important to note that the parts of the flow diagram in the FIGS. 2 and 6 illustrate only some of the possible scenarios that may be executed by, or within, the components shown (e.g., in FIGS. 1 and 7) and described herein. Some of these steps may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the present disclosure. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by the components shown and described herein, in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure.


The term “system” is used generically herein to describe any number of components, elements, sub-systems, devices, packet switch elements, packet switches, routers, networks, computer and/or communication devices or mechanisms, or combinations of components thereof. The term “computer” is used generically herein to describe any number of computers, including, but not limited to personal computers, embedded processing elements and systems, control logic, ASICs, chips, workstations, mainframes, etc. The term “processing element” is used generically herein to describe any type of processing mechanism or device, such as a processor, ASIC, field programmable gate array, computer, etc. The term “device” is used generically herein to describe any type of mechanism, including a computer or system or component thereof. The terms “task” and “process” are used generically herein to describe any type of running program, including, but not limited to a computer process, task, thread, executing application, operating system, user process, device driver, native code, machine or other language, etc., and can be interactive and/or non-interactive, executing locally and/or remotely, executing in foreground and/or background, executing in the user and/or operating system address spaces, a routine of a library and/or standalone application, and is not limited to any particular memory partitioning technique. The steps, connections, and processing of signals and information illustrated in the FIGURES, including, but not limited to any block and flow diagrams and message sequence charts, may typically be performed in the same or in a different serial or parallel ordering and/or by different components and/or processes, threads, etc., and/or over different connections and be combined with other functions in other embodiments, unless this disables the embodiment or a sequence is explicitly or implicitly required (e.g., for a sequence of read the value, process the value—the value must be obtained prior to processing it, although some of the associated processing may be performed prior to, concurrently with, and/or after the read operation). Furthermore, the term “identify” is used generically to describe any manner or mechanism for directly or indirectly ascertaining something, which may include, but is not limited to receiving, retrieving from memory, determining, defining, calculating, generating, etc.


Moreover, the terms “network” and “communications mechanism” are used generically herein to describe one or more networks, communications mediums or communications systems, including, but not limited to the Internet, private or public telephone, cellular, wireless, satellite, cable, local area, metropolitan area and/or wide area networks, a cable, electrical connection, bus, etc., and internal communications mechanisms such as message passing, interprocess communications, shared memory, etc. The term “message” is used generically herein to describe a piece of information which may or may not be, but is typically communicated via one or more communication mechanisms of any type.


Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.


The mention of one or more advantages herein does not in any way suggest that any one of the embodiments necessarily provides all of the described advantages or that all embodiments of the present disclosure necessarily provide any one of the described advantages.

Claims
  • 1. A method for serverless computing, comprising: receiving a task definition, wherein the task definition comprises a first task and a second task chained to the first task;adding the first task and the second task to a task queue;determining a set of dependency constraints for each task of a task chain in the task queue;determining a cost and a set of capabilities associated with each serverless computing environment;executing the first task from the task queue using hardware computing resources in a first serverless environment associated with a first serverless environment provider;first selecting, based on sets of dependency constraints, the costs, the sets of capabilities, and an output of the first task, a second serverless environment out of a plurality of serverless environments to execute the second task from the task queue; andexecuting the second task from the task queue using hardware computing resources in the second serverless environment.
  • 2. The method of claim 1, wherein the task definition comprises a first task identifier identifying the first task, a first pointer to input data, a task action function code, a second pointer to output data, and a second task identifier identifying the second task.
  • 3. The method of claim 1, wherein the first selecting the serverless computing environment comprises minimizing a sum of costs for all tasks in the task chain and ensuring the set of dependency constraints for each task in the task chain is satisfied by the set of capabilities associated with the serverless environment selected for each task in the task chain.
  • 4. The method of claim 1, wherein the set of dependency constraints comprises code for the task, and the set of capabilities comprises a programming language that a serverless environment can execute.
  • 5. The method of claim 1, wherein a set of dependency constraints for a given task comprises a data locality compliance rule.
  • 6. The method of claim 1, wherein a set of dependency constraints for a given task comprises one or more requirements specified in a task definition associated with the given task.
  • 7. The method of claim 1, wherein a set of dependency constraints for a given task comprises a rule specifying a particular serverless environment to be used for the given task if a condition of an output of a task previous to the given task in the task chain is met.
  • 8. A system comprising: at least one memory element; at least one processor coupled to the at least one memory element; an interface that when executed by the at least one processor is configured to: receive a task definition, wherein the task definition comprises a first task and a second task chained to the first task, and add the first task and the second task to a task queue; and one or more workers provisioned in networked hardware resources of a serverless computing environment that when executed by the at least one processor are configured to: determine a set of dependency constraints for each task of a task chain in the task queue; determine a cost and a set of capabilities associated with each serverless computing environment; execute the first task from the task queue using hardware computing resources in a first serverless environment associated with a first serverless environment provider; first select, based on sets of dependency constraints, the costs, the sets of capabilities, and an output of the first task, a second serverless environment out of a plurality of serverless environments to execute the second task from the task queue; and execute the second task from the task queue using hardware computing resources in the second serverless environment.
  • 9. The system of claim 8, wherein the task definition comprises a first task identifier identifying the first task, a first pointer to input data, a task action function code, a second pointer to output data, and a second task identifier identifying the second task.
  • 10. One or more computer-readable non-transitory media comprising one or more instructions, for serverless computing and task scheduling, that when executed on a processor configure the processor to perform one or more operations comprising: receiving a task definition, wherein the task definition comprises a first task and a second task chained to the first task; adding the first task and the second task to a task queue; determining a set of dependency constraints for each task of a task chain in the task queue; determining a cost and a set of capabilities associated with each serverless computing environment; executing the first task from the task queue using hardware computing resources in a first serverless environment associated with a first serverless environment provider; first selecting, based on sets of dependency constraints, the costs, the sets of capabilities, and an output of the first task, a second serverless environment out of a plurality of serverless environments to execute the second task from the task queue; and executing the second task from the task queue using hardware computing resources in the second serverless environment.
  • 11. The media of claim 10, wherein the first selecting the serverless computing environment comprises minimizing a sum of costs for all tasks in the task chain and ensuring the set of dependency constraints for each task in the task chain is satisfied by the set of capabilities associated with the serverless environment selected for each task in the task chain.
  • 12. The media of claim 10, wherein the set of dependency constraints comprises the programming language of code for the task, and the set of capabilities comprises a programming language that a serverless environment can execute.
  • 13. The media of claim 10, wherein a set of dependency constraints for a given task comprises one or more of the following: a data locality compliance rule, and one or more requirements specified in a task definition associated with the given task.
  • 14. The media of claim 10, wherein a set of dependency constraints for a given task comprises a rule specifying a particular serverless environment to be used for the given task if a condition of an output of a task previous to the given task in the task chain is met.
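The claims above recite a task definition that carries a task identifier, pointers to input and output data, task action function code, and the identifier of a second task chained to the first (claim 2), together with rules that pin a task to a particular serverless environment when a condition on the output of the previous task in the chain is met (claim 7). The sketch below is one possible, purely illustrative representation of such a chained task definition and rule in Python; the field names, dataclass layout, and URIs are assumptions made for illustration and are not prescribed by the disclosure.

from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Rule:
    # If `condition` holds for the previous task's output, run this task in
    # `environment` (cf. claim 7); otherwise the scheduler is free to choose.
    condition: Callable[[object], bool]
    environment: str

@dataclass
class TaskDefinition:
    # Fields loosely mirroring claim 2: a task identifier, a first pointer to
    # input data, task action function code, a second pointer to output data,
    # and the identifier of the next task chained to this one.
    task_id: str
    input_uri: str                       # first pointer: where input is read
    action: Callable                     # task action function code
    output_uri: str                      # second pointer: where output is written
    next_task_id: Optional[str] = None   # identifier of the chained task, if any
    rules: List[Rule] = field(default_factory=list)

# A two-task chain: the second task is pinned to a hypothetical "on_premises"
# environment whenever the first task's output exceeds a size threshold.
first = TaskDefinition(
    task_id="resize_image",
    input_uri="s3://example-bucket/raw.jpg",
    action=lambda data: data,            # placeholder action body
    output_uri="s3://example-bucket/resized.jpg",
    next_task_id="classify_image",
)
second = TaskDefinition(
    task_id="classify_image",
    input_uri="s3://example-bucket/resized.jpg",
    action=lambda data: "label",         # placeholder action body
    output_uri="s3://example-bucket/labels.json",
    rules=[Rule(condition=lambda output_size: output_size > 1_000_000,
                environment="on_premises")],
)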
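Claims 1, 3, 10, and 11 further recite selecting a serverless environment for each task so that the sum of costs across the task chain is minimized while each task's dependency constraints are satisfied by the capabilities of the environment selected for it. The following is a minimal sketch of one way such a selection could be performed, using a brute-force search over assignments; the provider names, cost values, and capability labels are invented for illustration and do not reflect any particular provider.

from itertools import product

# Hypothetical catalogue of serverless environments, each with a per-task cost
# and a set of capabilities (supported languages, data-locality regions, etc.).
ENVIRONMENTS = {
    "provider_a": {"cost": 3.0, "capabilities": {"python", "region:us"}},
    "provider_b": {"cost": 1.0, "capabilities": {"nodejs", "region:eu"}},
    "provider_c": {"cost": 2.0, "capabilities": {"python", "nodejs", "region:eu"}},
}

def select_environments(task_chain):
    """Assign an environment to every task in the chain, minimizing the total
    cost while ensuring each task's dependency constraints are covered by the
    chosen environment's capabilities (illustrative exhaustive search)."""
    best_assignment, best_cost = None, float("inf")
    for assignment in product(ENVIRONMENTS, repeat=len(task_chain)):
        # Skip assignments in which any task has unmet dependency constraints.
        if any(task["constraints"] - ENVIRONMENTS[env]["capabilities"]
               for task, env in zip(task_chain, assignment)):
            continue
        total = sum(ENVIRONMENTS[env]["cost"] for env in assignment)
        if total < best_cost:
            best_assignment, best_cost = assignment, total
    return best_assignment, best_cost

if __name__ == "__main__":
    # Two chained tasks: the first needs Python, the second must also stay in the EU.
    chain = [
        {"name": "extract", "constraints": {"python"}},
        {"name": "transform", "constraints": {"python", "region:eu"}},
    ]
    assignment, cost = select_environments(chain)
    print(assignment, cost)  # -> ('provider_c', 'provider_c') 4.0

An exhaustive search is shown only because it is the shortest faithful rendering of minimizing the sum of costs subject to per-task constraints; a practical scheduler would likely filter candidates per task and then apply a cheaper optimization.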
Related Publications (1)
Number Date Country
20180300173 A1 Oct 2018 US