Performance-based hardware emulation in an on-demand network code execution system

Abstract
Systems and methods are described for providing performance-based hardware emulation in an on-demand network code execution system. A user may generate a task on the system by submitting code. The system may determine, based on the code or its execution, that the code executes more efficiently if certain functionality is available, such as an extension to a processor's instruction set. The system may further determine that it can provide the needed functionality using various computing resources, which may include physical hardware, emulated hardware (e.g., a virtual machine), or combinations thereof. The system may then determine and provide a set of computing resources to use when executing the user-submitted code, which may be based on factors such as availability, cost, estimated performance, desired performance, or other criteria. The system may also migrate code from one set of computing resources to another, and may analyze demand and project future computing resource needs.
Description
BACKGROUND

Computing devices can utilize communication networks to exchange data. Companies and organizations operate computer networks that interconnect a number of computing devices to support operations or provide services to third parties. The computing systems can be located in a single geographic location or located in multiple, distinct geographic locations (e.g., interconnected via private or public communication networks). Specifically, hosted computing environments or data processing centers, generally referred to herein as “data centers,” may include a number of interconnected computing systems to provide computing resources to users of the data center. The data centers may be private data centers operated on behalf of an organization, or public data centers operated on behalf, or for the benefit of, the general public.


To facilitate increased utilization of data center resources, virtualization technologies allow a single physical computing device to host one or more instances of virtual machines that appear and operate as independent computing devices to users of a data center. With virtualization, the single physical computing device can create, maintain, delete, or otherwise manage virtual machines in a dynamic manner. In turn, users can request computing resources from a data center, such as single computing devices or a configuration of networked computing devices, and be provided with varying numbers of virtual machine resources.


In some scenarios, a user can request that a data center provide computing resources to execute a particular task. The task may correspond to a set of computer-executable instructions, which the data center may then execute on behalf of the user. The data center may thus further facilitate increased utilization of data center resources.





BRIEF DESCRIPTION OF THE DRAWINGS

Throughout the drawings, reference numbers may be re-used to indicate correspondence between referenced elements. The drawings are provided to illustrate example embodiments described herein and are not intended to limit the scope of the disclosure.



FIG. 1 is a block diagram depicting an illustrative environment in which an on-demand code execution system can execute tasks corresponding to code, which may be submitted by users of the on-demand code execution system, and can determine which computing resources to use to facilitate execution of the submitted code.



FIG. 2 depicts a general architecture of a computing device providing an emulation performance analysis system that is configured to determine the computing resources used to facilitate execution of tasks on the on-demand code execution system of FIG. 1.



FIG. 3 is a flow diagram depicting illustrative interactions for submitting code corresponding to a task to the on-demand code execution system of FIG. 1, and for the on-demand code execution system to analyze the code and determine a set of computing resources that may be used to facilitate execution.



FIG. 4 is a flow diagram depicting illustrative interactions for the on-demand code execution system of FIG. 1 to analyze performance metrics associated with the execution of code using a particular set of computing resources, determine whether an alternate set of computing resources is available and would be preferable, and if so migrate the code execution to the alternate set of computing resources.



FIG. 5 is a flow chart depicting an illustrative routine for analyzing submitted code to determine a set of resources that may be used to facilitate execution of the code on the on-demand code execution system of FIG. 1.



FIG. 6 is a flow chart depicting an illustrative routine for analyzing the performance of executing code with a particular set of resources on the on-demand code execution system of FIG. 1 and migrating code to alternate sets of resources as needed.





DETAILED DESCRIPTION

Generally described, aspects of the present disclosure relate to an on-demand code execution system. The on-demand code execution system enables rapid execution of code, which may be supplied by users of the on-demand code execution system. More specifically, embodiments of the present disclosure relate to improving the performance of an on-demand code execution system that is implemented using various computing resources. As described in detail herein, the on-demand code execution system may provide a network-accessible service enabling users to submit or designate computer-executable code to be executed by virtual machine instances on the on-demand code execution system. Each set of code on the on-demand code execution system may define a “task,” and implement specific functionality corresponding to that task when executed on a virtual machine instance of the on-demand code execution system. Individual implementations of the task on the on-demand code execution system may be referred to as an “execution” of the task (or a “task execution”). The on-demand code execution system can further enable users to trigger execution of a task based on a variety of potential events, such as detecting new data at a network-based storage system, transmission of an application programming interface (“API”) call to the on-demand code execution system, or transmission of a specially formatted Hypertext Transfer Protocol (“HTTP”) packet to the on-demand code execution system. Thus, users may utilize the on-demand code execution system to execute any specified executable code “on-demand,” without requiring configuration or maintenance of the underlying hardware or infrastructure on which the code is executed. Further, the on-demand code execution system may be configured to execute tasks in a rapid manner (e.g., in under 100 milliseconds [ms]), thus enabling execution of tasks in “real-time” (e.g., with little or no perceptible delay to an end user).
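
By way of non-limiting illustration only, the following sketch (in Python) shows the general shape of a user-submitted task and a call that triggers it; the handler signature, event fields, and "invoke" helper are hypothetical and do not represent any particular system's API.

    # Illustrative sketch only: a minimal user-submitted task and a direct
    # invocation of it. Names and event fields are hypothetical.
    import json

    def handler(event, context=None):
        # The task receives the triggering event (e.g., an API call payload or a
        # notification that new data arrived at a network-based storage system).
        name = event.get("name", "world")
        return {"statusCode": 200, "body": json.dumps({"greeting": f"Hello, {name}"})}

    def invoke(task, payload):
        # Stand-in for the on-demand system dispatching a call to the task.
        return task(json.loads(payload))

    if __name__ == "__main__":
        print(invoke(handler, '{"name": "example"}'))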


The on-demand code execution system may instantiate virtual machine instances to execute the specified tasks on demand. The virtual machine instances may be provisioned with virtual processors or other computing resources, which provide functionality that the user-specified executable code may require during execution. For example, a virtual machine instance may be provisioned with a processor that facilitates or accelerates operations that are frequently used by neural networks. The processor may implement, for example, a particular instruction set (or an extension to an instruction set) that relates to these operations. In some embodiments, the instruction set may also be implemented by the underlying hardware processor of the physical computing device on which the virtual machine instance is instantiated. In other embodiments, the virtual machine instance may emulate a processor that implements a particular instruction set, and the virtual machine instance may be instantiated using a physical processor that does not implement the instruction set. The virtual machine instance may thus translate instructions implemented by the virtual processor into instructions that are implemented by the physical processor, with varying effects on performance or efficiency.
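
As a non-limiting illustration of the distinction between native and translated instruction sets, the following sketch (assuming a Linux host; the specific flag name is only an example) checks whether the underlying physical processor advertises an instruction-set extension before deciding whether a virtual processor could execute such instructions natively or would have to translate them.

    # Illustrative sketch, assuming a Linux host: check whether the physical
    # processor advertises an instruction-set extension (here AVX-512F).
    def host_supports(flag, cpuinfo_path="/proc/cpuinfo"):
        try:
            with open(cpuinfo_path) as f:
                for line in f:
                    if line.startswith("flags"):
                        return flag in line.split()
        except OSError:
            pass
        return False

    if __name__ == "__main__":
        if host_supports("avx512f"):
            print("native execution of AVX-512 instructions is possible")
        else:
            print("AVX-512 instructions would have to be emulated/translated")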


The on-demand code execution system may utilize a pool of computing resources to execute user-submitted code. The pool may include resources that vary in terms of functionality, and the demand for resources that implement particular functionality may exceed the available supply of those resources. For example, several user-submitted tasks may require or prefer a processor that implements a particular instruction set, but only a limited number of these processors may be available. In some embodiments, excess demand for resources that implement particular functionality may be met with resources that emulate the functionality. For example, a virtual machine instance may be instantiated that emulates the processor, as discussed above, and the on-demand code execution system may assign user-submitted tasks to the physical and virtual resources based on factors such as relative performance. For example, a first user-submitted task may be able to execute with acceptable performance on an emulated processor, while a second user-submitted task may execute very slowly or not at all. The on-demand code execution system may thus prioritize the allocation of scarce resources based on assessments of whether the functionality is required or merely desirable in order to execute a particular task, and may further prioritize based on the relative performance of different tasks, such that the tasks that benefit the most from the functionality receive it.
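
The following non-limiting sketch illustrates one possible form of such prioritization (not the claimed system's actual policy): the limited physical processors are given to the tasks that would slow down the most under emulation, and the remaining tasks are assigned emulated processors. The timing figures are hypothetical inputs, e.g., drawn from prior executions or benchmarks.

    # Minimal sketch: allocate scarce physical processors to the tasks that
    # benefit most from them; all other tasks run on emulated processors.
    def assign(tasks, physical_slots):
        # tasks: list of (task_id, est_native_seconds, est_emulated_seconds)
        # benefit = how much slower the task would be if it were emulated
        ranked = sorted(tasks, key=lambda t: t[2] / t[1], reverse=True)
        physical = [t[0] for t in ranked[:physical_slots]]
        emulated = [t[0] for t in ranked[physical_slots:]]
        return physical, emulated

    if __name__ == "__main__":
        tasks = [("neural-net", 2.0, 40.0), ("log-parser", 1.0, 1.3), ("codec", 5.0, 60.0)]
        # The 'neural-net' task gets the single physical slot (largest slowdown ratio).
        print(assign(tasks, physical_slots=1))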


In some embodiments, a virtual machine instance instantiated on a fast physical processor may outperform a virtual or physical computing device that uses a slower processor, even if the slower processor provides functionality that the faster processor does not. For example, an older processor with a particular instruction set may be emulated in a virtual machine instance that is instantiated on a newer processor without the instruction set, and the performance gain realized by executing on the newer processor may more than offset the performance overhead associated with emulating the processor or instruction set. It will thus be understood that the on-demand code execution system is not limited to physical processors when determining a recommended set of computing resources, and that virtual emulation of a physical processor may provide superior performance.
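
A simple back-of-the-envelope calculation illustrates this point; all figures below are hypothetical and not measurements of any particular processor.

    # Emulating an older instruction set on a faster, newer processor can still win.
    old_native_ops_per_sec = 1.0e9   # older CPU, instruction set in hardware
    new_cpu_ops_per_sec = 3.0e9      # newer CPU, lacks the instruction set
    emulation_overhead = 0.5         # emulation retains 50% of raw throughput

    effective_emulated = new_cpu_ops_per_sec * emulation_overhead
    print(effective_emulated > old_native_ops_per_sec)   # True: 1.5e9 > 1.0e9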


As will be appreciated by one of skill in the art in light of the present disclosure, the embodiments disclosed herein improve the ability of computing systems, such as on-demand code execution systems, to execute code in an efficient manner. Moreover, the presently disclosed embodiments address technical problems inherent within computing systems; specifically, the limited nature of computing resources with which to execute code, the resource overhead associated with provisioning virtual machines to facilitate code execution, and the inefficiencies caused by provisioning functionality that is not utilized (or not provisioning functionality that would be utilized if available). These technical problems are addressed by the various technical solutions described herein, including the provisioning of an execution environment based on the functionality required by the code to be executed. Thus, the present disclosure represents an improvement on existing data processing systems and computing systems in general.


The on-demand code execution system may include a virtual machine instance manager configured to receive user code (threads, programs, etc., composed in any of a variety of programming languages) and execute the code in a highly scalable, low latency manner, without requiring user configuration of a virtual machine instance. Specifically, the virtual machine instance manager can, prior to receiving the user code and prior to receiving any information from a user regarding any particular virtual machine instance configuration, create and configure virtual machine instances according to a predetermined set of configurations, each corresponding to any one or more of a variety of run-time environments. Thereafter, the virtual machine instance manager receives user-initiated requests to execute code, and identifies a pre-configured virtual machine instance to execute the code based on configuration information associated with the request. The virtual machine instance manager can further allocate the identified virtual machine instance to execute the user's code at least partly by creating and configuring containers inside the allocated virtual machine instance, and provisioning the containers with code of the task as well as any dependency code objects. Various embodiments for implementing a virtual machine instance manager and executing user code on virtual machine instances are described in more detail in U.S. Pat. No. 9,323,556, entitled “PROGRAMMATIC EVENT DETECTION AND MESSAGE GENERATION FOR REQUESTS TO EXECUTE PROGRAM CODE,” and filed Sep. 30, 2014 (the “'556 Patent”), the entirety of which is hereby incorporated by reference.
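
The following greatly simplified, non-limiting sketch illustrates the pre-warming idea described above; the class and method names are illustrative assumptions and do not reflect the '556 Patent's actual implementation.

    # Simplified sketch of a pre-warmed instance pool and container provisioning.
    class InstanceManager:
        def __init__(self):
            # Instances are created *before* any user code arrives, one pool per
            # supported run-time configuration.
            self.warm_pool = {"python3.9": ["vm-1", "vm-2"], "nodejs18": ["vm-3"]}

        def execute(self, runtime, code, dependencies):
            pool = self.warm_pool.get(runtime, [])
            if not pool:
                raise RuntimeError(f"no pre-configured instance for {runtime}")
            vm = pool.pop(0)                        # identify a pre-configured instance
            container = f"{vm}/container-for-task"  # create a container inside it
            # Provision the container with the task code and its dependency objects.
            return {"vm": vm, "container": container, "code": code, "deps": dependencies}

    if __name__ == "__main__":
        mgr = InstanceManager()
        print(mgr.execute("python3.9", "def handler(e): ...", ["numpy"]))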


As used herein, the term “virtual machine instance” is intended to refer to an execution of software or other executable code that emulates hardware to provide an environment or platform on which software may execute (an “execution environment”). Virtual machine instances are generally executed by hardware devices, which may differ from the physical hardware emulated by the virtual machine instance. For example, a virtual machine may emulate a first type of processor and memory while being executed on a second type of processor and memory. Thus, virtual machines can be utilized to execute software intended for a first execution environment (e.g., a first operating system) on a physical device that is executing a second execution environment (e.g., a second operating system). In some instances, hardware emulated by a virtual machine instance may be the same or similar to hardware of an underlying device. For example, a device with a first type of processor may implement a plurality of virtual machine instances, each emulating an instance of that first type of processor. Thus, virtual machine instances can be used to divide a device into a number of logical sub-devices (each referred to as a “virtual machine instance”). While virtual machine instances can generally provide a level of abstraction away from the hardware of an underlying physical device, this abstraction is not required. For example, assume a device implements a plurality of virtual machine instances, each of which emulate hardware identical to that provided by the device. Under such a scenario, each virtual machine instance may allow a software application to execute code on the underlying hardware without translation, while maintaining a logical separation between software applications running on other virtual machine instances. This process, which is generally referred to as “native execution,” may be utilized to increase the speed or performance of virtual machine instances. Other techniques that allow direct utilization of underlying hardware, such as hardware pass-through techniques, may be used as well.


While a virtual machine executing an operating system is described herein as one example of an execution environment, other execution environments are also possible. For example, tasks or other processes may be executed within a software “container,” which provides a runtime environment without itself providing virtualization of hardware. Containers may be implemented within virtual machines to provide additional security, or may be run outside of a virtual machine instance.


Although example embodiments are described herein with regard to processor instruction sets, it will be understood that the present disclosure is not limited to any particular computing resource or functionality. For example, code may be analyzed to determine that it spends a significant amount of time waiting for a storage device to read or write information, and a determination may be made to provide a higher-speed data store (e.g., a memory cache or solid state device) to facilitate more efficient execution of the code. As a further example, code or performance metrics may be analyzed to determine that providing a particular graphics processing unit (“GPU”) would facilitate execution of the user-submitted task, and the identified GPU may be provided or emulated. As a still further example, code may be analyzed to determine that it is optimized for a particular type of memory, such as non-volatile random access memory (“NVRAM”) or dynamic random access memory (“DRAM”), and the particular type of memory may be supplied. The example embodiments are thus understood to be illustrative and not limiting.
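
By way of non-limiting illustration, the following sketch maps observed execution metrics to a resource suggestion in the spirit of the examples above; the metric names and thresholds are hypothetical, not part of any disclosed embodiment.

    # Illustrative only: map observed execution metrics to a resource suggestion.
    def suggest_resources(metrics):
        suggestions = []
        if metrics.get("io_wait_fraction", 0.0) > 0.3:
            suggestions.append("higher-speed data store (memory cache or SSD)")
        if metrics.get("matrix_op_fraction", 0.0) > 0.5:
            suggestions.append("GPU or processor with vector/tensor instructions")
        if metrics.get("memory_bandwidth_bound", False):
            suggestions.append("particular memory type (e.g., DRAM or NVRAM)")
        return suggestions or ["default configuration"]

    if __name__ == "__main__":
        print(suggest_resources({"io_wait_fraction": 0.45, "matrix_op_fraction": 0.1}))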


In some embodiments, a user may submit code that requires particular functionality, or code that runs more efficiently if certain functionality is provided, without being aware of the dependency. For example, the user-submitted code may make use of a third-party library, and the library may require the functionality or make extensive use of it if available. In other embodiments, the user may be aware that particular functionality is needed, but may not know whether the on-demand code execution system provides the functionality or if so whether the functionality is currently available. By implementing the embodiments described herein, the on-demand code execution system addresses these issues and allows the user to submit code without identifying the functionality it requires, and without having to specifically request that the on-demand code execution system provide the functionality.


Embodiments of the disclosure will now be described with reference to the accompanying figures, wherein like numerals refer to like elements throughout. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner, simply because it is being utilized in conjunction with a detailed description of certain specific embodiments of the invention. Furthermore, embodiments of the invention may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the inventions herein described.



FIG. 1 is a block diagram of an illustrative operating environment 100 in which an on-demand code execution system 110 may operate based on communication with user computing devices 102, auxiliary services 106, and network-based data storage services 108. In general, the user computing devices 102 can be any computing device such as a desktop, laptop or tablet computer, personal computer, wearable computer, server, personal digital assistant (PDA), hybrid PDA/mobile phone, mobile phone, electronic book reader, set-top box, voice command device, camera, digital media player, and the like. The on-demand code execution system 110 may provide the user computing devices 102 with one or more user interfaces, command-line interfaces (CLIs), application programming interfaces (APIs), and/or other programmatic interfaces for generating and uploading user-executable code (e.g., including metadata identifying dependency code objects for the uploaded code), invoking the user-provided code (e.g., submitting a request to execute the user codes on the on-demand code execution system 110), scheduling event-based jobs or timed jobs, tracking the user-provided code, and/or viewing other logging or monitoring information related to their requests and/or user codes. Although one or more embodiments may be described herein as using a user interface, it should be appreciated that such embodiments may, additionally or alternatively, use any CLIs, APIs, or other programmatic interfaces.


The illustrative environment 100 further includes one or more network-based data storage services 108, configured to enable the on-demand code execution system 110 to store and retrieve data from one or more persistent or substantially persistent data sources. Illustratively, the network-based data storage services 108 may enable the on-demand code execution system 110 to store information corresponding to a task, such as code or metadata, to store additional code objects representing dependencies of tasks, to retrieve data to be processed during execution of a task, and to store information (e.g., results) regarding that execution. The network-based data storage services 108 may represent, for example, a relational or non-relational database. In another example, the network-based data storage services 108 may represent a network-attached storage (NAS), configured to provide access to data arranged as a file system. The network-based data storage services 108 may further enable the on-demand code execution system 110 to query for and retrieve information regarding data stored within the on-demand code execution system 110, such as by querying for a number of relevant files or records, sizes of those files or records, file or record names, file or record creation times, etc. In some instances, the network-based data storage services 108 may provide additional functionality, such as the ability to separate data into logical groups (e.g., groups associated with individual accounts, etc.). While shown as distinct from the auxiliary services 106, the network-based data storage services 108 may in some instances also represent a type of auxiliary service 106.


The user computing devices 102, auxiliary services 106, and network-based data storage services 108 may communicate with the on-demand code execution system 110 via a network 104, which may include any wired network, wireless network, or combination thereof. For example, the network 104 may be a personal area network, local area network, wide area network, over-the-air broadcast network (e.g., for radio or television), cable network, satellite network, cellular telephone network, or combination thereof. As a further example, the network 104 may be a publicly accessible network of linked networks, possibly operated by various distinct parties, such as the Internet. In some embodiments, the network 104 may be a private or semi-private network, such as a corporate or university intranet. The network 104 may include one or more wireless networks, such as a Global System for Mobile Communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or any other type of wireless network. The network 104 can use protocols and components for communicating via the Internet or any of the other aforementioned types of networks. For example, the protocols used by the network 104 may include Hypertext Transfer Protocol (HTTP), HTTP Secure (HTTPS), Message Queue Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), and the like. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art and, thus, are not described in more detail herein.


The on-demand code execution system 110 is depicted in FIG. 1 as operating in a distributed computing environment including several computer systems that are interconnected using one or more computer networks (not shown in FIG. 1). The on-demand code execution system 110 could also operate within a computing environment having a fewer or greater number of devices than are illustrated in FIG. 1. Thus, the depiction of the on-demand code execution system 110 in FIG. 1 should be taken as illustrative and not limiting to the present disclosure. For example, the on-demand code execution system 110 or various constituents thereof could implement various Web services components, hosted or “cloud” computing environments, and/or peer to peer network configurations to implement at least a portion of the processes described herein.


Further, the on-demand code execution system 110 may be implemented directly in hardware or software executed by hardware devices and may, for instance, include one or more physical or virtual servers implemented on physical computer hardware configured to execute computer executable instructions for performing various features that will be described herein. The one or more servers may be geographically dispersed or geographically co-located, for instance, in one or more data centers. In some instances, the one or more servers may operate as part of a system of rapidly provisioned and released computing resources, often referred to as a “cloud computing environment.”


In the example of FIG. 1, the on-demand code execution system 110 is illustrated as connected to the network 104. In some embodiments, any of the components within the on-demand code execution system 110 can communicate with other components of the on-demand code execution system 110 via the network 104. In other embodiments, not all components of the on-demand code execution system 110 are capable of communicating with other components of the virtual environment 100. In one example, only the frontend 120 (which may in some instances represent multiple frontends 120) may be connected to the network 104, and other components of the on-demand code execution system 110 may communicate with other components of the environment 100 via the frontends 120.


In FIG. 1, users, by way of user computing devices 102, may interact with the on-demand code execution system 110 to provide executable code, and establish rules or logic defining when and how such code should be executed on the on-demand code execution system 110, thus establishing a “task.” For example, a user may wish to run a piece of code in connection with a web or mobile application that the user has developed. One way of running the code would be to acquire virtual machine instances from service providers who provide infrastructure as a service, configure the virtual machine instances to suit the user's needs, and use the configured virtual machine instances to run the code. In order to avoid the complexity of this process, the user may alternatively provide the code to the on-demand code execution system 110, and request that the on-demand code execution system 110 execute the code. The on-demand code execution system 110 can handle the acquisition and configuration of compute capacity (e.g., containers, instances, etc., which are described in greater detail below) based on the code execution request, and execute the code using the compute capacity. The on-demand code execution system 110 may automatically scale up and down based on the volume of requests, thereby relieving the user from the burden of having to worry about over-utilization (e.g., acquiring too few computing resources and suffering performance issues) or under-utilization (e.g., acquiring more computing resources than necessary to run the codes, and thus overpaying). In accordance with embodiments of the present disclosure, and as described in more detail below, the on-demand code execution system 110 may configure the virtual machine instances with customized operating systems to execute the user's code more efficiently and reduce utilization of computing resources.


To enable interaction with the on-demand code execution system 110, the system 110 includes one or more frontends 120, which enable interaction with the on-demand code execution system 110. In an illustrative embodiment, the frontends 120 serve as a “front door” to the other services provided by the on-demand code execution system 110, enabling users (via user computing devices 102) to provide, request execution of, and view results of computer executable code. The frontends 120 include a variety of components to enable interaction between the on-demand code execution system 110 and other computing devices. For example, each frontend 120 may include a request interface providing user computing devices 102 with the ability to upload or otherwise communicate user-specified code to the on-demand code execution system 110 and to thereafter request execution of that code. In one embodiment, the request interface communicates with external computing devices (e.g., user computing devices 102, auxiliary services 106, etc.) via a graphical user interface (GUI), CLI, or API. The frontends 120 process the requests and make sure that the requests are properly authorized. For example, the frontends 120 may determine whether the user associated with the request is authorized to access the user code specified in the request.


References to user code as used herein may refer to any program code (e.g., a program, routine, subroutine, thread, etc.) written in a specific program language. In the present disclosure, the terms “code,” “user code,” and “program code,” may be used interchangeably. Such user code may be executed to achieve a specific function, for example, in connection with a particular web application or mobile application developed by the user. As noted above, individual collections of user code (e.g., to achieve a specific function) are referred to herein as “tasks,” while specific executions of that code (including, e.g., compiling code, interpreting code, or otherwise making the code executable) are referred to as “task executions” or simply “executions.” Tasks may be written, by way of non-limiting example, in JavaScript (e.g., node.js), Java, Python, and/or Ruby (and/or another programming language). Tasks may be “triggered” for execution on the on-demand code execution system 110 in a variety of manners. In one embodiment, a user or other computing device may transmit a request to execute a task, which can generally be referred to as a “call” to execute the task. Such calls may include the user code (or the location thereof) to be executed and one or more arguments to be used for executing the user code. For example, a call may provide the user code of a task along with the request to execute the task. In another example, a call may identify a previously uploaded task by its name or an identifier. In yet another example, code corresponding to a task may be included in a call for the task, as well as being uploaded in a separate location (e.g., storage of an auxiliary service 106 or a storage system internal to the on-demand code execution system 110) prior to the request being received by the on-demand code execution system 110. As noted above, the code for a task may reference additional code objects maintained at the on-demand code execution system 110 by use of identifiers of those code objects, such that the code objects are combined with the code of a task in an execution environment prior to execution of the task. The on-demand code execution system 110 may vary its execution strategy for a task based on where the code of the task is available at the time a call for the task is processed. A request interface of the frontend 120 may receive calls to execute tasks as Hypertext Transfer Protocol Secure (HTTPS) requests from a user. Also, any information (e.g., headers and parameters) included in the HTTPS request may also be processed and utilized when executing a task. As discussed above, any other protocols, including, for example, HTTP, MQTT, and CoAP, may be used to transfer the message containing a task call to the request interface 122.
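
For purposes of illustration only, the following sketch shows one possible shape of such a call that identifies a previously uploaded task and supplies arguments; the field names are hypothetical and do not describe a documented wire format.

    # Hypothetical shape of a "call" to execute a previously uploaded task.
    import json

    call = {
        "task": "resize-image",                  # identifies a previously uploaded task
        "arguments": {"bucket": "photos", "key": "cat.jpg", "width": 256},
        "debug": False,                          # optional execution-mode indicator
    }
    print(json.dumps(call, indent=2))            # body of an HTTPS request to the frontend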


A call to execute a task may specify one or more third-party libraries (including native libraries) to be used along with the user code corresponding to the task. In one embodiment, the call may provide to the on-demand code execution system 110 a file containing the user code and any libraries (and/or identifications of storage locations thereof) corresponding to the task requested for execution. In some embodiments, the call includes metadata that indicates the program code of the task to be executed, the language in which the program code is written, the user associated with the call, and/or the computing resources (e.g., memory, etc.) to be reserved for executing the program code. For example, the program code of a task may be provided with the call, previously uploaded by the user, provided by the on-demand code execution system 110 (e.g., standard routines), and/or provided by third parties. Illustratively, code not included within a call or previously uploaded by the user may be referenced within metadata of the task by use of a URI associated with the code. In some embodiments, such resource-level constraints (e.g., how much memory is to be allocated for executing a particular user code) are specified for the particular task, and may not vary over each execution of the task. In such cases, the on-demand code execution system 110 may have access to such resource-level constraints before each individual call is received, and the individual call may not specify such resource-level constraints. In some embodiments, the call may specify other constraints such as permission data that indicates what permissions or authorities the call invokes to execute the task. Such permission data may be used by the on-demand code execution system 110 to access private resources (e.g., on a private network). In some embodiments, individual code objects may also be associated with permissions or authorizations. For example, a third party may submit a code object and designate the object as readable by only a subset of users. The on-demand code execution system 110 may include functionality to enforce these permissions or authorizations with respect to code objects.


In some embodiments, a call may specify the behavior that should be adopted for handling the call. In such embodiments, the call may include an indicator for enabling one or more execution modes in which to execute the task referenced in the call. For example, the call may include a flag or a header for indicating whether the task should be executed in a debug mode in which the debugging and/or logging output that may be generated in connection with the execution of the task is provided back to the user (e.g., via a console user interface). In such an example, the on-demand code execution system 110 may inspect the call and look for the flag or the header, and if it is present, the on-demand code execution system 110 may modify the behavior (e.g., logging facilities) of the container in which the task is executed, and cause the output data to be provided back to the user. In some embodiments, the behavior/mode indicators are added to the call by the user interface provided to the user by the on-demand code execution system 110. Other features such as source code profiling, remote debugging, etc. may also be enabled or disabled based on the indication provided in a call.


To manage requests for code execution, the frontend 120 can include an execution queue (not shown in FIG. 1), which can maintain a record of requested task executions. Illustratively, the number of simultaneous task executions by the on-demand code execution system 110 is limited, and as such, new task executions initiated at the on-demand code execution system 110 (e.g., via an API call, via a call from an executed or executing task, etc.) may be placed on the execution queue 124 and processed, e.g., in a first-in-first-out order. In some embodiments, the on-demand code execution system 110 may include multiple execution queues, such as individual execution queues for each user account. For example, users of the on-demand code execution system 110 may desire to limit the rate of task executions on the on-demand code execution system 110 (e.g., for cost reasons). Thus, the on-demand code execution system 110 may utilize an account-specific execution queue to throttle the rate of simultaneous task executions by a specific user account. In some instances, the on-demand code execution system 110 may prioritize task executions, such that task executions of specific accounts or of specified priorities bypass or are prioritized within the execution queue. In other instances, the on-demand code execution system 110 may execute tasks immediately or substantially immediately after receiving a call for that task, and thus, the execution queue may be omitted.
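
The following non-limiting sketch illustrates one way an account-specific execution queue could throttle concurrent task executions; the limits, data structures, and method names are illustrative assumptions.

    # Sketch of an account-specific execution queue used for per-account throttling.
    from collections import deque

    class AccountQueues:
        def __init__(self, per_account_limit=2):
            self.limit = per_account_limit
            self.running = {}     # account -> count of in-flight executions
            self.queues = {}      # account -> FIFO of pending calls

        def submit(self, account, call):
            if self.running.get(account, 0) < self.limit:
                self.running[account] = self.running.get(account, 0) + 1
                return f"executing {call} now"
            self.queues.setdefault(account, deque()).append(call)
            return f"queued {call}"

        def finish(self, account):
            self.running[account] -= 1
            pending = self.queues.get(account)
            if pending:
                return self.submit(account, pending.popleft())

    if __name__ == "__main__":
        q = AccountQueues(per_account_limit=1)
        print(q.submit("acct-1", "call-A"))   # executes immediately
        print(q.submit("acct-1", "call-B"))   # throttled, placed on the queue
        print(q.finish("acct-1"))             # call-B is dequeued and executed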


As noted above, tasks may be triggered for execution at the on-demand code execution system 110 based on explicit calls from user computing devices 102 (e.g., as received at the request interface). Alternatively or additionally, tasks may be triggered for execution at the on-demand code execution system 110 based on data retrieved from one or more auxiliary services 106 or network-based data storage services 108. To facilitate interaction with auxiliary services 106, the frontend 120 can include a polling interface (not shown in FIG. 1), which operates to poll auxiliary services 106 or data storage services 108 for data. Illustratively, the polling interface may periodically transmit a request to one or more user-specified auxiliary services 106 or data storage services 108 to retrieve any newly available data (e.g., social network “posts,” news articles, files, records, etc.), and to determine whether that data corresponds to user-established criteria triggering execution of a task on the on-demand code execution system 110. Illustratively, criteria for execution of a task may include, but are not limited to, whether new data is available at the auxiliary services 106 or data storage services 108, the type or content of the data, or timing information corresponding to the data. In some instances, the auxiliary services 106 or data storage services 108 may function to notify the frontend 120 of the availability of new data, and thus the polling service may be unnecessary with respect to such services.
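
A minimal, non-limiting sketch of such a polling loop follows; the fetch function and criterion are placeholders rather than real service APIs.

    # Illustrative polling loop: fetch newly available items and trigger a task
    # when a user-established criterion matches.
    def poll_and_trigger(fetch_new_items, criterion, trigger_task):
        for item in fetch_new_items():
            if criterion(item):
                trigger_task(item)

    if __name__ == "__main__":
        items = [{"type": "file", "name": "report.csv"}, {"type": "post", "name": "hello"}]
        poll_and_trigger(
            fetch_new_items=lambda: items,
            criterion=lambda item: item["type"] == "file",   # e.g., "new file arrived"
            trigger_task=lambda item: print("triggering task for", item["name"]),
        )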


In addition to tasks executed based on explicit user calls and data from auxiliary services 106, the on-demand code execution system 110 may in some instances operate to trigger execution of tasks independently. For example, the on-demand code execution system 110 may operate (based on instructions from a user) to trigger execution of a task at each of a number of specified time intervals (e.g., every 10 minutes).


The frontend 120 can further include an output interface (not shown in FIG. 1) configured to output information regarding the execution of tasks on the on-demand code execution system 110. Illustratively, the output interface may transmit data regarding task executions (e.g., results of a task, errors related to the task execution, or details of the task execution, such as total time required to complete the execution, total data processed via the execution, etc.) to the user computing devices 102 or to auxiliary services 106, which may include, for example, billing or logging services. The output interface may further enable transmission of data, such as service calls, to auxiliary services 106. For example, the output interface may be utilized during execution of a task to transmit an API request to an external service 106 (e.g., to store data generated during execution of the task).


In some embodiments, the on-demand code execution system 110 may include multiple frontends 120. In such embodiments, a load balancer (not shown in FIG. 1) may be provided to distribute the incoming calls to the multiple frontends 120, for example, in a round-robin fashion. In some embodiments, the manner in which the load balancer distributes incoming calls to the multiple frontends 120 may be based on the location or state of other components of the on-demand code execution system 110. For example, a load balancer may distribute calls to a geographically nearby frontend 120, or to a frontend with capacity to service the call. In instances where each frontend 120 corresponds to an individual instance of another component of the on-demand code execution system, such as the active pool 140A described below, the load balancer may distribute calls according to the capacities or loads on those other components. As will be described in more detail below, calls may in some instances be distributed between frontends 120 deterministically, such that a given call to execute a task will always (or almost always) be routed to the same frontend 120. This may, for example, assist in maintaining an accurate execution record for a task, to ensure that the task executes only a desired number of times. While distribution of calls via a load balancer is illustratively described, other distribution techniques, such as anycast routing, will be apparent to those of skill in the art.


To execute tasks, the on-demand code execution system 110 includes one or more worker managers 140 that manage the instances used for servicing incoming calls to execute tasks. In the example illustrated in FIG. 1, each worker manager 140 manages an active pool of virtual machine instances 154A-C, which are currently assigned to one or more users and are implemented by one or more physical host computing devices 150A-B. The physical host computing devices 150A-B and the virtual machine instances 154A-C may further implement one or more containers 158A-F, which may contain and execute one or more user-submitted codes 160A-G. Containers are logical units created within a virtual machine instance, or on a host computing device, using the resources available on that instance or device. For example, each worker manager 140 may, based on information specified in a call to execute a task, create a new container or locate an existing container 158A-F and assign the container to handle the execution of the task.


The containers 158A-F, virtual machine instances 154A-C, and host computing devices 150A-B may further include language runtimes, code libraries, or other supporting functions (not depicted in FIG. 1) that facilitate execution of user-submitted code 160A-G. The physical computing devices 150A-B and the virtual machine instances 154A-C may further include operating systems 152A-B and 156A-C. In various embodiments, operating systems 152A-B and 156A-C may be the same operating system, variants of the same operating system, different operating systems, or combinations thereof.


Although the virtual machine instances 154A-C are described here as being assigned to a particular user, in some embodiments, an instance 154A-C may be assigned to a group of users, such that the instance is tied to the group of users and any member of the group can utilize resources on the instance. For example, the users in the same group may belong to the same security group (e.g., based on their security credentials) such that executing one member's task in a container on a particular instance after another member's task has been executed in another container on the same instance does not pose security risks. Similarly, the worker managers 140 may assign the instances and the containers according to one or more policies that dictate which requests can be executed in which containers and which instances can be assigned to which users. An example policy may specify that instances are assigned to collections of users who share the same account (e.g., account for accessing the services provided by the on-demand code execution system 110). In some embodiments, the requests associated with the same user group may share the same containers (e.g., if the user codes associated therewith are identical). In some embodiments, a task does not differentiate between the different users of the group and simply indicates the group to which the users associated with the task belong.


Once a triggering event to execute a task has been successfully processed by a frontend 120, the frontend 120 passes a request to a worker manager 140 to execute the task. In one embodiment, each frontend 120 may be associated with a corresponding worker manager 140 (e.g., a worker manager 140 co-located or geographically nearby to the frontend 120) and thus the frontend 120 may pass most or all requests to that worker manager 140. In another embodiment, a frontend 120 may include a location selector configured to determine a worker manager 140 to which to pass the execution request. In one embodiment, the location selector may determine the worker manager 140 to receive a call based on hashing the call, and distributing the call to a worker manager 140 selected based on the hashed value (e.g., via a hash ring). Various other mechanisms for distributing calls between worker managers 140 will be apparent to one of skill in the art. In accordance with embodiments of the present disclosure, the worker manager 140 can determine a host computing device 150A-B or a virtual machine instance 154A-C for executing a task in accordance with a recommendation from an emulation provisioning system 170.
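
As a non-limiting illustration of hash-based distribution, the following sketch maps a call key to a worker manager so that a given call consistently routes to the same manager; a full consistent-hash ring would also handle managers joining and leaving, which is omitted here.

    # Minimal sketch of hash-based call distribution to worker managers.
    import hashlib

    def select_worker_manager(call_key, managers):
        digest = hashlib.sha256(call_key.encode()).hexdigest()
        return managers[int(digest, 16) % len(managers)]

    if __name__ == "__main__":
        managers = ["worker-manager-1", "worker-manager-2", "worker-manager-3"]
        # The same task call always routes to the same manager, which aids in
        # maintaining an accurate execution record for the task.
        print(select_worker_manager("account-42/resize-image", managers))
        print(select_worker_manager("account-42/resize-image", managers))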


The on-demand code execution system 110 further includes an emulation provisioning system 170, which implements aspects of the present disclosure including, for example, the determination of how to provide functionality that may be required for a particular task. In some embodiments, the emulation provisioning system 170 includes a code analyzer 172, which may be invoked when the user submits code via the frontend 120 to statically analyze submitted code and determine functionality that is required by the submitted code. As described in more detail below, the code analyzer 172 may analyze the user's code and identify, for example, API calls, operating system calls, function calls, or other indications of functionality that the code will require during execution. In various embodiments, the code analyzer 172 may analyze keywords, symbols, headers, directives, or other aspects of the user's code. In further embodiments, the on-demand code execution system 110 includes an execution analyzer 174, which may be invoked when the user's code is executed to analyze the performance of the executing code and the functionality that is actually utilized during execution of the code. The execution analyzer 174 may identify, for example, a portion of the source code that requires specific functionality, but is seldom or never reached during execution. In further embodiments, the emulation provisioning system 170 may include a computing resource data store 176, which may store information regarding the functionality that is provided by various host computing devices 150A-B or is emulated by various virtual machine instances 154A-C.


As shown in FIG. 1, various combinations and configurations of host computing devices 150A-B, virtual machine instances 154A-C, and containers 158A-F may be used to facilitate execution of user submitted code 160A-G. In the illustrated example, the host computing device 150A implements two virtual machine instances 154A and 154B. Virtual machine instance 154A, in turn, implements two containers 158A and 158B, which contain user-submitted code 160A and 160B respectively. Virtual machine instance 154B implements a single container 158C, which contains user-submitted code 160C. The host computing device 150B further implements a virtual machine instance 154C and directly implements containers 158E and 158F, which contain user-submitted code 160F and 160G. The virtual machine instance 154C, in turn, implements container 158D, which contains user-submitted codes 160D and 160E. It will be understood that these embodiments are illustrated for purposes of example, and that many other embodiments are within the scope of the present disclosure.


While some functionalities are generally described herein with reference to an individual component of the on-demand code execution system 110, other components or a combination of components may additionally or alternatively implement such functionalities. For example, a worker manager 140 may operate to provide functionality associated with execution of user-submitted code as described herein with reference to an emulation provisioning system 170.



FIG. 2 depicts a general architecture of a computing system (referenced as emulation provisioning system 170) that operates to determine how functionality used by a particular task should be provided within the on-demand code execution system 110. The general architecture of the emulation provisioning system 170 depicted in FIG. 2 includes an arrangement of computer hardware and software modules that may be used to implement aspects of the present disclosure. The hardware modules may be implemented with physical electronic devices, as discussed in greater detail below. The emulation provisioning system 170 may include many more (or fewer) elements than those shown in FIG. 2. It is not necessary, however, that all of these generally conventional elements be shown in order to provide an enabling disclosure. Additionally, the general architecture illustrated in FIG. 2 may be used to implement one or more of the other components illustrated in FIG. 1. As illustrated, the emulation provisioning system 170 includes a processor 202, input/output device interfaces 204, a network interface 206, and a data store 208, all of which may communicate with one another by way of a communication bus. The network interface 206 may provide connectivity to one or more networks or computing systems. The processor 202 may thus receive information and instructions from other computing systems or services via the network 104. The processor 202 may also communicate to and from a memory 220 and further provide output information for an optional display (not shown) via the input/output device interfaces 204. The input/output device interfaces 204 may also accept input from an optional input device (not shown).


The memory 220 may contain computer program instructions (grouped as modules in some embodiments) that the processor 202 executes in order to implement one or more aspects of the present disclosure. The memory 220 generally includes random access memory (RAM), read only memory (ROM) and/or other persistent, auxiliary or non-transitory computer readable media. The memory 220 may store an operating system 222 that provides computer program instructions for use by the processor 202 in the general administration and operation of the emulation provisioning system 170. The memory 220 may further include computer program instructions and other information for implementing aspects of the present disclosure. For example, in one embodiment, the memory 220 includes a user interface module 224 that generates user interfaces (and/or instructions therefor) for display upon a computing device, e.g., via a navigation and/or browsing interface such as a browser or application installed on the computing device. In addition, the memory 220 may include and/or communicate with one or more data repositories (not shown), for example, to access user program codes and/or libraries.


In addition to and/or in combination with the user interface module 224, the memory 220 may include a code analyzer 172 and an execution analyzer 174 that may be executed by the processor 202. In one embodiment, the code analyzer 172 and the execution analyzer 174 individually or collectively implement various aspects of the present disclosure, e.g., analyzing code or code execution to determine needed functionality and provide that functionality efficiently, as described further below.


While the code analyzer 172 and the execution analyzer 174 are shown in FIG. 2 as part of the emulation provisioning system 170, in other embodiments, all or a portion of the code analyzer 172 and the execution analyzer 174 may be implemented by other components of the on-demand code execution system 110 and/or another computing device. For example, in certain embodiments of the present disclosure, another computing device in communication with the on-demand code execution system 110 may include several modules or components that operate similarly to the modules and components illustrated as part of the emulation provisioning system 170.


The memory 220 may further include user-submitted code 160, which may be loaded into memory in conjunction with a user-submitted request to execute a task on the on-demand code execution system 110. The code 160 may be illustratively analyzed by the code analyzer 172 to identify needed functionality, as described in more detail below. The memory 220 may further include execution performance metrics 226, which may be collected from physical or virtual machines as the code 160 is executed on these platforms, and may be analyzed by the execution analyzer 174.


In some embodiments, the emulation provisioning system 170 may further include components other than those illustrated in FIG. 2. For example, the memory 220 may further include computing resource information that identifies the functionality provided by various physical and virtual computing resources that are available for executing the user-submitted code 160, or may include metadata or other information that was submitted with the request, such as an indication that the user-submitted code 160 was compiled for execution on a computing resource that provided certain functionality. FIG. 2 is thus understood to be illustrative but not limiting.



FIG. 3 depicts illustrative interactions for determining the computing resources to use when executing a task based on an analysis of the user-submitted code for the task. At (1), a user device 102 submits a request to execute a task to a frontend 120 of an on-demand code execution system. The request may include user-submitted code, or in some embodiments may identify user code that has been previously submitted. At (2), the frontend 120 requests that the code analyzer 172 analyze the user-submitted code to identify functionality that the code may require during execution. Illustratively, the user-submitted code may take advantage of a particular instruction set if it is available, such as an instruction set that implements vector instructions, floating point instructions, fused multiply-add instructions, neural network instructions, tensor processing instructions, single instruction multiple data (“SIMD”) instructions, cryptography instructions, or the like. A physical processor may implement one or more of these instruction sets. A virtual processor in a virtual machine may also implement one or more of these instruction sets, with varying performance results depending on the interactions between the virtual machine and the underlying physical computing resources.


At (3), the code analyzer 172 may request computing resource data from the computing resource data store 176. The computing resource data may indicate, for example, particular computing resources that are available within the on-demand code execution system 110 for executing the user-submitted code, and may further indicate the functionality associated with these computing resources, the performance of these computing resources when providing specified functionality, and other parameters or information that enable the code analyzer 172 to determine recommended computing resources. At (4), the computing resource data store 176 may provide the requested computing resource data. In some embodiments, the computing resource data store 176 may only provide data regarding available computing resources. In other embodiments, the computing resource data store 176 may provide data regarding computing resources that are unavailable (e.g., because they are currently being used to execute other user-submitted code), and the code analyzer 172 may determine whether to make these resources available by, for example, migrating other tasks.


At (5), the code analyzer 172 may determine a set of computing resources to use when executing the user-submitted code. In some embodiments, the code analyzer 172 may generate a recommendation that the worker manager 140 may optionally implement, depending on resource availability, prioritization of requested tasks, and other factors. In other embodiments, the code analyzer 172 may consider some or all of these factors when making its determination, and may determine a set of computing resources for the worker manager 140 to allocate. The code analyzer 172 may analyze instructions, operations, functions, API calls, libraries, or other aspects of the user-submitted code to identify functionality that the code may use, and may identify computing resources that provide this functionality. For example, the code analyzer 172 may analyze the user-submitted code and identify that it has been compiled for execution on a particular processor (e.g., by setting particular flags at compile time), or that it includes a library that makes frequent use of floating point arithmetic. In some embodiments, the code analyzer 172 may obtain information regarding previous executions of the user-submitted code and determine the functionality used by the code on that basis. In further embodiments, the code analyzer 172 may obtain information regarding other code submitted by the same user, and may assess whether the user's submissions frequently make use of particular functionality. The code analyzer 172 may further analyze data such as user priorities or preferences, and may analyze computing resource data to consider factors such as limited availability of particular resources (which may be expressed as a resource cost), overall demand for certain computing resources, prioritization of requests and tasks, or other considerations.
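
By way of non-limiting illustration, the following greatly simplified sketch shows the kind of static scan a code analyzer might perform, looking for imports or keywords that suggest the code would benefit from particular hardware; the keyword table and mappings are hypothetical.

    # Greatly simplified sketch of a static scan for functionality hints.
    import re

    HINTS = {
        r"\bimport\s+numpy\b|\bBLAS\b": "vector/floating-point instruction set",
        r"\btensorflow\b|\btorch\b": "neural network / tensor instructions or GPU",
        r"\bAES\b|\bhashlib\b": "cryptography instruction set extension",
    }

    def analyze(source):
        needed = set()
        for pattern, functionality in HINTS.items():
            if re.search(pattern, source):
                needed.add(functionality)
        return sorted(needed)

    if __name__ == "__main__":
        code = "import numpy as np\nimport hashlib\n..."
        print(analyze(code))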


At (6), the code analyzer 172 provides the resource recommendation to the frontend 120, and at (7) the frontend 120 provides the resource recommendation and the user-submitted code to the worker manager 140. In some embodiments, the interaction at (6) may be omitted and the code analyzer 172 may provide a resource recommendation directly to the worker manager 140. In further embodiments, the interactions at (6) and (7) may be combined and the code analyzer 172 may provide both the user-submitted code and the recommendation to the worker manager 140. In still further embodiments, the worker manager 140 may carry out the interaction at (2) and request a resource recommendation from the code analyzer 172. In various embodiments, the frontend 120 or the code analyzer 172 may provide an identifier or other information that allows the worker manager 140 to obtain the user-submitted code rather than providing the user-submitted code directly.


At (8), in some embodiments, the worker manager 140 may determine the availability of resources that were recommended by the code analyzer 172. In some embodiments, the code analyzer 172 may provide a prioritized or ordered list of potential computing resources for executing the user-submitted code, and the worker manager 140 may determine a “best available” resource by comparing the prioritized list to the available resource pool. In other embodiments, the code analyzer 172 may provide scores or weighting factors for various potential computing resources (or for the relative priority of the task), and the worker manager 140 may determine a computing resource based on these factors. In further embodiments, as described above, the code analyzer 172 may consider resource availability when determining a set of computing resources, and the interaction at (8) may be omitted or may be a determination that tasks must be migrated from one resource to another to free up the resources that will be used to execute the newly submitted task.
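
A minimal sketch of the “best available” selection described above, assuming a prioritized list of resource identifiers and a set of currently available resources (both hypothetical):

    def best_available(prioritized: list, available_pool: set):
        """Return the highest-priority recommended resource that is currently in
        the available pool, or None if none of the recommendations are available.
        (Illustrative only; a worker manager could weigh many additional factors.)"""
        for resource_id in prioritized:  # ordered from most to least preferred
            if resource_id in available_pool:
                return resource_id
        return None

    # Example: the top recommendation is busy, so the second choice is selected.
    print(best_available(["host-a", "vm-b", "host-c"], {"vm-b", "host-c"}))  # vm-b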


At (9), the worker manager 140 may allocate the computing resources that will be used to execute the user-submitted code. In various embodiments, allocating the computing resources may include allocating a host computing device 150A, allocating an existing virtual machine instance (not shown in FIG. 3), instantiating a new virtual machine instance (not shown in FIG. 3), or combinations thereof. In some embodiments, as described above, the worker manager 140 may allocate computing resources other than those recommended by the code analyzer 172. For example, if the computing resources recommended by the code analyzer 172 are not available, then the worker manager 140 may determine alternate resources based on the recommendation or on resource availability. At (10), the host computing device 150A or other allocated computing resource executes the user-submitted code.


In some embodiments, the ordering and implementation of operations described above may be modified, or these interactions may be carried out by additional or alternative elements of the on-demand code execution system 110. For example, in some embodiments, the worker manager 140 may be configured to analyze performance metrics and request a resource recommendation from the execution analyzer 174 in response to the metrics satisfying certain criteria. As a further example, code analysis may be carried out prior to receiving a request to execute user-submitted code, and the results of such analysis may be stored (e.g., in the computing resource data store 176) for later use when code execution is requested.



FIG. 4 depicts illustrative interactions for determining the computing resources to use when executing a task based on an analysis of a current or previous execution of the task. At (1), a computing resource that is executing the task, such as a host computing device 150A or a virtual machine instance executing on the host computing device 150A, collects performance metrics regarding the execution of the task. Performance metrics may include, for example, the number of processor instructions executed per clock cycle, which may provide an indication of how efficiently a virtual machine instance is emulating a processor that is not physically provided. In various embodiments, performance metrics may include measurements such as total execution time, computing resources utilized (or not utilized) during execution, and the like. At (2), the computing resource reports the execution metrics to the execution analyzer 174. In various embodiments, the computing resource may report metrics during execution of the task or after its completion.
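
For illustration, performance metrics of this kind might be represented as follows; the field names and the instructions-per-cycle calculation are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class ExecutionMetrics:
        """A few of the performance metrics mentioned above, gathered during or
        after an execution.  Field names are illustrative."""
        instructions_retired: int
        clock_cycles: int
        wall_time_seconds: float

        @property
        def instructions_per_cycle(self) -> float:
            # A low value on a virtual processor may indicate costly emulation of an
            # instruction set that the underlying physical processor does not implement.
            return self.instructions_retired / self.clock_cycles

    m = ExecutionMetrics(instructions_retired=2_000_000, clock_cycles=8_000_000,
                         wall_time_seconds=0.004)
    print(round(m.instructions_per_cycle, 2))  # 0.25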


At (3), the execution analyzer 174 may determine recommended computing resources. Illustratively, the execution analyzer 174 may determine, based on the execution metrics, that a different set of computing resources could execute the code more efficiently. For example, the execution analyzer 174 may determine that the code is performing a number of operations that would execute more efficiently on a different computing resource (e.g., a processor that implements a particular instruction set). As a further example, the execution analyzer 174 may determine that the code is making little or no use of a computing resource, and thus may be migrated to a different computing resource without significant effect on performance. In some embodiments, the execution analyzer 174 may obtain information from the computing resource data store 176 regarding other computing resources that may be available for executing the code, and may estimate the performance of executing the code on those resources relative to the received performance metrics.


At (4), the execution analyzer 174 may store updated computing resource data to the computing resource data store 176. Illustratively, the execution analyzer 174 may store that a particular set of computing resources executed the code with a particular degree of efficiency (or inefficiency) based on the collected performance metrics, and this information may be used to refine subsequent analyses by the execution analyzer 174 or the code analyzer 172. In some embodiments, as discussed above, the execution analyzer 174 may store that the computing resources were underutilized, or that particular functionality was needed but absent, when executing the code. In some embodiments, the execution analyzer 174 may analyze the results of multiple executions of various user-submitted code to identify trends or patterns that may facilitate allocation of computing resources to the execution of user-submitted code.


At (5), in some embodiments, the execution analyzer 174 may provide an updated resource recommendation to the worker manager 140. In some embodiments, as discussed above, the execution analyzer 174 may consider factors such as resource availability and prioritization, and may instruct the worker manager 140 to migrate the user-submitted code to a different set of computing resources. In other embodiments, the execution analyzer 174 may provide a recommendation and the worker manager 140 may determine whether to implement the recommendation. In such embodiments, at (6), the worker manager 140 may determine whether computing resources are available, or can be made available, to implement the recommendation of the execution analyzer 174. In some embodiments, the worker manager 140 or the execution analyzer 174 may consider whether the estimated performance gain to be realized by migrating executing code to a different set of computing resources outweighs any costs associated with the migration, such as the transfer of execution states or costs associated with freeing up the resources. In further embodiments, the worker manager 140 or the execution analyzer 174 may aggregate or prioritize recommendations to ensure that resources are allocated efficiently overall.
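
The cost-benefit consideration described above might, in a simplified sketch, be expressed as a comparison of estimated gain against migration cost; the scoring units and safety margin below are hypothetical.

    def should_migrate(current_score: float, candidate_score: float,
                       migration_cost: float, margin: float = 1.1) -> bool:
        """Illustrative cost-benefit test: migrate only if the estimated gain from
        the candidate resources exceeds the migration cost by a safety margin.
        All quantities are expressed in the same arbitrary performance units."""
        estimated_gain = candidate_score - current_score
        return estimated_gain > migration_cost * margin

    # Example: a modest gain does not justify an expensive state transfer.
    print(should_migrate(current_score=10.0, candidate_score=12.0, migration_cost=3.0))  # False
    print(should_migrate(current_score=10.0, candidate_score=18.0, migration_cost=3.0))  # True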


At (7), the worker manager 140 may migrate execution of the user-submitted code from one set of computing resources to another. For example, the worker manager 140 may migrate execution of the code from a host computing device 150A to another host computing device 150B that implements different functionality. In various embodiments, the worker manager 140 may migrate the code execution from one physical computing device to another, from one virtual machine instance to another, from a physical computing device to a virtual machine instance (or vice versa), or combinations thereof. In some embodiments, the worker manager 140 may migrate the code execution by suspending execution on the host computing device 150A, migrating the code and state information to the host computing device 150B, and then resuming code execution on the host computing device 150B and releasing the resources on the host computing device 150A that were executing the code. In other embodiments, the worker manager 140 may migrate the code execution by terminating an in-progress execution on the host computing device 150A and starting over on the host computing device 150B. In further embodiments, the worker manager may perform a “live” migration, and may begin execution on the host computing device 150B without suspending execution on the host computing device 150A, or may execute on both devices 150A-B in parallel for a time before completing the migration. Other embodiments of migrating executing code will be understood to be within the scope of the disclosure. At (8), the host computing device 150A may suspend its execution of the user-submitted code, and at (9) the host computing device 150B (or other physical or virtual computing device) may resume execution of the code from the point at which execution was suspended.
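
A simplified sketch of the suspend-transfer-resume strategy described above (one of several migration strategies); the Host class and its methods are hypothetical stand-ins for a host computing device or virtual machine instance.

    class Host:
        """Hypothetical stand-in for a host computing device or virtual machine instance."""
        def __init__(self, name: str):
            self.name = name
            self.state = "initial-execution-state"  # placeholder for captured execution state

        def suspend(self) -> str:
            print(f"{self.name}: execution suspended")
            return self.state

        def resume(self, state: str) -> None:
            self.state = state
            print(f"{self.name}: execution resumed from migrated state")

        def release(self) -> None:
            self.state = ""
            print(f"{self.name}: resources released")

    def migrate(source: Host, target: Host) -> None:
        """Suspend execution on the source, transfer the captured state to the
        target, resume on the target, then release the source's resources."""
        state = source.suspend()
        target.resume(state)
        source.release()

    migrate(Host("host-150A"), Host("host-150B"))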


In some embodiments, the execution analyzer 174 may determine a set of computing resources to be used when the user-submitted code is next executed, and may provide this recommendation to the code analyzer 172, worker manager 140, or store it in the computing resource data store 176. Additionally, in some embodiments, the execution analyzer 174 may make a determination or recommendation at the start of a subsequent execution of the code based on performance metrics collected during the previous execution(s), which may supplement or replace a code-based analysis for the subsequent executions.


In some embodiments, the reporting of execution metrics at (2) may be carried out continuously during execution of the user-submitted code, and the execution analyzer 174 may dynamically analyze whether the user-submitted code requires different computing resources at various points during its execution. For example, the execution analyzer 174 may determine that the user-submitted code has begun or ended a phase of execution that makes use of certain functionality, and may recommend making that functionality available or indicate that the functionality is no longer required. Additionally, in some embodiments, the worker manager 140 may request that the execution analyzer 174 provide a recommendation for a particular computing resource (e.g., a computing resource that has become available), and may receive a recommendation to migrate execution of a particular user-submitted code to the resource.



FIG. 5 is a flow diagram of an illustrative routine 500 for determining a set of computing resources to recommend based on an analysis of user-submitted code. The routine 500 may be carried out, for example, by the code analyzer 172 of FIG. 1. The routine 500 begins at block 502, where code for a task (e.g., as submitted by a user) may be obtained. In one embodiment, the code for the task is represented as a code object, such as a compressed file including source code for the task. At block 504, the code is analyzed to identify functionality that may be required during code execution. As described above, the code may be analyzed with regard to libraries, programming language features, API calls, or other features to identify functionality that may be advantageous to provide during code execution. In some embodiments, the code may be compared to other code executed by the on-demand code execution system 110, and common features or similarities may be identified.


At block 506, available computing resources may be identified. In some embodiments, as described above, computing resources may be identified regardless of their availability. In various embodiments, availability may be considered both in terms of availability within a particular data center (e.g., the resources are present) and availability for use (e.g., the resources are idle or can be freed). At block 508, sets of computing resources that provide the identified functionality may be identified as potential candidates for executing the user-submitted code. A set of computing resources may include, for example, various physical or virtual resources such as a processor, memory, interfaces, data stores, and the like. In some embodiments, a virtual computing resource may be associated with a particular host computing environment. For example, a virtual processor may be associated with the underlying physical processor in order to assess the performance of the virtual processor when providing the required functionality.


At block 510, a set of resources that has not yet been analyzed by this execution of the routine 500 may be selected. At block 512, the performance of this set of resources may be estimated with regard to executing the user-submitted code. The performance may be estimated, for example, based on benchmarks, metrics, previous executions, or other criteria. In various embodiments, performance estimates may be expressed in terms of numerical scores, grades, categories (e.g., high, medium, and low), or other formats that enable comparison.
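
For example, a numerical performance estimate might be mapped onto coarse categories as follows; the thresholds are arbitrary and purely illustrative.

    def performance_category(score: float) -> str:
        """Map a numeric performance estimate onto the coarse categories
        mentioned above (thresholds are arbitrary and illustrative)."""
        if score >= 0.8:
            return "high"
        if score >= 0.5:
            return "medium"
        return "low"

    # Example estimates for three candidate resource sets:
    for name, score in [("physical SIMD host", 0.9), ("emulated SIMD VM", 0.55), ("generic VM", 0.2)]:
        print(name, "->", performance_category(score))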


At decision block 514, a determination may be made as to whether all of the candidate sets of computing resources have been analyzed by the routine 500. If not, then the routine 500 branches to block 510 and iterates through blocks 512 and 514 until all sets of computing resources have been analyzed. Once all resource sets have been analyzed, the routine 500 branches to block 516, where a recommended set of computing resources may be determined based on the performance estimates. In some embodiments, the set of computing resources having the highest estimated performance may be recommended. In other embodiments, the sets of computing resources may be associated with cost, scarcity, or other criteria that may be factored into the recommendation. For example, a scarce computing resource may be allocated only if the difference in performance relative to a widely available computing resource exceeds a threshold.
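
A minimal sketch of the scarcity-aware selection described above, assuming each candidate set of resources is summarized by a name, an estimated performance, and a scarcity flag (all hypothetical):

    def recommend(candidates: list, threshold: float = 0.15) -> str:
        """Pick a recommended resource set from (name, estimated_perf, is_scarce)
        tuples.  A scarce resource is recommended only if it beats the best
        widely available option by more than the threshold (illustrative rule)."""
        common = [c for c in candidates if not c[2]]
        scarce = [c for c in candidates if c[2]]
        best_common = max(common, key=lambda c: c[1])
        best_scarce = max(scarce, key=lambda c: c[1], default=None)
        if best_scarce and best_scarce[1] - best_common[1] > threshold:
            return best_scarce[0]
        return best_common[0]

    candidates = [("general-purpose host", 0.70, False),
                  ("hardware accelerator", 0.78, True)]  # only 0.08 better: not worth allocating
    print(recommend(candidates))  # general-purpose host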


At block 518, the code may be executed using the recommended set of computing resources. In some embodiments, as described above, the recommended set of computing resources may be one factor in the allocation of computing resources, and may be weighed along with other considerations such as resource availability or cost. In other embodiments, a prioritized list of candidate computing resources may be output and may be used to determine the computing resources to use when executing the code. In further embodiments, the routine 500 may determine that a particular resource should be made available, and may identify another task (and associated user-submitted code) to migrate in order to free the particular resource based on the estimated or measured performance of various tasks that are executing on the resource to be made available.



FIG. 6 is a flow diagram of an illustrative routine 600 for determining a set of computing resources to recommend based on an analysis of execution performance metrics. The routine 600 may be carried out, for example, by the execution analyzer 174 of FIG. 1. The routine 600 begins at block 602, where performance metrics may be obtained relating to the execution of user-submitted code for a specified task on a particular set of computing resources. In some embodiments, as described above, the performance metrics may be obtained during execution of the code. In further embodiments, performance metrics may be obtained periodically, in response to various events (e.g., computing resources becoming available or unavailable, the code invoking a particular library or API call, etc.), or based on other criteria. In other embodiments, the performance metrics may be obtained after the code has completed an execution.


At block 604, the performance metrics may be analyzed. In some embodiments, the performance metrics may be compared to a threshold or other criterion to assess whether the performance of the set of computing resources is satisfactory. For example, the performance metrics may be used to assess whether a number of instructions executed per clock cycle satisfies a threshold, or to assess whether code execution is completed within a time interval. In other embodiments, the performance metrics may be analyzed relative to other performance metrics, such as metrics obtained when executing the same or similar code on a different set of computing resources, metrics obtained when executing other code on the same set of computing resources, or an average or baseline performance metric.
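
As an illustrative sketch, such an analysis might combine a threshold on instructions per clock cycle, a limit on execution time, and a comparison against a baseline; the specific values below are assumptions, not requirements of any embodiment.

    def performance_acceptable(metrics: dict, baseline: dict,
                               ipc_threshold: float = 0.5,
                               max_seconds: float = 1.0) -> bool:
        """Illustrative checks of the kind described above: compare instructions
        per clock cycle against a threshold, execution time against a limit, and
        the measured value against a baseline for the same or similar code."""
        ipc = metrics["instructions"] / metrics["cycles"]
        if ipc < ipc_threshold:
            return False
        if metrics["seconds"] > max_seconds:
            return False
        return ipc >= 0.9 * baseline.get("ipc", ipc)  # within 10% of the baseline

    print(performance_acceptable({"instructions": 4_000_000, "cycles": 5_000_000, "seconds": 0.2},
                                 baseline={"ipc": 0.85}))  # True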


At decision block 606, a determination may be made as to whether the relative performance of the set of computing resources when executing the user-submitted code is acceptable. If the relative performance is comparable to or better than a baseline, then the routine 600 may end without taking any measures to improve performance. In some embodiments, the routine 600 may branch to block 618 and store performance metrics or other information if the relative performance is better than average, and may use this information when making further recommendations as to which computing resources to use when executing user-submitted code for a particular task. In other embodiments, the routine 600 may branch to block 608 and consider migrating the task to other computing resources if the relative performance is higher than it needs to be. For example, the user may require that a particular task be completed within a specified amount of time, and the set of computing resources may enable completion of the task far more quickly than the user requires. The determination at decision block 606 may thus be that the task could be completed on a slower set of computing resources and still meet the user's performance requirements, and so the task should be migrated in order to free up the faster computing resources for more time-critical tasks.
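
The "faster than required" case described above might, as a simplified sketch, be checked as follows; the timing values are hypothetical.

    def can_downgrade(measured_seconds: float, required_seconds: float,
                      slower_estimate_seconds: float) -> bool:
        """Illustrative check for the case described above: if the task finishes
        far faster than the user requires, and a slower resource set would still
        meet the requirement, the task is a candidate for migration so the faster
        resources can serve more time-critical work."""
        return (measured_seconds < required_seconds
                and slower_estimate_seconds <= required_seconds)

    # Task must finish within 10 s; it currently finishes in 2 s, and a slower set
    # of resources is estimated to finish in 7 s, so a downgrade is possible.
    print(can_downgrade(measured_seconds=2.0, required_seconds=10.0,
                        slower_estimate_seconds=7.0))  # True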


If the determination at decision block 606 is that execution of the user-submitted code should be migrated to a different set of computing resources, then the routine 600 branches to block 608, where available computing resources may be identified. At block 610, the available computing resources may be analyzed to identify an alternate set of computing resources that may be used to execute the task, and at block 612 the performance of the alternate set of computing resources when executing the task may be estimated. In some embodiments, multiple sets of computing resources may be identified and analyzed, and block 612 may be carried out iteratively for each candidate set. In other embodiments, an alternate set of computing resources may be identified based on the relative performance of the current set of computing resources.


At decision block 614, a determination may be made as to whether execution of the user-submitted code should be migrated from the current set of computing resources to the alternate set of computing resources. In some embodiments, the determination may be as to whether the alternate set of computing resources would provide better performance based on the obtained and estimated performance metrics. For example, the determination may be that the alternate set of computing resources would execute more instructions per clock cycle than the current set of computing resources, and thus would lead to an improvement in performance. In other embodiments, the determination may be as to whether the alternate set of computing resources provides acceptable performance based on criteria such as execution time, cost, resource utilization, or other metrics. In these embodiments, the current set of computing resources may provide higher performance than the alternate set of computing resources, but the other factors discussed above may lead to a determination to use the alternate set of computing resources. If the determination at decision block 614 is that the execution should not be migrated, then the routine 600 ends.


If the determination at decision block 614 is that the code execution should be migrated, then at block 616 the code execution is migrated to the alternate set of computing resources. At block 618, the obtained or estimated performance metrics may be stored to improve the accuracy of estimates or to improve decision-making when initially allocating computing resources, as discussed above. The routine 600 then ends.


The blocks of the routines described above may vary in embodiments of the present disclosure. For example, in some implementations of either routine, the identification of available computing resources may be deferred or delegated to the worker manager 140, and the routine 500 or 600 may provide a recommendation that the worker manager 140 determines whether to implement, based on factors such as resource availability and the cost-benefit of migrating tasks from one set of resources to another. The routines may further include additional blocks, or the blocks of the routines may be rearranged, according to various embodiments.


It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein.


All of the processes described herein may be embodied in, and fully automated via, software code modules, including one or more specific computer-executable instructions, that are executed by a computing system. The computing system may include one or more computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all of the methods may be embodied in specialized computer hardware.


Many other variations than those described herein will be apparent from this disclosure. For example, depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.


The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a processing unit or processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.


Conditional language such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, are otherwise understood within the context as used in general to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.


Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.


Any process descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown, or discussed, including substantially concurrently or in reverse order, depending on the functionality involved as would be understood by those skilled in the art.


Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B, and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.

Claims
  • 1. A system comprising: a non-transitory data store storing information regarding physical and virtual processors, the information identifying, for each processor of the physical and virtual processors, an instruction set implemented by the processor from among a plurality of instruction sets implemented among the physical and virtual processors; and a computing device configured with executable instructions to: receive user-submitted code executable on an on-demand code execution system; identify an instruction set, from the plurality of instruction sets, associated with the user-submitted code, wherein the user-submitted code utilizes the instruction set when executed on the on-demand code execution system; and in response to a request to execute the user-submitted code: obtain availability information regarding a physical processor from the physical processors, that implements the instruction set; obtain first performance information regarding the physical processor; obtain second performance information regarding a virtual processor, from the virtual processors, that implements the instruction set; determine, based at least in part on the availability information, the first performance information, and the second performance information, a recommended processor for executing the user-submitted code, wherein the recommended processor is one of the physical processor and the virtual processor; and cause the on-demand code execution system to execute the user-submitted code on the recommended processor.
  • 2. The system of claim 1, wherein the instruction set comprises a vector instruction set, floating point instruction set, fused multiply-add instruction set, neural network instruction set, tensor processing instruction set, single instruction multiple data instruction set, security instruction set, or cryptography instruction set.
  • 3. The system of claim 1, wherein the instruction set is identified based at least in part on a software library invoked by the user-submitted code.
  • 4. The system of claim 1, wherein the second performance information is associated with instantiating the virtual processor on a physical processor that does not implement the instruction set.
  • 5. The system of claim 1, wherein the user-submitted code is compiled for the physical processor.
  • 6. A computer-implemented method comprising: receiving user-submitted code executable on an on-demand code execution system; determining, based at least in part on the user-submitted code, computing resource functionality associated with executing the user-submitted code on the on-demand code execution system; and in response to a request to execute the user-submitted code: obtaining first performance information regarding a first computing resource that physically implements the computing resource functionality; obtaining second performance information regarding a second computing resource that does not physically implement the computing resource functionality but that virtually emulates the computing resource functionality; determining, based at least in part on the first performance information and the second performance information, a recommended computing resource for executing the user-submitted code, the recommended computing resource being one of the first computing resource or the second computing resource; and providing a recommendation that includes the recommended computing resource to the on-demand code execution system, wherein the on-demand code execution system selects a computing resource for executing the user-submitted code based at least in part on the recommendation.
  • 7. The computer-implemented method of claim 6, wherein at least one of the first performance information and the second performance information was generated during a previous execution of the user-submitted code on the on-demand code execution system.
  • 8. The computer-implemented method of claim 6 further comprising: obtaining, from the on-demand code execution system, performance metrics regarding execution of the user-submitted code with the selected computing resource; identifying, based at least in part on the performance metrics, an alternate computing resource; and providing an updated recommendation that includes the alternate computing resource to the on-demand code execution system, wherein providing the updated recommendation causes the on-demand code execution system to migrate execution of the user-submitted code to a different computing resource.
  • 9. The computer-implemented method of claim 8, wherein the performance metrics include a number of processor instructions executed per clock cycle.
  • 10. The computer-implemented method of claim 8 further comprising determining that the alternate computing resource is available.
  • 11. The computer-implemented method of claim 6, wherein the first computing resource is a physical computing resource and the second computing resource is a virtual computing resource.
  • 12. The computer-implemented method of claim 6 further comprising identifying, based at least in part on the computing resource functionality associated with executing the user-submitted code, the first computing resource and the second computing resource.
  • 13. The computer-implemented method of claim 6 further comprising aggregating a plurality of previous recommendations for computing resources to determine a recommended hardware configuration for the on-demand code execution system.
  • 14. The computer-implemented method of claim 13, wherein the recommended hardware configuration is based at least in part on one or more trends in the plurality of previous recommendations.
  • 15. The computer-implemented method of claim 13, wherein the recommended hardware configuration is based at least in part on performance metrics.
  • 16. The computer-implemented method of claim 6, wherein the first computing resource implements the computing resource functionality by emulating the computing resource functionality.
  • 17. Non-transitory computer-readable media including computer-executable instructions that, when executed by an on-demand code execution system, cause the on-demand code execution system to: obtain user-submitted code executable on the on-demand code execution system; determine, based at least in part on the user-submitted code, computing resource functionality associated with executing the user-submitted code on the on-demand code execution system, wherein the on-demand code execution system does not receive a request to provide the computing resource functionality; identifying a plurality of computing resources that implement the computing resource functionality; in response to a request to execute the user-submitted code: identifying an available subset of the plurality of computing resources that either physically implement or virtually emulate the computing resource functionality; selecting, from the available subset, a recommended computing resource for executing the user-submitted code based at least in part on performance of individual computing resources in the available subset at providing the computing resource functionality; and executing the user-submitted code on the recommended computing resource.
  • 18. The non-transitory computer-readable media of claim 17 including further computer-executable instructions that, when executed by the on-demand code execution system, cause the on-demand code execution system to generate a prioritized list of computing resources based at least in part on performance estimates for individual computing resources in the plurality of computing resources.
  • 19. The non-transitory computer-readable media of claim 17 including further computer-executable instructions that, when executed by the on-demand code execution system, cause the on-demand code execution system to determine, based at least in part on a performance estimate for an unavailable computing resource, to migrate at least one other task to make the unavailable computing resource available.
  • 20. The non-transitory computer-readable media of claim 17, wherein the request to execute the user-submitted code specifies a preferred computing resource for executing the user-submitted code, and wherein the preferred computing resource does not provide the computing resource functionality.
  • 21. The non-transitory computer-readable media of claim 20, wherein the recommended computing resource is determined based at least in part on comparing a performance estimate for executing the user-submitted code on the recommended computing resource to a performance estimate for executing the user-submitted code on the preferred computing resource.
US Referenced Citations (529)
Number Name Date Kind
4949254 Shorter Aug 1990 A
5283888 Dao et al. Feb 1994 A
5970488 Crowe et al. Oct 1999 A
6385636 Suzuki May 2002 B1
6463509 Teoman et al. Oct 2002 B1
6501736 Smolik et al. Dec 2002 B1
6523035 Fleming et al. Feb 2003 B1
6708276 Yarsa et al. Mar 2004 B1
7036121 Casabona et al. Apr 2006 B1
7590806 Harris et al. Sep 2009 B2
7665090 Tormasov et al. Feb 2010 B1
7707579 Rodriguez Apr 2010 B2
7730464 Trowbridge Jun 2010 B2
7774191 Berkowitz et al. Aug 2010 B2
7823186 Pouliot Oct 2010 B2
7886021 Scheifler et al. Feb 2011 B2
8010990 Ferguson et al. Aug 2011 B2
8024564 Bassani et al. Sep 2011 B2
8046765 Cherkasova et al. Oct 2011 B2
8051180 Mazzaferri et al. Nov 2011 B2
8051266 DeVal et al. Nov 2011 B2
8065676 Sahai et al. Nov 2011 B1
8065682 Baryshnikov et al. Nov 2011 B2
8095931 Chen et al. Jan 2012 B1
8127284 Meijer et al. Feb 2012 B2
8146073 Sinha Mar 2012 B2
8166304 Murase et al. Apr 2012 B2
8171473 Lavin May 2012 B2
8209695 Pruyne et al. Jun 2012 B1
8219987 Vlaovic et al. Jul 2012 B1
8321554 Dickinson Nov 2012 B2
8321558 Sirota et al. Nov 2012 B1
8336079 Budko et al. Dec 2012 B2
8352608 Keagy et al. Jan 2013 B1
8387075 McCann et al. Feb 2013 B1
8429282 Ahuja Apr 2013 B1
8448165 Conover May 2013 B1
8490088 Tang Jul 2013 B2
8555281 Van Dijk et al. Oct 2013 B1
8566835 Wang et al. Oct 2013 B2
8613070 Borzycki et al. Dec 2013 B1
8631130 Jackson Jan 2014 B2
8677359 Cavage et al. Mar 2014 B1
8694996 Cawlfield et al. Apr 2014 B2
8700768 Benari Apr 2014 B2
8719415 Sirota et al. May 2014 B1
8725702 Raman et al. May 2014 B1
8756696 Miller Jun 2014 B1
8769519 Leitman et al. Jul 2014 B2
8799236 Azari et al. Aug 2014 B1
8799879 Wright et al. Aug 2014 B2
8806468 Meijer et al. Aug 2014 B2
8819679 Agarwal et al. Aug 2014 B2
8825863 Hansson et al. Sep 2014 B2
8825964 Sopka et al. Sep 2014 B1
8839035 Dimitrovich et al. Sep 2014 B1
8850432 Mcgrath et al. Sep 2014 B2
8874952 Tameshige et al. Oct 2014 B2
8904008 Calder et al. Dec 2014 B2
8997093 Dimitrov Mar 2015 B2
9027087 Ishaya et al. May 2015 B2
9038068 Engle et al. May 2015 B2
9052935 Rajaa Jun 2015 B1
9086897 Oh et al. Jul 2015 B2
9092837 Bala et al. Jul 2015 B2
9098528 Wang Aug 2015 B2
9110732 Forschmiedt et al. Aug 2015 B1
9110770 Raju et al. Aug 2015 B1
9111037 Nalis et al. Aug 2015 B1
9112813 Jackson Aug 2015 B2
9141410 Leafe et al. Sep 2015 B2
9146764 Wagner Sep 2015 B1
9152406 De et al. Oct 2015 B2
9164754 Pohlack Oct 2015 B1
9183019 Kruglick Nov 2015 B2
9208007 Harper et al. Dec 2015 B2
9218190 Anand et al. Dec 2015 B2
9223561 Orveillon et al. Dec 2015 B2
9223966 Satish et al. Dec 2015 B1
9250893 Blahaerath et al. Feb 2016 B2
9268586 Voccio et al. Feb 2016 B2
9298633 Zhao et al. Mar 2016 B1
9317689 Aissi Apr 2016 B2
9323556 Wagner Apr 2016 B2
9361145 Wilson et al. Jun 2016 B1
9413626 Reque et al. Aug 2016 B2
9436555 Dornemann et al. Sep 2016 B2
9461996 Hayton et al. Oct 2016 B2
9471775 Wagner et al. Oct 2016 B1
9483335 Wagner et al. Nov 2016 B1
9489227 Oh et al. Nov 2016 B2
9497136 Ramarao et al. Nov 2016 B1
9501345 Lietz et al. Nov 2016 B1
9514037 Dow et al. Dec 2016 B1
9537788 Reque et al. Jan 2017 B2
9575798 Terayama et al. Feb 2017 B2
9588790 Wagner et al. Mar 2017 B1
9594590 Hsu Mar 2017 B2
9596350 Dymshyts et al. Mar 2017 B1
9600312 Wagner et al. Mar 2017 B2
9628332 Bruno, Jr. et al. Apr 2017 B2
9635132 Lin et al. Apr 2017 B1
9652306 Wagner et al. May 2017 B1
9652617 Evans et al. May 2017 B1
9654508 Barton et al. May 2017 B2
9661011 Van Horenbeeck et al. May 2017 B1
9678773 Wagner et al. Jun 2017 B1
9678778 Youseff Jun 2017 B1
9703681 Taylor et al. Jul 2017 B2
9715402 Wagner et al. Jul 2017 B2
9727725 Wagner et al. Aug 2017 B2
9733967 Wagner et al. Aug 2017 B2
9760387 Wagner et al. Sep 2017 B2
9767271 Ghose Sep 2017 B2
9785476 Wagner et al. Oct 2017 B2
9787779 Frank et al. Oct 2017 B2
9811363 Wagner Nov 2017 B1
9811434 Wagner Nov 2017 B1
9830175 Wagner Nov 2017 B1
9830193 Wagner et al. Nov 2017 B1
9830449 Wagner Nov 2017 B1
9864636 Patel et al. Jan 2018 B1
9910713 Wisniewski et al. Mar 2018 B2
9921864 Singaravelu et al. Mar 2018 B2
9928108 Wagner et al. Mar 2018 B1
9929916 Subramanian et al. Mar 2018 B1
9930103 Thompson Mar 2018 B2
9930133 Susarla et al. Mar 2018 B2
9952896 Wagner et al. Apr 2018 B2
9977691 Marriner et al. May 2018 B2
9979817 Huang et al. May 2018 B2
10002026 Wagner Jun 2018 B1
10013267 Wagner et al. Jul 2018 B1
10042660 Wagner et al. Aug 2018 B2
10048974 Wagner et al. Aug 2018 B1
10061613 Brooker et al. Aug 2018 B1
10067801 Wagner Sep 2018 B1
10102040 Marriner et al. Oct 2018 B2
10108443 Wagner et al. Oct 2018 B2
10139876 Lu et al. Nov 2018 B2
10140137 Wagner Nov 2018 B2
10162672 Wagner et al. Dec 2018 B2
10162688 Wagner Dec 2018 B2
10203990 Wagner et al. Feb 2019 B2
10248467 Wisniewski et al. Apr 2019 B2
10277708 Wagner et al. Apr 2019 B2
10303492 Wagner et al. May 2019 B1
10353678 Wagner Jul 2019 B1
10353746 Reque et al. Jul 2019 B2
10365985 Wagner Jul 2019 B2
10387177 Wagner et al. Aug 2019 B2
10402231 Marriner et al. Sep 2019 B2
10437629 Wagner et al. Oct 2019 B2
10445140 Sagar et al. Oct 2019 B1
10528390 Brooker et al. Jan 2020 B2
10552193 Wagner et al. Feb 2020 B2
10564946 Wagner et al. Feb 2020 B1
10572375 Wagner Feb 2020 B1
10592269 Wagner et al. Mar 2020 B2
10623476 Thompson Apr 2020 B2
10649749 Brooker et al. May 2020 B1
10691498 Wagner Jun 2020 B2
10713080 Brooker et al. Jul 2020 B1
20010044817 Asano et al. Nov 2001 A1
20020120685 Srivastava et al. Aug 2002 A1
20020172273 Baker et al. Nov 2002 A1
20030071842 King et al. Apr 2003 A1
20030084434 Ren May 2003 A1
20030191795 Bernardin et al. Oct 2003 A1
20030229794 James, II et al. Dec 2003 A1
20040003087 Chambliss et al. Jan 2004 A1
20040044721 Song et al. Mar 2004 A1
20040049768 Matsuyama et al. Mar 2004 A1
20040098154 McCarthy May 2004 A1
20040158551 Santosuosso Aug 2004 A1
20040205493 Simpson et al. Oct 2004 A1
20040249947 Novaes et al. Dec 2004 A1
20040268358 Darling et al. Dec 2004 A1
20050027611 Wharton Feb 2005 A1
20050044301 Vasilevsky et al. Feb 2005 A1
20050120160 Plouffe et al. Jun 2005 A1
20050132167 Longobardi Jun 2005 A1
20050132368 Sexton et al. Jun 2005 A1
20050149535 Frey et al. Jul 2005 A1
20050193113 Kokusho et al. Sep 2005 A1
20050193283 Reinhardt et al. Sep 2005 A1
20050237948 Wan et al. Oct 2005 A1
20050257051 Richard Nov 2005 A1
20060080678 Bailey et al. Apr 2006 A1
20060123066 Jacobs et al. Jun 2006 A1
20060129684 Datta Jun 2006 A1
20060168174 Gebhart et al. Jul 2006 A1
20060184669 Vaidyanathan et al. Aug 2006 A1
20060200668 Hybre et al. Sep 2006 A1
20060212332 Jackson Sep 2006 A1
20060242647 Kimbrel et al. Oct 2006 A1
20060248195 Toumura et al. Nov 2006 A1
20070033085 Johnson Feb 2007 A1
20070094396 Takano et al. Apr 2007 A1
20070130341 Ma Jun 2007 A1
20070174419 O'Connell et al. Jul 2007 A1
20070192082 Gaos et al. Aug 2007 A1
20070199000 Shekhel et al. Aug 2007 A1
20070220009 Morris et al. Sep 2007 A1
20070240160 Paterson-Jones Oct 2007 A1
20070255604 Seelig Nov 2007 A1
20080028409 Cherkasova et al. Jan 2008 A1
20080052401 Bugenhagen et al. Feb 2008 A1
20080052725 Stoodley et al. Feb 2008 A1
20080082977 Araujo et al. Apr 2008 A1
20080104247 Venkatakrishnan et al. May 2008 A1
20080104608 Hyser et al. May 2008 A1
20080115143 Shimizu et al. May 2008 A1
20080126110 Haeberle et al. May 2008 A1
20080126486 Heist May 2008 A1
20080127125 Anckaert et al. May 2008 A1
20080147893 Marripudi et al. Jun 2008 A1
20080189468 Schmidt et al. Aug 2008 A1
20080195369 Duyanovich et al. Aug 2008 A1
20080201568 Quinn et al. Aug 2008 A1
20080201711 Amir Husain Aug 2008 A1
20080209423 Hirai Aug 2008 A1
20090006897 Sarsfield Jan 2009 A1
20090013153 Hilton Jan 2009 A1
20090025009 Brunswig et al. Jan 2009 A1
20090055810 Kondur Feb 2009 A1
20090055829 Gibson Feb 2009 A1
20090070355 Cadarette et al. Mar 2009 A1
20090077569 Appleton et al. Mar 2009 A1
20090125902 Ghosh et al. May 2009 A1
20090158275 Wang et al. Jun 2009 A1
20090177860 Zhu et al. Jul 2009 A1
20090183162 Kindel et al. Jul 2009 A1
20090193410 Arthursson et al. Jul 2009 A1
20090198769 Keller et al. Aug 2009 A1
20090204960 Ben-Yehuda et al. Aug 2009 A1
20090204964 Foley et al. Aug 2009 A1
20090222922 Sidiroglou et al. Sep 2009 A1
20090271472 Scheifler et al. Oct 2009 A1
20090288084 Astete et al. Nov 2009 A1
20090300599 Piotrowski Dec 2009 A1
20100023940 Iwamatsu et al. Jan 2010 A1
20100031274 Sim-Tang Feb 2010 A1
20100031325 Maigne et al. Feb 2010 A1
20100036925 Haffner Feb 2010 A1
20100058342 Machida Mar 2010 A1
20100058351 Yahagi Mar 2010 A1
20100064299 Kacin et al. Mar 2010 A1
20100070678 Zhang et al. Mar 2010 A1
20100070725 Prahlad et al. Mar 2010 A1
20100094816 Groves, Jr. et al. Apr 2010 A1
20100106926 Kandasamy et al. Apr 2010 A1
20100114825 Siddegowda May 2010 A1
20100115098 De Baer et al. May 2010 A1
20100122343 Ghosh May 2010 A1
20100131936 Cheriton May 2010 A1
20100131959 Spiers et al. May 2010 A1
20100186011 Magenheimer Jul 2010 A1
20100198972 Umbehocker Aug 2010 A1
20100199285 Medovich Aug 2010 A1
20100257116 Mehta et al. Oct 2010 A1
20100269109 Cartales Oct 2010 A1
20100312871 Desantis et al. Dec 2010 A1
20100325727 Neystadt et al. Dec 2010 A1
20110010722 Matsuyama Jan 2011 A1
20110029970 Arasaratnam Feb 2011 A1
20110029984 Norman et al. Feb 2011 A1
20110040812 Phillips Feb 2011 A1
20110055378 Ferris et al. Mar 2011 A1
20110055396 DeHaan Mar 2011 A1
20110055683 Jiang Mar 2011 A1
20110078679 Bozek et al. Mar 2011 A1
20110099204 Thaler Apr 2011 A1
20110099551 Fahrig et al. Apr 2011 A1
20110131572 Elyashev et al. Jun 2011 A1
20110134761 Smith Jun 2011 A1
20110141124 Halls et al. Jun 2011 A1
20110153727 Li Jun 2011 A1
20110153838 Belkine et al. Jun 2011 A1
20110154353 Theroux et al. Jun 2011 A1
20110179162 Mayo et al. Jul 2011 A1
20110184993 Chawla et al. Jul 2011 A1
20110225277 Freimuth et al. Sep 2011 A1
20110231680 Padmanabhan et al. Sep 2011 A1
20110247005 Benedetti et al. Oct 2011 A1
20110265164 Lucovsky Oct 2011 A1
20110271276 Ashok et al. Nov 2011 A1
20110276945 Chasman et al. Nov 2011 A1
20110314465 Smith et al. Dec 2011 A1
20110321033 Kelkar et al. Dec 2011 A1
20110321051 Rastogi Dec 2011 A1
20120011496 Shimamura Jan 2012 A1
20120011511 Horvitz et al. Jan 2012 A1
20120016721 Weinman Jan 2012 A1
20120041970 Ghosh et al. Feb 2012 A1
20120054744 Singh et al. Mar 2012 A1
20120072762 Atchison et al. Mar 2012 A1
20120072914 Ota Mar 2012 A1
20120079004 Herman Mar 2012 A1
20120096271 Ramarathinam et al. Apr 2012 A1
20120096468 Chakravorty et al. Apr 2012 A1
20120102307 Wong Apr 2012 A1
20120102333 Wong Apr 2012 A1
20120102481 Mani et al. Apr 2012 A1
20120102493 Allen et al. Apr 2012 A1
20120110155 Adlung et al. May 2012 A1
20120110164 Frey et al. May 2012 A1
20120110570 Jacobson et al. May 2012 A1
20120110588 Bieswanger et al. May 2012 A1
20120131379 Tameshige et al. May 2012 A1
20120144290 Goldman et al. Jun 2012 A1
20120166624 Suit et al. Jun 2012 A1
20120192184 Burckart et al. Jul 2012 A1
20120197795 Campbell et al. Aug 2012 A1
20120197958 Nightingale et al. Aug 2012 A1
20120198442 Kashyap et al. Aug 2012 A1
20120222038 Katragadda et al. Aug 2012 A1
20120233464 Miller et al. Sep 2012 A1
20120331113 Jain et al. Dec 2012 A1
20130014101 Ballani et al. Jan 2013 A1
20130042234 DeLuca et al. Feb 2013 A1
20130054804 Jana et al. Feb 2013 A1
20130054927 Raj et al. Feb 2013 A1
20130055262 Lubsey et al. Feb 2013 A1
20130061208 Tsao et al. Mar 2013 A1
20130061220 Gnanasambandam et al. Mar 2013 A1
20130067494 Srour et al. Mar 2013 A1
20130080641 Lui et al. Mar 2013 A1
20130097601 Podvratnik et al. Apr 2013 A1
20130111032 Alapati et al. May 2013 A1
20130111469 B et al. May 2013 A1
20130124807 Nielsen et al. May 2013 A1
20130132942 Wang May 2013 A1
20130139152 Chang et al. May 2013 A1
20130139166 Zhang et al. May 2013 A1
20130151648 Luna Jun 2013 A1
20130152047 Moorthi et al. Jun 2013 A1
20130179574 Calder et al. Jul 2013 A1
20130179881 Calder et al. Jul 2013 A1
20130179894 Calder et al. Jul 2013 A1
20130179895 Calder et al. Jul 2013 A1
20130185719 Kar et al. Jul 2013 A1
20130185729 Vasic et al. Jul 2013 A1
20130191924 Tedesco Jul 2013 A1
20130198319 Shen et al. Aug 2013 A1
20130198743 Kruglick Aug 2013 A1
20130198748 Sharp et al. Aug 2013 A1
20130198763 Kunze et al. Aug 2013 A1
20130205092 Roy et al. Aug 2013 A1
20130219390 Lee et al. Aug 2013 A1
20130227097 Yasuda et al. Aug 2013 A1
20130227534 Ike et al. Aug 2013 A1
20130227563 McGrath Aug 2013 A1
20130227641 White et al. Aug 2013 A1
20130227710 Barak et al. Aug 2013 A1
20130232480 Winterfeldt et al. Sep 2013 A1
20130239125 Iorio Sep 2013 A1
20130262556 Xu et al. Oct 2013 A1
20130263117 Konik et al. Oct 2013 A1
20130275376 Hudlow et al. Oct 2013 A1
20130275958 Ivanov et al. Oct 2013 A1
20130275969 Dimitrov Oct 2013 A1
20130275975 Masuda et al. Oct 2013 A1
20130283176 Hoole et al. Oct 2013 A1
20130290538 Gmach et al. Oct 2013 A1
20130291087 Kailash et al. Oct 2013 A1
20130297964 Hegdal et al. Nov 2013 A1
20130311650 Brandwine et al. Nov 2013 A1
20130326506 McGrath et al. Dec 2013 A1
20130339950 Ramarathinam et al. Dec 2013 A1
20130346470 Obstfeld et al. Dec 2013 A1
20130346946 Pinnix Dec 2013 A1
20130346964 Nobuoka et al. Dec 2013 A1
20130346987 Raney et al. Dec 2013 A1
20130346994 Chen et al. Dec 2013 A1
20130347095 Barjatiya et al. Dec 2013 A1
20140007097 Chin et al. Jan 2014 A1
20140019523 Heymann et al. Jan 2014 A1
20140019735 Menon et al. Jan 2014 A1
20140019965 Neuse et al. Jan 2014 A1
20140019966 Neuse et al. Jan 2014 A1
20140040343 Nickolov et al. Feb 2014 A1
20140040857 Trinchini et al. Feb 2014 A1
20140040880 Brownlow et al. Feb 2014 A1
20140059209 Alnoor Feb 2014 A1
20140059226 Messerli et al. Feb 2014 A1
20140059552 Cunningham et al. Feb 2014 A1
20140068568 Wisnovsky Mar 2014 A1
20140068611 McGrath et al. Mar 2014 A1
20140081984 Sitsky et al. Mar 2014 A1
20140082165 Marr et al. Mar 2014 A1
20140082201 Shankari et al. Mar 2014 A1
20140101649 Kamble et al. Apr 2014 A1
20140108722 Lipchuk et al. Apr 2014 A1
20140109087 Jujare et al. Apr 2014 A1
20140109088 Dournov et al. Apr 2014 A1
20140129667 Ozawa May 2014 A1
20140130040 Lemanski May 2014 A1
20140137110 Engle et al. May 2014 A1
20140173614 Konik et al. Jun 2014 A1
20140173616 Bird et al. Jun 2014 A1
20140180862 Certain et al. Jun 2014 A1
20140189677 Curzi et al. Jul 2014 A1
20140201735 Kannan et al. Jul 2014 A1
20140207912 Thibeault Jul 2014 A1
20140215073 Dow et al. Jul 2014 A1
20140229221 Shih et al. Aug 2014 A1
20140245297 Hackett Aug 2014 A1
20140279581 Devereaux Sep 2014 A1
20140280325 Krishnamurthy et al. Sep 2014 A1
20140282559 Verduzco et al. Sep 2014 A1
20140282615 Cavage et al. Sep 2014 A1
20140282629 Gupta et al. Sep 2014 A1
20140283045 Brandwine et al. Sep 2014 A1
20140289286 Gusak Sep 2014 A1
20140298295 Overbeck Oct 2014 A1
20140304698 Chigurapati et al. Oct 2014 A1
20140304815 Maeda Oct 2014 A1
20140317617 O'Donnell Oct 2014 A1
20140344457 Bruno, Jr. et al. Nov 2014 A1
20140344736 Ryman et al. Nov 2014 A1
20140380085 Rash et al. Dec 2014 A1
20150033241 Jackson et al. Jan 2015 A1
20150039891 Ignatchenko et al. Feb 2015 A1
20150040229 Chan et al. Feb 2015 A1
20150046926 Kenchammana-Hosekote et al. Feb 2015 A1
20150052258 Johnson et al. Feb 2015 A1
20150058914 Yadav Feb 2015 A1
20150067830 Johansson et al. Mar 2015 A1
20150074659 Madsen et al. Mar 2015 A1
20150081885 Thomas et al. Mar 2015 A1
20150106805 Melander et al. Apr 2015 A1
20150120928 Gummaraju et al. Apr 2015 A1
20150121391 Wang Apr 2015 A1
20150134626 Theimer et al. May 2015 A1
20150135287 Medeiros et al. May 2015 A1
20150142952 Bragstad et al. May 2015 A1
20150143381 Chin et al. May 2015 A1
20150178110 Li et al. Jun 2015 A1
20150186129 Apte et al. Jul 2015 A1
20150188775 Van Der Walt et al. Jul 2015 A1
20150199218 Wilson et al. Jul 2015 A1
20150205596 Hiltegen et al. Jul 2015 A1
20150227598 Hahn et al. Aug 2015 A1
20150235144 Gusev et al. Aug 2015 A1
20150242225 Muller et al. Aug 2015 A1
20150254248 Burns et al. Sep 2015 A1
20150256621 Noda et al. Sep 2015 A1
20150261578 Greden et al. Sep 2015 A1
20150289220 Kim et al. Oct 2015 A1
20150309923 Iwata et al. Oct 2015 A1
20150319160 Ferguson et al. Nov 2015 A1
20150324229 Valine Nov 2015 A1
20150332048 Mooring et al. Nov 2015 A1
20150332195 Jue Nov 2015 A1
20150350701 Lemus et al. Dec 2015 A1
20150356294 Tan et al. Dec 2015 A1
20150363181 Alberti et al. Dec 2015 A1
20150370560 Tan et al. Dec 2015 A1
20150371244 Neuse et al. Dec 2015 A1
20150378762 Saladi et al. Dec 2015 A1
20150378764 Sivasubramanian et al. Dec 2015 A1
20150378765 Singh et al. Dec 2015 A1
20150379167 Griffith et al. Dec 2015 A1
20160011901 Hurwitz et al. Jan 2016 A1
20160012099 Tuatini et al. Jan 2016 A1
20160019536 Ortiz et al. Jan 2016 A1
20160026486 Abdallah Jan 2016 A1
20160048606 Rubinstein et al. Feb 2016 A1
20160072727 Leafe et al. Mar 2016 A1
20160077901 Roth et al. Mar 2016 A1
20160098285 Davis et al. Apr 2016 A1
20160100036 Lo et al. Apr 2016 A1
20160117254 Susarla et al. Apr 2016 A1
20160124665 Jain et al. May 2016 A1
20160140180 Park et al. May 2016 A1
20160191420 Nagarajan et al. Jun 2016 A1
20160212007 Alatorre et al. Jul 2016 A1
20160285906 Fine et al. Sep 2016 A1
20160292016 Bussard et al. Oct 2016 A1
20160294614 Searle et al. Oct 2016 A1
20160306613 Busi et al. Oct 2016 A1
20160350099 Suparna et al. Dec 2016 A1
20160357536 Firlik et al. Dec 2016 A1
20160364265 Cao et al. Dec 2016 A1
20160371127 Antony et al. Dec 2016 A1
20160371156 Merriman Dec 2016 A1
20160378449 Khazanchi et al. Dec 2016 A1
20160378554 Gummaraju et al. Dec 2016 A1
20170041309 Ekambaram et al. Feb 2017 A1
20170060615 Thakkar et al. Mar 2017 A1
20170060621 Whipple et al. Mar 2017 A1
20170068574 Cherkasova et al. Mar 2017 A1
20170075749 Ambichl et al. Mar 2017 A1
20170083381 Cong et al. Mar 2017 A1
20170085447 Chen et al. Mar 2017 A1
20170085591 Ganda et al. Mar 2017 A1
20170093684 Jayaraman et al. Mar 2017 A1
20170093920 Ducatel et al. Mar 2017 A1
20170230499 Mumick et al. Aug 2017 A1
20170272462 Kraemer et al. Sep 2017 A1
20170286143 Wagner et al. Oct 2017 A1
20170371724 Wagner et al. Dec 2017 A1
20180046453 Nair et al. Feb 2018 A1
20180046482 Karve et al. Feb 2018 A1
20180060221 Yim et al. Mar 2018 A1
20180067841 Mahimkar Mar 2018 A1
20180121245 Wagner et al. May 2018 A1
20180143865 Wagner et al. May 2018 A1
20180239636 Arora et al. Aug 2018 A1
20180253333 Gupta Sep 2018 A1
20180275987 Vandeputte Sep 2018 A1
20180309819 Thompson Oct 2018 A1
20190072529 Andrawes et al. Mar 2019 A1
20190102231 Wagner Apr 2019 A1
20190108058 Wagner et al. Apr 2019 A1
20190155629 Wagner et al. May 2019 A1
20190171470 Wagner Jun 2019 A1
20190179725 Mital Jun 2019 A1
20190196884 Wagner Jun 2019 A1
20190227849 Wisniewski et al. Jul 2019 A1
20190384647 Reque et al. Dec 2019 A1
20190391834 Mullen et al. Dec 2019 A1
20190391841 Mullen et al. Dec 2019 A1
20200057680 Marriner et al. Feb 2020 A1
20200104198 Hussels et al. Apr 2020 A1
20200104378 Wagner et al. Apr 2020 A1
20200110691 Bryant Apr 2020 A1
20200142724 Wagner et al. May 2020 A1
Foreign Referenced Citations (33)
Number Date Country
2663052 Nov 2013 EP
2002287974 Oct 2002 JP
2006-107599 Apr 2006 JP
2007-538323 Dec 2007 JP
2010-026562 Feb 2010 JP
2011-233146 Nov 2011 JP
2011257847 Dec 2011 JP
2013-156996 Aug 2013 JP
2014-525624 Sep 2014 JP
2017-534107 Nov 2017 JP
2017-534967 Nov 2017 JP
2018-503896 Feb 2018 JP
2018-512087 May 2018 JP
2018-536213 Dec 2018 JP
WO 2008114454 Sep 2008 WO
WO 2009137567 Nov 2009 WO
WO 2012039834 Mar 2012 WO
WO 2012050772 Apr 2012 WO
WO 2013106257 Jul 2013 WO
WO 2015078394 Jun 2015 WO
WO 2015108539 Jul 2015 WO
WO 2016053950 Apr 2016 WO
WO 2016053968 Apr 2016 WO
WO 2016053973 Apr 2016 WO
WO 2016090292 Jun 2016 WO
WO 2016126731 Aug 2016 WO
WO 2016164633 Oct 2016 WO
WO 2016164638 Oct 2016 WO
WO 2017059248 Apr 2017 WO
WO 2017112526 Jun 2017 WO
WO 2017172440 Oct 2017 WO
WO 2020005764 Jan 2020 WO
WO 2020069104 Apr 2020 WO
Non-Patent Literature Citations (77)
Entry
International Search Report and Written Opinion for International Application No. PCT/US2019/065365 dated Mar. 19, 2020, in 15 pages.
Anonymous: “Docker run reference”, Dec. 7, 2015, XP055350246, Retrieved from the Internet: URL:https://web.archive.org/web/20151207111702/https:/docs.docker.com/engine/reference/run/ [retrieved on Feb. 28, 2017].
Adapter Pattern, Wikipedia, https://en.wikipedia.org/w/index.php?title=Adapter_pattern&oldid=654971255, [retrieved May 26, 2016], 6 pages.
Amazon, “AWS Lambda: Developer Guide”, Retrieved from the Internet, Jun. 26, 2016, URL : http://docs.aws.amazon.com/lambda/ latest/dg/lambda-dg.pdf, 346 pages.
Amazon, “AWS Lambda: Developer Guide”, Retrieved from the Internet, 2019, URL : http://docs.aws.amazon.com/lambda/ latest/dg/lambda-dg.pdf, 521 pages.
Balazinska et al., Moirae: History-Enhanced Monitoring, Published: 2007, 12 pages.
Ben-Yehuda et al., “Deconstructing Amazon EC2 Spot Instance Pricing”, ACM Transactions on Economics and Computation 1.3, 2013, 15 pages.
Bhadani et al., Performance evaluation of web servers using central load balancing policy over virtual machines on cloud, Jan. 2010, 4 pages.
CodeChef ADMIN discussion web page, retrieved from https://discuss.codechef.com/t/what-are-the-memory-limit-and-stack-size-on-codechef/14159, 2019.
CodeChef IDE web page, Code, Compile & Run, retrieved from https://www.codechef.com/ide, 2019.
Czajkowski, G., and L. Daynes, Multitasking Without Compromise: A Virtual Machine Evolution 47(4a):60-73, ACM SIGPLAN Notices—Supplemental Issue, Apr. 2012.
Das et al., Adaptive Stream Processing using Dynamic Batch Sizing, 2014, 13 pages.
Deis, Container, 2014, 1 page.
Dombrowski, M., et al., Dynamic Monitor Allocation in the Java Virtual Machine, JTRES '13, Oct. 9-11, 2013, pp. 30-37.
Dynamic HTML, Wikipedia page from date Mar. 27, 2015, retrieved using the WayBackMachine, from https://web.archive.org/web/20150327215418/https://en.wikipedia.org/wiki/Dynamic_HTML, 2015, 6 pages.
Espadas, J., et al., A Tenant-Based Resource Allocation Model for Scaling Software-as-a-Service Applications Over Cloud Computing Infrastructures, Future Generation Computer Systems, vol. 29, pp. 273-286, 2013.
Han et al., Lightweight Resource Scaling for Cloud Applications, 2012, 8 pages.
Hoffman, Auto scaling your website with Amazon Web Services (AWS)—Part 2, Cardinalpath, Sep. 2015, 15 pages.
http://discuss.codechef.com discussion web page from date Nov. 11, 2012, retrieved using the WayBackMachine, from https://web.archive.org/web/20121111040051/http://discuss.codechef.com/questions/2881/why-are-simple-java-programs-using-up-so-much-space, 2012.
https://www.codechef.com code error help page from Jan. 2014, retrieved from https://www.codechef.com/JAN14/status/ERROR,va123, 2014.
http://www.codechef.com/ide web page from date Apr. 5, 2015, retrieved using the WayBackMachine, from https://web.archive.org/web/20150405045518/http://www.codechef.com/ide, 2015.
Kamga et al., Extended scheduler for efficient frequency scaling in virtualized systems, Jul. 2012, 8 pages.
Kato, et al. “Web Service Conversion Architecture of the Web Application and Evaluation”; Research Report from Information Processing Society, Apr. 3, 2006 with Machine Translation.
Kazempour et al., AASH: an asymmetry-aware scheduler for hypervisors, Jul. 2010, 12 pages.
Kraft et al., IO performance prediction in consolidated virtualized environments, Mar. 2011, 12 pages.
Krsul et al., “VMPlants: Providing and Managing Virtual Machine Execution Environments for Grid Computing”, Supercomputing, 2004. Proceedings of the ACM/IEEESC 2004 Conference Pittsburgh, PA, XP010780332, Nov. 6-12, 2004, 12 pages.
Meng et al., Efficient resource provisioning in compute clouds via VM multiplexing, Jun. 2010, 10 pages.
Merkel, “Docker: Lightweight Linux Containers for Consistent Development and Deployment”, Linux Journal, vol. 2014 Issue 239, Mar. 2014, XP055171140, 16 pages.
Monteil, Coupling profile and historical methods to predict execution time of parallel applications. Parallel and Cloud Computing, 2013, hal-01228236, pp. 81-89.
Nakajima, J., et al., Optimizing Virtual Machines Using Hybrid Virtualization, SAC'11, Mar. 21-25, 2011, TaiChung, Taiwan, pp. 573-578.
Qian, H., and D. Medhi, et al., Estimating Optimal Cost of Allocating Virtualized Resources With Dynamic Demand, ITC 2011, Sep. 2011, pp. 320-321.
Sakamoto, et al. “Platform for Web Services using Proxy Server”; Research Report from Information Processing Society, Mar. 22, 2002, vol. 2002, No. 31.
Shim (computing), Wikipedia, https://en.wikipedia.org/w/index.php?title=Shim_(computing)&oldid=654971528, [retrieved on May 26, 2016], 2 pages.
Stack Overflow, Creating a database connection pool, 2009, 4 pages.
Tan et al., Provisioning for large scale cloud computing services, Jun. 2012, 2 pages.
Vaghani, S.B., Virtual Machine File System, ACM SIGOPS Operating Systems Review 44(4):57-70, Dec. 2010.
Vaquero, L., et al., Dynamically Scaling Applications in the cloud, ACM SIGCOMM Computer Communication Review 41(1):45-52, Jan. 2011.
Wang et al., “Improving utilization through dynamic VM resource allocation in hybrid cloud environment”, Parallel and Distributed Systems (ICPADS), IEEE, 2014. Retrieved on Feb. 14, 2019, Retrieved from the Internet: URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7097814, 8 pages.
Wikipedia “API” pages from date Apr. 7, 2015, retrieved using the WayBackMachine from https://web.archive.org/web/20150407191158/https://en.wikipedia.org/wiki/Application_programming_interface.
Wikipedia List_of_HTTP_status_codes web page, retrieved from https://en.wikipedia.org/wiki/List_of_HTTP_status_codes, 2019.
Wikipedia Recursion web page from date Mar. 26, 2015, retrieved using the WayBackMachine, from https://web.archive.org/web/20150326230100/https://en.wikipedia.org/wiki/Recursion_(computer_science), 2015.
Wikipedia subroutine web page, retrieved from https://en.wikipedia.org/wiki/Subroutine, 2019.
Wu et al., HC-Midware: A Middleware to Enable High Performance Communication System Simulation in Heterogeneous Cloud, Association for Computing Machinery, Oct. 20-22, 2017, 10 pages.
Yamasaki et al. “Model-based resource selection for efficient virtual cluster deployment”, Virtualization Technology in Distributed Computing, ACM, Nov. 2007, pp. 1-7.
Yue et al., AC 2012-4107: Using Amazon EC2 in Computer and Network Security Lab Exercises: Design, Results, and Analysis, American Society for Engineering Education, 2012.
Zheng, C., and D. Thain, Integrating Containers into Workflows: A Case Study Using Makeflow, Work Queue, and Docker, VTDC '15, Jun. 15, 2015, Portland, Oregon, pp. 31-38.
International Search Report and Written Opinion in PCT/US2015/052810 dated Dec. 17, 2015.
International Preliminary Report on Patentability in PCT/US2015/052810 dated Apr. 4, 2017.
Extended Search Report in European Application No. 15846932.0 dated May 3, 2018.
International Search Report and Written Opinion in PCT/US2015/052838 dated Dec. 18, 2015.
International Preliminary Report on Patentability in PCT/US2015/052838 dated Apr. 4, 2017.
Extended Search Report in European Application No. 15847202.7 dated Sep. 9, 2018.
International Search Report and Written Opinion in PCT/US2015/052833 dated Jan. 13, 2016.
International Preliminary Report on Patentability in PCT/US2015/052833 dated Apr. 4, 2017.
Extended Search Report in European Application No. 15846542.7 dated Aug. 27, 2018.
International Search Report and Written Opinion in PCT/US2015/064071 dated Mar. 16, 2016.
International Preliminary Report on Patentability in PCT/US2015/064071 dated Jun. 6, 2017.
International Search Report and Written Opinion in PCT/US2016/016211 dated Apr. 13, 2016.
International Preliminary Report on Patentability in PCT/US2016/016211 dated Aug. 17, 2017.
International Search Report and Written Opinion in PCT/US2016/026514 dated Jun. 8, 2016.
International Preliminary Report on Patentability in PCT/US2016/026514 dated Oct. 10, 2017.
International Search Report and Written Opinion in PCT/US2016/026520 dated Jul. 5, 2016.
International Preliminary Report on Patentability in PCT/US2016/026520 dated Oct. 10, 2017.
International Search Report and Written Opinion in PCT/US2016/054774 dated Dec. 16, 2016.
International Preliminary Report on Patentability in PCT/US2016/054774 dated Apr. 3, 2018.
International Search Report and Written Opinion in PCT/US2016/066997 dated Mar. 20, 2017.
International Preliminary Report on Patentability in PCT/US2016/066997 dated Jun. 26, 2018.
International Search Report and Written Opinion in PCT/US2017/023564 dated Jun. 6, 2017.
International Preliminary Report on Patentability in PCT/US2017/023564 dated Oct. 2, 2018.
International Search Report and Written Opinion in PCT/US2017/040054 dated Sep. 21, 2017.
International Preliminary Report on Patentability in PCT/US2017/040054 dated Jan. 1, 2019.
International Search Report and Written Opinion in PCT/US2017/039514 dated Oct. 10, 2017.
International Preliminary Report on Patentability in PCT/US2017/039514 dated Jan. 1, 2019.
Extended European Search Report in application No. 17776325.7 dated Oct. 23, 2019.
Office Action in European Application No. 17743108.7 dated Jan. 14, 2020.
Tange, “GNU Parallel: The Command-Line Power Tool”, ;login: The USENIX Magazine, vol. 36, No. 1, Feb. 2011, pp. 42-47.
Extended Search Report in European Application No. 19199402.9 dated Mar. 6, 2020.
Related Publications (1)
Number Date Country
20200192707 A1 Jun 2020 US