Endpoint management system providing an application programming interface proxy service

Information

  • Patent Grant
  • Patent Number: 10,623,476
  • Date Filed: Friday, March 23, 2018
  • Date Issued: Tuesday, April 14, 2020
Abstract
An endpoint management and proxy system is described, by which users can manage and enable exposure of application programming interfaces (“APIs”) usable to cause execution of program code on a remote or third party system. Systems and methods are disclosed which facilitate the handling of user requests to perform certain tasks on remote systems. The endpoint management system allows the application developer to define and specify a first proxy API which maps to a second API associated with the remote system. The endpoint proxy system receives requests to execute the proxy API, determines the API mapping, and sends one or more backend API requests to execute program code on the associated remote systems. Responses from the remote systems are received by the endpoint proxy system, which parses and/or transforms the results associated with the response and generates an output result to return to the user computing systems.
Description
BACKGROUND

Generally described, computing devices utilize a communication network, or a series of communication networks, to exchange data. Companies and organizations operate computer networks that interconnect a number of computing devices to support operations or provide services to third parties. The computing systems can be located in a single geographic location or located in multiple, distinct geographic locations (e.g., interconnected via private or public communication networks). Specifically, data centers or data processing centers, herein generally referred to as a “data center,” may include a number of interconnected computing systems to provide computing resources to users of the data center. The data centers may be private data centers operated on behalf of an organization or public data centers operated on behalf of, or for the benefit of, the general public.


To facilitate increased utilization of data center resources, virtualization technologies may allow a single physical computing device to host one or more instances of virtual machines that appear and operate as independent computing devices to users of a data center. With virtualization, the single physical computing device can create, maintain, delete, or otherwise manage virtual machines in a dynamic manner. In turn, users can request computer resources from a data center, including single computing devices or a configuration of networked computing devices, and be provided with varying numbers of virtual machine resources.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing aspects and many of the attendant advantages of this disclosure will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:



FIG. 1 is a block diagram depicting an illustrative environment for providing an application programming interface proxy service using an endpoint management system, according to an example aspect.



FIG. 2 depicts a general architecture of a computing device which may be implemented to enable various features for various subsystems and units of the endpoint management system, according to an example aspect.



FIG. 3 depicts an example user interface which provides users with various endpoint management configuration options, according to an example aspect.



FIGS. 4A and 4B are flow diagrams illustrating an application programming interface proxy routine as implemented by an endpoint management system, according to an example aspect.



FIG. 5 is a block diagram illustrating an embodiment of a networked computing environment including a client computing device and a service provider computer network.





DETAILED DESCRIPTION

Generally described, aspects of the present disclosure describe an endpoint management system by which users, such as application developers, can manage and enable exposure of application programming interfaces (“APIs”) usable to cause execution of program code on a remote or third party system. Specifically, systems and methods are disclosed which facilitate the handling of user requests to perform certain tasks on remote or third party systems. The endpoint management system allows the application developer to define and specify a first proxy API which maps to a second “backend” API associated with the remote or third party system. Remote or third party systems may include, for example, a system on a local network, a system on an open or publicly accessible network, a system which hosts one or more services such as a virtual compute environment, and so forth. Requests to execute the proxy API are received from user computing systems by the endpoint management system, which determines the API mapping based on the user-provided specification of various configuration options. The endpoint management system in turn generates and sends one or more backend API requests to execute program code on the associated remote or backend systems. Responses from the remote or backend systems are received by the endpoint management system, which can then analyze, parse, and/or transform the results associated with the response and generate an output result to return to the user computing systems.


Thus, in embodiments described herein, a developer can describe an exposed API (e.g., a proxy API) and define logic and one or more endpoints (e.g., a backend API). For example, a cloud based “proxy” API may be called by a client device to the endpoint management system, where the endpoint management system knows which endpoints to select for the proxy API. The endpoints can be heterogeneous (e.g., web services, Internet of Things (“IoT”) devices, other cloud-based service provider functions, datacenter functionality, and so on), and can also include other APIs. For example, a Representational State Transfer (“REST”) API may be exposed which maps to a legacy SOAP-based API. In some embodiments, a proxy fleet may be implemented as part of the endpoint management system to improve performance, efficiency, and scalability. Additional features described herein include the ability to chain or link multiple functionalities or backend API calls (dependent or independent) based on a single proxy API call; additional security mechanisms for users of the endpoint management system to manage exposure of backend APIs and services; dynamic and intelligent caching of results returned from backend systems to improve efficiency and relieve remote and backend systems from performing repeat tasks which may yield results usable by multiple proxy APIs; and performance management to protect remote and/or backend systems from being overloaded by a high volume of API requests, including user-configurable settings to throttle incoming requests (e.g., limit servicing of requests to a certain number in a given time period) and metering of received requests.
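

As a purely illustrative aid (not part of the patent disclosure), the mapping between a proxy API and a backend API, together with caching, throttling, security, and metering options, might be captured in a declarative definition along the lines of the following Python sketch; every field name here is hypothetical, since the disclosure does not prescribe any particular schema:

    # Hypothetical API mapping definition; all field names are illustrative only.
    api_mapping_definition = {
        "proxy_api": {
            "name": "GetCustomerAddress",               # exposed (e.g., REST/JSON) interface
            "input_parameters": ["customer_id"],
            "output_format": "JSON",
        },
        "backend_api": {
            "system": "https://legacy.example.com/soap",    # remote or third party system
            "name": "FetchCustomerRecord",              # legacy (e.g., SOAP/XML) interface
            "input_parameters": ["cust-id"],
            "output_format": "XML",
        },
        "cache": {"enabled": True, "duration_seconds": 300},
        "throttle": {"max_requests": 100, "per_seconds": 60},
        "security": {"allowed_groups": ["developers", "support"]},
        "metering": {"require_metering_id": True},
    }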


The endpoint management system may enable configuration of a proxy interface in a variety of protocol formats, including but not limited to Hypertext Transfer Protocol (HTTP), HTTP Secure (“HTTPS”), HTTP/2, a REST API, a remote procedure call (“RPC”), a binary API, WebSockets, Message Queue Telemetry Transport (“MQTT”), Constrained Application Protocol (“CoAP”), Java Message Service (“JMS”), Advanced Message Queuing Protocol (“AMQP”), Simple (or Streaming) Text Oriented Messaging Protocol (“STOMP”), Electronic Data Interchange (“EDI”), Simple Mail Transfer Protocol (“SMTP”), Internet Message Access Protocol (“IMAP”), Post Office Protocol (“POP”), File Transfer Protocol (“FTP”), Open Database Connectivity (“ODBC”), Thrift, Protocol Buffers, Avro, Cap'n Proto, FlatBuffers, and other types of protocols. Some of these protocols describe a network and data format, and some may act as a container for other formats. Other data formats not implicit to the above listed protocols may include, for example: JavaScript Object Notation (“JSON”), Extensible Markup Language (“XML”), Simple Object Access Protocol (“SOAP”), Hypertext Markup Language (“HTML”), comma separated values (“CSV”), tab separated values (“TSV”), INI file, YAML Ain't Markup Language (“YAML”), Binary JSON (“BSON”), MessagePack, Sereal, and Bencode. Any of the protocols and data formats may be used for either endpoint of an API proxy mapping in any combination. For example, a REST API may be mapped to a binary API; an HTTP API may be mapped to a remote procedure call; a first binary API may be mapped to a second binary API; and so on.


Specific embodiments and example applications of the present disclosure will now be described with reference to the drawings. These embodiments and example applications are intended to illustrate, and not limit, the present disclosure.


With reference to FIG. 1, a block diagram illustrating an embodiment of a computing environment 100 will be described. The example shown in FIG. 1 includes a computing environment 100 in which users of user computing devices 102 may access a variety of services provided by an endpoint management system 106, an endpoint proxy system 132, and backend systems 114 via a network 104A and/or a network 104B.


In the example of FIG. 1, various example user computing devices 102 are shown, including a desktop computer, a laptop, a mobile phone, and a tablet. In general, the user computing devices 102 can be a wide variety of computing devices including personal computing devices, laptop computing devices, hand-held computing devices, terminal computing devices, mobile devices (e.g., mobile phones, smartphones, tablet computing devices, electronic book readers, etc.), wireless devices, various electronic devices and appliances, and the like. In addition, the user computing devices 102 may include web services running on the same or different data centers, where, for example, different web services may programmatically communicate with each other to perform one or more techniques described herein. Further, the user computing devices 102 may include Internet of Things (IoT) devices such as Internet appliances and connected devices. Other components of the computing environment 100 (e.g., the endpoint management system 106) may provide the user computing devices 102 with one or more user interfaces, command-line interfaces (CLI), application programming interfaces (API), and/or other programmatic interfaces for utilizing one or more services offered by the respective components. Such services may include generating and uploading user codes, invoking the user codes (e.g., by submitting a request to execute the user codes via the endpoint proxy system 132), configuring one or more APIs (e.g., via the endpoint management system 106), caching results of execution of user codes and APIs, and/or monitoring API call usage for security, performance, metering, and other factors. Although one or more embodiments may be described herein as using a user interface, it should be appreciated that such embodiments may, additionally or alternatively, use any CLIs, APIs, or other programmatic interfaces.


The user computing devices 102 access the endpoint proxy system 132 and/or the endpoint management system 106 over the network 104A. The endpoint proxy system 132 may comprise one or more servers or systems (e.g., a proxy fleet) which may be configured to manage execution of endpoint or backend APIs (e.g., as executed on the backend systems 114). The endpoint proxy system 132 may access other components of the computing environment 100, such as the backend systems 114 and an endpoint results cache 130, over the network 104B. The networks 104A and/or 104B may be any wired network, wireless network, or combination thereof. In addition, the networks 104A and/or 104B may be a personal area network, local area network, wide area network, over-the-air broadcast network (e.g., for radio or television), cable network, satellite network, cellular telephone network, or combination thereof. For example, the network 104A may be a publicly accessible network of linked networks, possibly operated by various distinct parties, such as the Internet. In some embodiments, the network 104B may be a private or semi-private network, such as a corporate or university intranet, or a publicly accessible network such as the Internet. In one embodiment, the network 104B may be co-located or located in close proximity to the endpoint proxy system 132, such that communication over the network 104B between the endpoint proxy system 132 and the backend system(s) 114 may benefit from increased performance (e.g., faster and/or more efficient communication). The networks 104A and/or 104B may include one or more wireless networks, such as a Global System for Mobile Communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or any other type of wireless network. The networks 104A and/or 104B can use protocols and components for communicating via the Internet or any of the other aforementioned types of networks. For example, the protocols used by the networks 104A and/or 104B may include Hypertext Transfer Protocol (HTTP), HTTP Secure (HTTPS), Message Queue Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), and the like. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art and, thus, are not described in more detail herein.


The computing environment 100 is depicted in FIG. 1 as operating in a distributed computing environment including several computer systems that are interconnected using one or more computer networks. The endpoint management system 106 and/or the endpoint proxy system 132 could also operate within a computing environment having a fewer or greater number of devices than are illustrated in FIG. 1. Thus, the depiction of the computing environment 100 in FIG. 1 should be taken as illustrative and not limiting to the present disclosure. For example, the computing environment 100 or various constituents thereof could implement various Web services components, hosted or “cloud” computing environments, and/or peer-to-peer network configurations to implement at least a portion of the processes described herein.


Further, the various components of the computing environment 100 may be implemented in hardware and/or software and may, for instance, include one or more physical or virtual servers implemented on physical computer hardware configured to execute computer executable instructions for performing various features that will be described herein. The one or more servers may be geographically dispersed or geographically co-located, for instance, in one or more data centers.


As illustrated in FIG. 1, the endpoint proxy system 132 includes a response handler 108, a cache manager 120, and a performance unit 124. The response handler 108 may be configured to, for example, receive requests from calling systems (including, for example, user devices 102) to execute proxy APIs which correspond to one or more APIs to be called or invoked on one or more backend system(s) 114. The response handler 108 may be in communication with and access an endpoint/API mapping definitions data source 128 to look up API mapping definition for a received request. The response handler 108 can, based at least in part on the API mapping definition, determine a backend API (or APIs) and backend system(s) to be used to service the request. The response handler 108 may also be configured to parse and/or analyze the request and any associated input parameters provided with the request, and determine based on the API mapping definition any appropriate data transformations and mappings of the associated input parameters to input parameters for the backend API. In some embodiments the response handler 108 may check with the cache manager 120 to determine whether a cached result for the proxy API request is available, as will be described in more detail below. The response handler 108 may then send the transformed API request to the appropriate backend system(s) 114 and in turn receive a result back in response. The response handler 108 may in turn parse and/or transform the result into an output result for response to the original calling system, and provide the output result. The result may be parsed and/or transformed based in part on the API mapping definition. The result may also be provided to the cache manager 120 for further handling as described herein.
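

A minimal sketch of this request-handling flow, assuming hypothetical mapping-store, cache, and backend-client objects (none of which are defined by the disclosure), might look as follows:

    # Hypothetical sketch of the response handler flow; all names are illustrative.
    def transform_result(backend_result, definition):
        # Keep only the fields the proxy API exposes, per the mapping definition.
        return {field: backend_result.get(field) for field in definition["output_fields"]}

    def handle_proxy_request(proxy_name, proxy_inputs, mapping_store, cache, backend_client):
        definition = mapping_store.lookup(proxy_name)       # endpoint/API mapping definitions 128
        cached = cache.get(definition, proxy_inputs)        # optional check with the cache manager
        if cached is not None:
            return cached
        # Map proxy API input parameters onto backend API input parameters per the definition.
        backend_inputs = {backend_param: proxy_inputs[proxy_param]
                          for proxy_param, backend_param in definition["input_map"].items()}
        backend_result = backend_client.call(definition["backend_api"], backend_inputs)
        output_result = transform_result(backend_result, definition)
        cache.put(definition, proxy_inputs, output_result)   # hand the result to the cache manager
        return output_result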


The cache manager 120 may be configured to manage results received from backend system(s) 114 in association with backend API requests in a number of ways. The cache manager 120 may be in communication with an endpoint results cache 130, where results received from backend system(s) 114 in association with backend API requests may be stored and accessed for future API proxy requests. Cached results may include both the original backend API result from a backend system 114 and the transformed or output result after the original backend API result is processed by the response handler 108.


The caching of results may be performed based at least in part on the API mapping definition. For example, the API mapping definition may include a user-provided configuration setting to specify whether results from a backend API should be cached and, if so, for how long. Thus a developer may indicate that results from a particular backend API may be cached for a period of time (e.g., seconds, minutes, hours, days, weeks, months, years, or any other amount of time). As described above with reference to the response handler 108, when a proxy API request is received and processed, the cache manager 120 may perform a cache check to determine whether cached results are available, valid, and/or otherwise unexpired (e.g., not yet past the cache duration). If cached results are available, the cache manager 120 may access and retrieve them from the endpoint results cache 130 and provide them to the response handler 108.
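

One simple way to honor such a cache duration setting is sketched below with a hypothetical in-memory cache keyed by the proxy API name and its input parameters; it is illustrative only and not a description of the endpoint results cache 130 itself:

    import time

    # Hypothetical in-memory results cache honoring a per-definition cache duration.
    class EndpointResultsCache:
        def __init__(self):
            self._entries = {}   # key -> (stored_at, result)

        def _key(self, definition, inputs):
            return (definition["proxy_api"]["name"], tuple(sorted(inputs.items())))

        def get(self, definition, inputs):
            duration = definition.get("cache", {}).get("duration_seconds", 0)
            entry = self._entries.get(self._key(definition, inputs))
            if entry is None:
                return None
            stored_at, result = entry
            if time.time() - stored_at > duration:   # expired: past the cache duration
                return None
            return result

        def put(self, definition, inputs, result):
            if definition.get("cache", {}).get("enabled"):
                self._entries[self._key(definition, inputs)] = (time.time(), result)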


As referenced above, the cache manager 120 may cache results from backend API calls in a number of ways. For example, in certain embodiments, if a first proxy API call is received multiple times by the endpoint proxy system 132 and a cached result for a first backend API is available, then a copy of the cached result may be provided without the need to send the associated first backend API request to the respective backend system 114 again or multiple times within the cache duration period of time. In another embodiment, a second proxy API call may be received by the endpoint proxy system 132 which maps to the same backend API as the first proxy API call, in which case the response handler 108 and/or cache manager 120 may determine that the same cached result may be provided without the need to send the associated backend API request to the respective backend system 114. In yet another example, the second proxy API call may map to a second backend API which nevertheless returns the same result, or a portion of the same result, as the first backend API. In such a scenario the same cached result (or the relevant portion thereof) may be provided without the need to send the associated second backend API request to the respective backend system 114.


As an illustrative example of the flexible and dynamic caching feature described above, consider a first backend API which provides a result comprising a set of records including a name, a telephone number, and a mailing address for respective individuals; and a second backend API which provides a result comprising a mailing address for a particular individual. If a first proxy API call is received and processed to perform the first backend API, then the set of records may be cached by the cache manager 120. Subsequently, if the first proxy API call is received again, then the cached set of records may be accessed instead of issuing a request to the backend system(s) 114. In addition, if a second proxy API call is received corresponding to the second backend API requesting a mailing address for a particular individual, then the cached set of records may also be accessed to provide an output result to the calling system instead of issuing a request to the backend system(s) 114 to execute the second backend API call. Thus, it may be possible to pre-emptively cache results for a backend API call without the need to call that backend API, for example if the cached results are cumulative of or overlapping with another backend API call.


The performance unit 124 may be configured to manage performance related aspects involving backend API requests sent to backend system(s) 114. For example, the API mapping definition may include a user-provided configuration setting to specify a limit or frequency for how often a backend API may be called. This feature may be of benefit when a backend system 114 is a legacy system or one that is outdated, under-performing or less efficient, or over-burdened with servicing backend API requests. Thus, for example, a user can specify that a certain backend API may be called only a certain number of times over a certain length of time (e.g., 100 times per minute, 10 times per hour, or any other frequency), or only a certain number of times during a particular time period (e.g., to throttle requests received during peak service hours).
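

As one hypothetical illustration of how such a limit could be enforced, a fixed-window counter allows at most a configured number of backend API calls per window; the class and field names below are assumptions, not part of the disclosure:

    import time

    # Hypothetical fixed-window throttle: allow at most max_requests per window_seconds.
    class BackendApiThrottle:
        def __init__(self, max_requests, window_seconds):
            self.max_requests = max_requests
            self.window_seconds = window_seconds
            self.window_start = time.time()
            self.count = 0

        def allow_request(self):
            now = time.time()
            if now - self.window_start >= self.window_seconds:
                self.window_start = now          # start a new window
                self.count = 0
            if self.count < self.max_requests:
                self.count += 1
                return True
            return False                         # deny or delay the backend API call

    # e.g., "100 times per minute":
    throttle = BackendApiThrottle(max_requests=100, window_seconds=60)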


Another performance configuration option that may be provided and utilized in association with the performance unit 124 is a setting to specify whether a metering identifier is required or to be used to track or monitor calling systems' use of the backend APIs. Such metering information may be of benefit to enable visibility into which proxy API and/or backend APIs are called, how often, and by which calling system.


As illustrated in FIG. 1, the endpoint management system 106 includes a manager user console 132, a security manager 122, and a Software Developer Kit (“SDK”) generation service 126. The manager user console 132 may provide one or more user interfaces by which users, such as system administrators and/or developers, can manage, for example, API proxy settings including API mapping definitions, caching options, performance options, and security options. One example endpoint management user interface which may be generated and provided by the manager user console 132 is example user interface 300 illustrated and described herein with reference to FIG. 3. Users may access the manager user console 132 and related user interfaces over the network 104A (e.g., when the network 104A is configured as a public network) or over the network 104B (e.g., when the network 104B is configured as a private network), for example using a user computing device 102. For example, the manager user console 132 may provide a web, mobile, standalone, or other application which may be accessed or installed on a user computing device 102 and configured to communicate with the endpoint management system 106. API mapping definitions created and revised by users via the endpoint management system 106 may be stored in the endpoint/API mapping definitions data source 128. The endpoint management system 106 may be configured to publish, push, or otherwise transmit various of the API mapping definitions to the endpoint proxy system 132, which may use the API definitions for various response handling and related procedures described herein.


The security manager 122 may be configured to manage security and access to backend system(s) 114 and backend APIs. For example, the API mapping definition may include a user-provided configuration setting to specify whether only certain user(s) or group(s) may be allowed to call the backend API. A proxy API request may include an indicator (or security token) associated with a requesting user or group and, based on the API mapping definition, the security manager 122 may determine whether the request should be allowed or denied. If the calling system (e.g., a user computing device 102) provides an indicator or security token that maps to a user or group that is allowed to call the backend API, then the security manager 122 may indicate to the response handler 108 that processing of the request can proceed. If the calling system fails to provide an indicator or security token, or provides an indicator or security token that does not map to a user or group that is allowed to call the backend API, then the security manager 122 may indicate to the response handler 108 that processing of the request should stop (in which case a return indicator may optionally be provided by the endpoint proxy system 132 to indicate that the request was denied due to lack of authorization). In this way, for example, a developer may safeguard or limit access to certain backend APIs.
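

A sketch of this authorization check might resemble the following; the token-to-group directory and the allowed_groups field are hypothetical illustrations rather than elements of the disclosure:

    # Hypothetical authorization check against the API mapping definition.
    def is_request_authorized(definition, security_token, token_directory):
        allowed = set(definition.get("security", {}).get("allowed_groups", []))
        if not allowed:
            return True                     # no restriction configured
        if security_token is None:
            return False                    # missing indicator/security token
        groups = token_directory.get(security_token, set())
        return bool(allowed & groups)       # allow only users/groups named in the definition

    # Example usage with an illustrative token directory:
    token_directory = {"token-abc": {"developers"}, "token-xyz": {"guests"}}
    definition = {"security": {"allowed_groups": ["developers", "support"]}}
    assert is_request_authorized(definition, "token-abc", token_directory)
    assert not is_request_authorized(definition, "token-xyz", token_directory)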


The endpoint management system 106 may also include an SDK generation service 126 to enable users to generate an SDK based on one or more API mapping definitions. This feature may be of particular benefit to users of the endpoint management system 106 who have invested considerable time and effort in mapping a suite of legacy backend APIs to a new set of proxy APIs. An SDK may be generated based on the API mapping definitions and provided to other users (such as system developers who wish to interface with or use a backend system 114 using more modern API protocols) to facilitate development of other applications and services which utilize the backend system(s) 114 via the suite of proxy APIs.
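

By way of illustration only, SDK generation can be pictured as code generation driven by the stored API mapping definitions; the sketch below emits a trivial Python client stub per proxy API, and the base URL, field names, and use of the requests library in the generated stub are assumptions:

    # Hypothetical generator that turns API mapping definitions into client stubs.
    def generate_sdk_stub(definitions):
        lines = ["import requests", "", "BASE_URL = 'https://proxy.example.com'", ""]
        for d in definitions:
            name = d["proxy_api"]["name"]
            params = d["proxy_api"]["input_parameters"]
            lines.append(f"def {name.lower()}({', '.join(params)}):")
            lines.append(f"    payload = {{{', '.join(repr(p) + ': ' + p for p in params)}}}")
            lines.append(f"    return requests.post(f'{{BASE_URL}}/{name}', json=payload).json()")
            lines.append("")
        return "\n".join(lines)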


An example configuration which may be used to implement the various subsystems and units of the endpoint management system 106 and/or endpoint proxy system 132 is described in greater detail below with reference to FIG. 2.


As shown in FIG. 1, the endpoint management system 106 communicates with the endpoint proxy fleet 123A . . . N. For example, a first proxy server may be configured to manage execution of a first proxy API; a second proxy server may be configured to manage execution of a second proxy API; and an nth proxy server may be configured to manage execution of an nth proxy API. Or, one proxy server may be configured to manage execution of multiple proxy APIs which may be related or grouped together, for example based in part on similarity of backend APIs, cumulative or overlapping results obtained from backend system(s) 114 for associated backend APIs, and so on. Each proxy server in the fleet of proxy servers may be configured for performance or efficiency with respect to particular tasks or backend systems. For example, a proxy server may be configured for efficient performance and execution of proxy APIs which involve database queries, while another proxy server may be configured for efficient performance and execution of proxy APIs which involve significant data transformation of backend API results to output results.


In the example of FIG. 1, the endpoint proxy system 132 is illustrated as being connected to the network 104A and the network 104B. In some embodiments, any of the components within the endpoint proxy system 132 can communicate with other components (e.g., the user computing devices 102 and backend system(s) 114) of the computing environment 100 via the network 104A and/or network 104B. In other embodiments, not all components of the endpoint proxy system 132 are capable of communicating with other components of the computing environment 100. In one example, only the response handler 108 may be connected to the network 104A, and other components of the endpoint proxy system 132 may communicate with other components of the computing environment 100 via the response handler 108.


In the example of FIG. 1, the endpoint management system 106 is illustrated as being connected to the network 104A. In some embodiments, any of the components within the endpoint management system 106 can communicate with other components (e.g., the user computing devices 102 and backend system(s) 114) of the computing environment 100 via the network 104A and/or network 104B. In other embodiments, not all components of the endpoint management system 106 are capable of communicating with other components of the computing environment 100. In one example, only the manager user console 132 may be connected to the network 104A, and other components of the endpoint management system 106 may communicate with other components of the computing environment 100 via the manager user console 132.


The backend system(s) 114 may include legacy systems that have protocols that are not compatible with those of the user computing devices 102 or otherwise not easily accessible by the user computing devices 102. The backend system(s) 114 may also include devices that have device-specific protocols (e.g., IoT devices).


In some embodiments, the endpoint proxy system 132 provides the user computing devices 102 with more convenient access to the backend system(s) 114 or other systems or devices. In some of such embodiments, the endpoint proxy system 132 may communicate with an IoT device using device-specific protocols. For example, the IoT device may have a temperature sensor, and the user can request temperature information from the IoT device. In another example, the IoT device may be a thermostat and the user may be able to cause it to set the temperature to a given temperature. Depending on what the device is, it can have different capabilities, and each capability may be managed by some type of API (e.g., a backend API) that exists for manipulating that capability. The endpoint proxy system 132 may perform the necessary protocol translation and/or data manipulation to allow users to seamlessly communicate with such IoT devices without having to worry about device-specific protocols or requirements. For example, the endpoint proxy system 132 may query the IoT devices for data or send commands to the IoT devices. The responses received from those IoT devices may be used to shape the response back to the caller based on the requirements of the caller.



FIG. 2 depicts a general architecture of a computing device 106A that may be implemented to enable various features of the various subsystems and units of the endpoint management system, including but not limited to the response handler 108, the cache manager 120, the security manager 122, the performance unit 124, and the SDK generation service 126. The general architecture of the computing device 106A depicted in FIG. 2 includes an arrangement of computer hardware and software modules that may be used to implement aspects of the present disclosure. The computing device 106A may include many more (or fewer) elements than those shown in FIG. 2. It is not necessary, however, that all of these generally conventional elements be shown in order to provide an enabling disclosure. As illustrated, the computing device 106A includes a processing unit 190, a network interface 192, a computer readable medium drive 194, and an input/output device interface 196, all of which may communicate with one another by way of a communication bus. The network interface 192 may provide connectivity to one or more networks or computing systems. The processing unit 190 may thus receive information and instructions from other computing systems or services via the network 104A or 104B. The processing unit 190 may also communicate to and from the memory 180 and further provide output information for an optional display (not shown) via the input/output device interface 196. The input/output device interface 196 may also accept input from an optional input device (not shown).


The memory 180 may contain computer program instructions (grouped as modules in some embodiments) that the processing unit 190 executes in order to implement one or more aspects of the present disclosure. The memory 180 generally includes RAM, ROM and/or other persistent, auxiliary or non-transitory computer-readable media. The memory 180 may store an operating system 184 that provides computer program instructions for use by the processing unit 190 in the general administration and operation of the response handler 108. The memory 180 may further include computer program instructions and other information for implementing aspects of the present disclosure. For example, in one embodiment, the memory 180 includes a user interface unit 182 that generates user interfaces (and/or instructions therefor) for display upon a computing device, e.g., via a navigation and/or browsing interface such as a browser or application installed on the computing device. For example, the user interface unit 182 may generate one or more endpoint management configuration user interfaces such as the example user interface 300 illustrated and described herein with reference to FIG. 3. Although the example of FIG. 2 is described in the context of user interfaces, it should be appreciated that one or more embodiments described herein may be implemented using, additionally or alternatively, any CLIs, APIs, or other programmatic interfaces. In addition, the memory 180 may include and/or communicate with one or more data repositories (not shown), for example, to access program codes, pattern matching definitions, and/or libraries.


In addition to and/or in combination with the user interface unit 182, the memory 180 may include additional units 186A . . . N that may be executed by the processing unit 190 to provide the various features associated with particular instances of the subsystems and units of the endpoint management system 106 and/or endpoint proxy system 132. For example, the response handler 108 may include a response parsing unit that may be executed to parse responses or results received from backend system(s) 114. The cache manager 120 may include a caching unit that may be executed to determine whether to cache results received from backend system(s) 114, and whether cached results should be used to respond to certain proxy API requests. The security manager 122 may include an authorization unit that may be executed to determine whether a proxy API request has proper security identification and should be allowed to proceed. The performance unit 124 may include a throttle unit that may be executed to determine whether a proxy API request should be allowed to proceed under current demand conditions. The SDK generation service 126 may include an API mapping analysis unit that may be executed to aggregate a set of API mapping definitions into a unified SDK library.


In various embodiments, all or a portion of the additional units 186A . . . N may be implemented by other components of the endpoint management system 106, the endpoint proxy system 132, and/or another computing device. For example, in certain embodiments of the present disclosure, another computing device in communication with the endpoint management system 106 and/or the endpoint proxy system 132 may include several modules or components that operate similarly to the modules and components illustrated as part of the computing device 106A.


Turning now to FIG. 3, an example user interface 300 which provides users with various endpoint management configuration options in association with the endpoint management system 106 will be described. In various embodiments, the user interface 300 shown in FIG. 3 may be presented as a web page, as a mobile application display, as a stand-alone application display, as a popup window or dialog box, as an email message, or by other communication means. In other embodiments, analogous interfaces may be presented using audio or other forms of communication. In an embodiment, the interface shown in FIG. 3 is configured to be interactive and respond to various user interactions. Such user interactions may include clicks with a mouse, typing with a keyboard, touches and/or gestures on a touch screen, voice commands, and/or the like. The display elements shown in user interface 300 are merely for example purposes; more or fewer display elements and user input fields may be presented depending on the embodiment.


As shown, example user interface 300 includes a number of display elements (e.g., descriptions of various API mapping configuration options) and user input fields (e.g., text boxes, check or radio boxes, and so forth). At display element 302 the user interface presents a number of Endpoint API Options, including for example: a system access/connection setting (display element 304) and an associated text input field by which the user may specify a system or connection setting for the backend API; a function name (display element 306) and an associated text input field by which the user may specify the name of the backend API; input parameters (display element 308) and an associated text input field by which the user may specify one or more input parameters for the backend API; and output result parameters (display element 310) and an associated text input field by which the user may specify the type of output(s) provided by the backend API.


At display element 312 the user interface presents a number of Proxy API Options, including for example: a function name (display element 314) and an associated text input field by which the user may specify the name of the proxy API; input parameters (display element 316) and an associated text input field by which the user may specify one or more input parameters for the proxy API; and output parameters (display element 318) and an associated text input field by which the user may specify the type of output(s) provided by the proxy API.


At display element 322 the user interface presents a number of Cache Options, including for example: a cache results setting (display element 324) and an associated radio box selection user input field by which the user may specify whether output results should be cached by the endpoint management system; and a cache duration (display element 326) and an associated text input field by which the user may specify a duration for how long the cached results should remain valid.


At display element 328 the user interface presents a number of Security and User Access Options, including for example: a limit access setting (display element 330) and an associated text box user input field by which the user may specify users and/or groups permitted to call the proxy and/or associated backend API(s); and a metering identification requirement setting (display element 332) and an associated radio box selection field by which the user may specify whether a metering identifier requirement should be enforced or required for execution of the proxy API.


At display element 334 the user interface presents a number of Performance Options, including for example: an API call service limit setting (display element 336) and an associated text box user input field by which the user may specify a maximum number of backend API requests over a certain amount of time.


At display element 338, the user interface presents a Save button to save the API mapping definition and settings, which when selected by the user may cause the endpoint management system to save the API mapping definition in the endpoint/API mapping definitions data source 128. Display element 340 presents a Cancel button to cancel or end the current configuration without saving the API mapping or settings.


Another feature not illustrated in FIG. 3 which may be provided by the endpoint management system 106 may be an indicator that changes to an API mapping definition may result in a breaking change of the API mapping. For example, a change to the name, the input parameters, and/or the output parameters of the proxy API may comprise a breaking change, such that a calling system using the proxy API before the breaking change may no longer be able to use the proxy API without updating to the changed proxy API definition. For example, if the name of the proxy API is changed then a calling system will no longer be able to call the proxy API using the old name; or, if the number of required input parameters (and/or associated attributes) changes, then a calling system may not be able to call the proxy API using fewer inputs than are now required; and so on. In various instances the endpoint management system 106 may be configured to detect when a breaking change to an API proxy definition may occur and provide a warning or indicator to the user. The indicator may optionally include suggestions on how to address the breaking change (e.g., creating a new proxy API instead of changing an existing API; making new or modified input and/or output parameters optional; leaving existing attribute names or identifiers the same and only adding new attribute names or identifiers; and similar types of actions to maintain or preserve an existing API proxy mapping definition).
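

As an illustration of how such a breaking-change check could work, the sketch below compares an existing proxy API definition to a proposed revision; the specific rules and field names are hypothetical:

    # Hypothetical breaking-change detection between two proxy API definitions.
    def find_breaking_changes(old_def, new_def):
        warnings = []
        if old_def["name"] != new_def["name"]:
            warnings.append("Proxy API name changed; existing callers will fail.")
        removed = set(old_def["output_parameters"]) - set(new_def["output_parameters"])
        if removed:
            warnings.append(f"Output parameters removed: {sorted(removed)}")
        added_required = set(new_def.get("required_inputs", [])) - set(old_def.get("required_inputs", []))
        if added_required:
            warnings.append(f"New required inputs: {sorted(added_required)}; "
                            "consider making them optional or creating a new proxy API.")
        return warnings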


Turning now to FIG. 4A, a routine 400A implemented by one or more components of the endpoint management system 106 and/or the endpoint proxy system 132 (e.g., the response handler 108, the cache manager 120, the security manager 122, the performance unit 124, and/or the SDK generation service 126) will be described. Although routine 400A is described with regard to implementation by the endpoint management system 106, one skilled in the relevant art will appreciate that alternative components may implement routine 400A or that one or more of the blocks may be implemented by a different component or in a distributed manner.


At block 402 of the illustrative routine 400A, the endpoint management system 106 receives an API mapping definition for interfacing with a backend or endpoint API and associated backend system. The API mapping definition may be received for example via the user interface 300 illustrated and described herein with reference to FIG. 3. The API mapping definition may be stored for example in the endpoint/API mapping definitions data source 128. The API mapping definition may include a number of configuration options as described throughout this disclosure.


Next, at block 404, the endpoint proxy system 132 receives a request from a calling system to execute program code via an API proxy. The request may be received, for example, from a user computing device 102.


At block 406, the endpoint proxy system 132 determines an API mapping definition based on the received request. The determination may be based, for example, on various factors associated with the received request, including the name of the proxy API, input parameters associated with the proxy API, the calling system or requesting entity, any security or identification information provided with the request (such as an identification token, a metering identifier, or other identifier), and so forth.


At block 408, the endpoint proxy system 132 optionally performs some preprocessing associated with the API mapping definition. For example, in one embodiment the response handler 108 may determine whether the proxy API request has proper security identification and should be allowed to proceed. Or, in the same or another embodiment, the response handler 108 may interact with the cache manager 120 to determine whether a cached result is available and/or should be used to respond to the proxy API request. Or, in the same or another embodiment, the response handler 108 may interact with the performance unit 124 to determine whether the proxy API request should be allowed to proceed under current demand conditions. For example, in response to determining that a certain limit to the number of API requests to allow (as indicated in the API mapping definition) has been exceeded, the performance unit 124 may deny the proxy API request.


At block 410, the endpoint proxy system 132 transforms the API proxy request for processing by a backend system via a backend API as specified in the API mapping definition. For example, the API mapping definition may specify that one or more input parameters associated with the proxy API request are to be mapped, parsed, and/or transformed into one or more input parameters for the backend or endpoint API request. The endpoint proxy system 132 may also determine from the API mapping definition a particular backend system to which the backend API request is to be sent. Once this is complete, the routine 400A can proceed to block 412 of FIG. 4B.


Turning now to FIG. 4B, the routine 400A continues with illustrative routine 400B. At block 412 of routine 400B, the endpoint proxy system 132 sends, to the particular backend system 114, a request to execute program code via the backend API. In some embodiments, the endpoint proxy system 132 may send multiple backend API requests associated with the proxy API request, which may be specified in the API mapping definition. The multiple backend API requests may be sent serially or in parallel depending on the particular configuration in the API mapping definition. For example, one API mapping definition may specify a workflow, wherein a single proxy API request corresponds to and involves multiple backend API requests, some of which may be independent (e.g., can be executed in parallel) and some of which may be dependent (e.g., execution of a second backend API may depend on the outcome results received from the execution of a first backend API, a scenario in which serial processing of the first backend API followed by execution of the second backend API may be necessary).
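

The following sketch illustrates one way independent backend API calls could be issued in parallel while a dependent call waits on an earlier result, using Python's asyncio; the backend client object and API names are hypothetical:

    import asyncio

    # Hypothetical workflow: two independent backend calls run in parallel,
    # and a dependent call runs only after the first result is available.
    async def run_workflow(backend, proxy_inputs):
        result_a, result_b = await asyncio.gather(          # independent: parallel
            backend.call("BackendApiA", proxy_inputs),
            backend.call("BackendApiB", proxy_inputs),
        )
        result_c = await backend.call("BackendApiC", {"from_a": result_a})  # dependent: serial
        return {"a": result_a, "b": result_b, "c": result_c}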


At block 414, the endpoint proxy system 132 receives results of the backend API request (e.g., from execution of the program code) from the particular backend system 114.


Next, at block 416, the endpoint proxy system 132 transforms the received results based at least in part on the API mapping definition. For example, the API mapping definition may specify that one or more result parameters associated with the backend API request are to be mapped, parsed, and/or transformed into one or more output result parameters for the proxy API request. For example, a result received from the backend system 114 may be in one format (e.g., an XML document) which is to be transformed into another format (e.g., a JSON object) according to the API mapping definition.
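

For instance, an XML result returned by a legacy backend system might be reshaped into a JSON output result roughly as follows; the element names are illustrative only:

    import json
    import xml.etree.ElementTree as ET

    # Hypothetical transformation of a backend XML result into a JSON output result.
    backend_xml = "<record><name>Ada</name><mailingAddress>1 Main St</mailingAddress></record>"
    root = ET.fromstring(backend_xml)
    output_result = json.dumps({
        "name": root.findtext("name"),
        "mailing_address": root.findtext("mailingAddress"),
    })
    print(output_result)   # {"name": "Ada", "mailing_address": "1 Main St"}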


At block 418, the endpoint proxy system 132 optionally caches the received results and/or the transformed results. For example, the API mapping definition may include a user-specified configuration option indicating whether the received results and/or the transformed results (or both) should be cached, and if so, for how long. The results can be cached, for example, in the endpoint results cache 130, as discussed above.


At block 420, the endpoint proxy system 132 provides the transformed results to the calling system (e.g., a user computing device 102) in response to the received proxy API request. In some embodiments, the endpoint proxy system 132 may continue sending additional backend API requests associated with the proxy API request, which may be specified in the API mapping definition.


While the routine 400A-400B of FIGS. 4A-4B has been described above with reference to blocks 402-410 and 412-420, the embodiments described herein are not limited as such, and one or more blocks may be omitted, modified, or switched without departing from the spirit of the present disclosure.



FIG. 5 is a block diagram illustrating an embodiment of a networked computing environment 500 including one or more client computing devices (“clients”) 102 in communication with a service provider computer network 501 through communication networks 104A and/or 104B. The networked computing environment 500 may include different components, a greater or fewer number of components, and can be structured differently. For example, there can be more than one service provider computer network 501 so that hosting services or data storage services can be implemented across the multiple service provider computer networks 501 based, for example, on established protocols or agreements. As another example, the service provider computer network 501 may include more or fewer components and some components may communicate with one another through the communication networks 104A and/or 104B.


Illustratively, the client 102 can be utilized by a customer of the service provider computer network 501. In an illustrative embodiment, the client 102 includes necessary hardware and software components for establishing communications with various components of the service provider computer network 501 over the communication networks 104A and/or 104B, such as a wide area network or local area network. For example, the client 102 may be equipped with networking equipment and browser software applications that facilitate communications via the Internet or an intranet. The client 102 may have varied local computing resources such as central processing units and architectures, memory, mass storage, graphics processing units, communication network availability and bandwidth, etc. In one embodiment, the client 102 may have access to or control over a virtual machine instance hosted by the service provider computer network 501. The client 102 may also have access to data storage resources provided by the service provider computer network 501.


With continued reference to FIG. 5, according to one illustrative embodiment, the service provider computer network 501 may include interconnected components such as the endpoint management system 106, the endpoint proxy system 132, one or more host computing devices 510, a storage management service 503, and one or more storage systems 507, having a logical association of one or more data centers associated with one or more service providers. The endpoint management system 106 may be implemented by one or more computing devices. For example, the endpoint management system 106 may be implemented by computing devices that include one or more processors to execute one or more instructions, memory, and communication devices to communicate with one or more clients 102 or other components of the service provider computer network 501. In some embodiments, the endpoint management system 106 is implemented on one or more servers capable of communicating over a network. In other embodiments, the endpoint management system 106 is implemented by one or more virtual machines in a hosted computing environment. Illustratively, the endpoint management system 106 can provide proxy API management and configuration and other relevant functionalities disclosed herein.


The endpoint proxy system 132 may also be implemented by one or more computing devices. In some embodiments, the endpoint proxy system 132 is implemented on one or more computing devices capable of communicating over a network. In other embodiments, the endpoint proxy system 132 is implemented by one or more virtual machine instances in a hosted computing environment. The endpoint proxy system 132 may receive and respond to electronic requests to execute proxy APIs and communicate with backend systems 114 as described herein.


Each host computing device 510 may be a physical computing device hosting one or more virtual machine instances 514. The host computing device 510 may host a virtual machine instance 514 by executing a software virtual machine manager 122, such as a hypervisor, that manages the virtual machine instance 514. The virtual machine instance 514 may execute an instance of an operating system and application software.


In some embodiments, host computing devices 510 may be associated with private network addresses, such as IP addresses, within the service provider computer network 501 such that they may not be directly accessible by clients 102. The virtual machine instances, as facilitated by the virtual machine manager 122 and endpoint management system 106, may be associated with public network addresses that may be made available by a gateway at the edge of the service provider computer network 501. Accordingly, the virtual machine instances 514 may be directly addressable by a client 102 via the public network addresses. One skilled in the relevant art will appreciate that each host computing device 510 would include other physical computing device resources and software to execute multiple virtual machine instances or to dynamically instantiate virtual machine instances. Such instantiations can be based on a specific request, such as a request from a client 102.


The storage management service 503 can be associated with one or more storage systems 507. The storage systems 507 may be servers used for storing data generated or utilized by virtual machine instances or otherwise provided by clients. Illustratively, the storage management service 503 can logically organize and maintain data in data storage volumes. For example, the storage management service 503 may perform or facilitate storage space allocation, input/output operations, metadata management, or other functionalities with respect to volumes.


In some embodiments, a volume may be distributed across multiple storage systems and may be replicated for performance purposes on storage systems in different network areas. The storage systems may be attached to different power sources or cooling systems, may be located in different rooms of a datacenter or in different datacenters, or may be attached to different routers or network switches.


In an illustrative embodiment, host computing devices 510 or storage systems 507 are considered to be logically grouped, regardless of whether the components, or portions of the components, are physically separate. For example, a service provider computer network 501 may maintain separate locations for providing the host and storage components. Additionally, the host computing devices 510 can be geographically distributed in a manner to best serve various demographics of its users. One skilled in the relevant art will appreciate that the service provider computer network 501 can be associated with various additional computing resources, such as additional computing devices for administration of content and resources, and the like.


It will be appreciated by those skilled in the art and others that all of the functions described in this disclosure may be embodied in software executed by one or more physical processors of the disclosed components and mobile communication devices. The software may be persistently stored in any type of non-volatile storage.


Example Embodiments (EEs)

EE 1. A system for providing endpoint management of application programming interfaces, the system comprising: an electronic data store configured to store application programming interface (“API”) mapping definitions; and an endpoint system comprising one or more hardware computing devices executing specific computer-executable instructions, wherein the endpoint system is in communication with the electronic data store, and configured to at least: receive a plurality of API mapping definitions, wherein each respective API mapping definition associates a proxy API with at least one endpoint API; receive a request from a calling system to execute a program code by a particular proxy API; determine, based at least in part on the received request and the particular proxy API, an API mapping definition associated with the particular proxy API; transform the request into an endpoint request for processing by an endpoint API system, wherein the request is transformed based at least in part on the API mapping definition and wherein the endpoint request includes an instruction to execute the program code on the endpoint API system; transmit the endpoint request to the endpoint API system to cause execution of the program code on the endpoint API system; receive an endpoint result from the endpoint API system, wherein the endpoint result is generated from the execution of the program code on the endpoint API system; transform the endpoint result into a proxy result, wherein the endpoint result is transformed based at least in part on the API mapping definition; and provide a return response to the calling system, wherein the return response comprises at least the proxy result.


EE 2. The system of EE 1, wherein the endpoint system is further configured to store a copy of the endpoint result in a second electronic data store configured to store cached results received from respective endpoint API systems.


EE 3. The system of EE 1, wherein the endpoint system is further configured to access a cached copy of the endpoint result from a second electronic data store configured to store cached results received from respective endpoint API systems.


EE 4. A system, comprising: an endpoint proxy system comprising one or more hardware computing devices adapted to execute specific computer-executable instructions and in communication with an electronic data store configured to store application programming interface (“API”) mapping definitions, wherein the endpoint proxy system is configured to at least: receive a request from a user computing device to execute a proxy API; determine, based at least in part on the received request and the proxy API, an API mapping definition associated with the proxy API; transform the request into a backend request for processing by a backend API system, wherein the request is transformed based at least in part on the API mapping definition and wherein the backend request includes an instruction to execute a backend API on the backend API system; transmit the backend request to the backend API system, wherein the backend request is adapted to cause execution of the backend API on the backend API system; receive a backend result from the backend API system, wherein the backend result is generated by execution of the backend API on the backend API system; transform the backend result into an output result, wherein the backend result is transformed based at least in part on the API mapping definition; and provide the output result to the user computing device.


EE 5. The system of EE 4, wherein the endpoint proxy system is further configured to store a copy of the backend result in a second electronic data store configured to store cached results received from respective backend API systems.


EE 6. The system of EE 5, wherein the endpoint proxy system is further configured to store the copy of the backend result in the second electronic data store according to a cache duration setting associated with the API mapping definition.


EE 7. The system of EE 4, wherein the endpoint proxy system is further configured to access a cached copy of the backend result from a second electronic data store configured to store cached results received from respective backend API systems.
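
By way of a non-limiting illustration of the caching behavior described in EEs 2-3 and 5-7, the sketch below keeps cached backend results keyed by backend API and request parameters, and expires entries according to a cache duration setting assumed to come from the API mapping definition. The class and method names are hypothetical, and request parameters are assumed to be JSON-serializable for keying purposes.

```python
# Hypothetical result cache keyed by backend API name and parameters; the cache
# duration setting is assumed to come from the API mapping definition (see EE 6).
import json
import time
from typing import Any, Dict, Optional, Tuple


class ResultCache:
    def __init__(self) -> None:
        self._entries: Dict[str, Tuple[float, Any]] = {}

    @staticmethod
    def _key(backend_name: str, params: Dict[str, Any]) -> str:
        # Canonical key: backend API name plus its parameters in a stable order.
        return backend_name + ":" + json.dumps(params, sort_keys=True)

    def store(self, backend_name: str, params: Dict[str, Any],
              result: Any, cache_duration_s: float) -> None:
        # Keep the result until the configured cache duration elapses.
        self._entries[self._key(backend_name, params)] = (time.time() + cache_duration_s, result)

    def lookup(self, backend_name: str, params: Dict[str, Any]) -> Optional[Any]:
        # Return the cached result only if it has not yet expired.
        entry = self._entries.get(self._key(backend_name, params))
        if entry is None:
            return None
        expires_at, result = entry
        return result if time.time() < expires_at else None
```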


EE 8. The system of EE 4, wherein the API mapping definition comprises at least associated configuration settings for the proxy API and associated configuration settings for the backend API.


EE 9. The system of EE 8, wherein the associated configuration settings for the proxy API comprise a proxy API name, a proxy API input parameter, and a proxy API output result type.


EE 10. The system of EE 8, wherein the associated configuration settings for the backend API comprise a backend API name, a backend API input parameter, and a backend API output result type.
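
As a purely illustrative example of the configuration settings enumerated in EEs 8-10, an API mapping definition might be represented as follows; the field names, API names, and output formats shown are assumptions for illustration, not a required schema.

```python
# Illustrative (not normative) shape of one API mapping definition, combining the
# proxy-side and backend-side configuration settings described in EEs 8-10.
example_mapping_definition = {
    "proxy_api": {
        "name": "getWeather",               # proxy API name
        "input_parameters": ["zip_code"],   # proxy API input parameter(s)
        "output_result_type": "JSON",       # proxy API output result type
    },
    "backend_api": {
        "name": "fetchForecastByPostalCode",   # backend API name
        "input_parameters": ["postalCode"],    # backend API input parameter(s)
        "output_result_type": "XML",           # backend API output result type
    },
    "cache_duration_seconds": 300,  # optional cache setting (see EE 6)
}
```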


EE 11. The system of EE 4, wherein the request to execute a proxy API is received from the user computing device over a first network and the backend result is received from the backend API system over a second network, wherein the second network is separate and distinct from the first network.


EE 12. The system of EE 11, wherein the endpoint proxy system and the backend system are co-located on the second network.


EE 13. A computer-implemented method comprising: as implemented by one or more computing devices configured with specific executable instructions, receiving a request from a calling system to execute a proxy API; determining, based at least in part on the received request and the proxy API, an API mapping definition associated with the proxy API; transforming the request into a backend request for processing by a backend API system, wherein the request is transformed based at least in part on the API mapping definition and wherein the backend request includes an instruction to execute a backend API on the backend API system; sending the backend request to the backend API system, wherein the backend request is adapted to cause execution of the backend API on the backend API system; receiving a backend result from the backend API system, wherein the backend result is generated by execution of the backend API on the backend API system; transforming the backend result into an output result, wherein the backend result is transformed based at least in part on the API mapping definition; and providing the output result to the calling system.


EE 14. The computer-implemented method of EE 13, further comprising storing a copy of the backend result in a second electronic data store configured to store cached results received from respective backend API systems.


EE 15. The computer-implemented method of EE 13, further comprising: receiving a second request to execute a second proxy API; and determining, based at least in part on the received second request and the second proxy API, a second API mapping definition associated with the second proxy API.


EE 16. The computer-implemented method of EE 15, further comprising: determining, based on the API mapping definition, that the second proxy API is mapped to the backend API; and accessing a cached copy of the backend result from a second electronic data store configured to store cached results received from respective backend API systems.


EE 17. The computer-implemented method of EE 15, further comprising: determining, based on the API mapping definition, that the second proxy API is mapped to a second backend API, wherein the second backend API is configured to return a second backend result that is a subset of the backend result generated by execution of the backend API; accessing a cached copy of the backend result from a second electronic data store configured to store cached results received from respective backend API systems; and providing a transformed output result of the cached copy to the calling system.
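
By way of a non-limiting illustration of EEs 15-17, the sketch below serves a second proxy API from a cached backend result when that proxy API is mapped to a backend API whose result is a subset of the cached result; the function name and field handling are hypothetical.

```python
# Hypothetical illustration of EE 17: a second proxy API whose result is a subset of
# a cached backend result, so no new backend call is needed.
from typing import Any, Dict, Iterable, Optional


def serve_subset_from_cache(
    cached_backend_result: Optional[Dict[str, Any]],
    subset_fields: Iterable[str],
) -> Optional[Dict[str, Any]]:
    """Return only the fields the second proxy API exposes, if a cached result exists."""
    if cached_backend_result is None:
        return None  # caller would fall back to invoking the backend API
    return {name: cached_backend_result[name]
            for name in subset_fields if name in cached_backend_result}
```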


EE 18. A computer-readable, non-transitory storage medium storing computer executable instructions that, when executed by one or more computing devices, configure the one or more computing devices to perform operations comprising: receiving a request from a calling system to execute a proxy API; determining, based at least in part on the received request and the proxy API, an API mapping definition associated with the proxy API; transforming the request into a backend request for processing by a backend API system, wherein the request is transformed based at least in part on the API mapping definition and wherein the backend request includes an instruction to execute a backend API on the backend API system; sending the backend request to the backend API system, wherein the backend request is adapted to cause execution of the backend API on the backend API system; receiving a backend result from the backend API system, wherein the backend result is generated by execution of the backend API on the backend API system; transforming the backend result into an output result, wherein the backend result is transformed based at least in part on the API mapping definition; and providing the output result to the calling system.


EE 19. The computer-readable, non-transitory storage medium of EE 18, wherein the operations further comprise storing a copy of the backend result in a second electronic data store configured to store cached results received from respective backend API systems.


EE 20. The computer-readable, non-transitory storage medium of EE 18, wherein the operations further comprise determining, based at least in part on an authorization setting associated with the API mapping definition, that a user identifier associated with the request to execute the proxy API is included in the API mapping definition as an authorized user or group.
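
A minimal sketch of the authorization check described in EE 20, assuming the API mapping definition carries sets of authorized users and groups; the parameter names are illustrative only.

```python
# Hypothetical authorization check against an authorization setting stored with the
# API mapping definition, as described in EE 20.
from typing import Iterable, Set


def is_authorized(user_id: str, user_groups: Iterable[str],
                  authorized_users: Set[str], authorized_groups: Set[str]) -> bool:
    """Allow the proxy API call if the caller is an authorized user or in an authorized group."""
    return user_id in authorized_users or any(g in authorized_groups for g in user_groups)
```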


EE 21. The computer-readable, non-transitory storage medium of EE 18, wherein the operations further comprise determining that the request to execute the proxy API is allowed to proceed based at least in part on comparison of a performance throttling setting associated with the API mapping definition to a current API request workload associated with the backend API system.
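
A minimal sketch of the performance throttling check described in EE 21, assuming the throttling setting is expressed as a maximum number of concurrent backend requests; real deployments might instead use rate- or token-based limits.

```python
# Hypothetical throttling decision comparing a per-mapping throttle setting against the
# backend API system's current request workload, as described in EE 21.
def may_proceed(current_backend_requests_in_flight: int, max_concurrent_requests: int) -> bool:
    """Allow the proxy API call only while the backend is below its configured limit."""
    return current_backend_requests_in_flight < max_concurrent_requests
```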


EE 22. The computer-readable, non-transitory storage medium of EE 18, wherein the API mapping definition comprises at least associated configuration settings for the proxy API and associated configuration settings for the backend API.


EE 23. The computer-readable, non-transitory storage medium of EE 18, wherein the proxy API is associated with a first protocol and the backend API is associated with a second protocol.
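
By way of a non-limiting illustration of EE 23, the proxy API is assumed here to accept JSON-style parameters while the backend API is assumed to expect an XML payload; both formats are assumptions, and the sketch merely shows one way such a protocol translation could be performed.

```python
# Hypothetical protocol translation (EE 23): JSON-style proxy parameters are rewritten
# as an XML payload for a backend API that speaks a different protocol.
import xml.etree.ElementTree as ET
from typing import Any, Dict


def json_params_to_xml(operation: str, params: Dict[str, Any]) -> str:
    """Build an XML request body from a flat dictionary of proxy API parameters."""
    root = ET.Element(operation)
    for name, value in params.items():
        child = ET.SubElement(root, name)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")
```

For example, json_params_to_xml("getWeather", {"zip_code": "98101"}) would produce <getWeather><zip_code>98101</zip_code></getWeather>.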


OTHER CONSIDERATIONS

Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.


Any process descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, or executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art. It will further be appreciated that the data and/or components described above may be stored on a computer-readable medium and loaded into memory of the computing device using a drive mechanism associated with a computer readable storage medium storing the computer executable components such as a CD-ROM, DVD-ROM, or network interface. Further, the components and/or data can be included in a single device or distributed in any manner. Accordingly, general purpose computing devices may be configured to implement the processes, algorithms, and methodology of the present disclosure with the processing and/or execution of the various data and/or components described above.


It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.

Claims
  • 1. A system, comprising: one or more processors; and one or more memories, the one or more memories having stored thereon instructions, which, when executed by the one or more processors, configure the one or more processors to: maintain application programming interface (API) mapping definitions that map a plurality of proxy APIs to a plurality of backend APIs associated with a backend system; receive an API mapping request to create a mapping between a first proxy API and a first backend API, the API mapping request including at least (i) one or more proxy API parameters associated with the first proxy API and (ii) one or more backend API parameters associated with the first backend API; update the API mapping definitions such that the API mapping definitions include the mapping between the first proxy API and the first backend API, the updated API mapping definitions configured to cause execution of the first backend API in response to requests to execute the first proxy API; receive a proxy API execution request to execute the first proxy API; in response to the proxy API execution request to execute the first proxy API, determine, based on the updated API mapping definitions, that the first proxy API is mapped to the first backend API; and transmit to the backend system a backend API execution request to execute the first backend API on the backend system.
  • 2. The system of claim 1, wherein the one or more proxy API parameters include one or more of a function name associated with the first proxy API, an input parameter associated with the first proxy API, or an output parameter associated with the first proxy API.
  • 3. The system of claim 1, wherein the one or more proxy API parameters specify a proxy output format in which a result of executing the first proxy API is to be provided.
  • 4. The system of claim 1, wherein the one or more backend API parameters include one or more of a function name associated with the first backend API, an input parameter associated with the first backend API, or an output parameter associated with the first backend API.
  • 5. The system of claim 1, wherein the one or more backend API parameters specify a backend output format in which a result of executing the first backend API on the backend system is to be provided.
  • 6. The system of claim 1, wherein the API mapping request further includes a cache parameter for setting a duration for maintaining a cached result associated with executing at least one of the first proxy API or the first backend API.
  • 7. The system of claim 1, wherein the API mapping request further includes a security parameter for limiting access to at least one of the first proxy API or the first backend API.
  • 8. The system of claim 1, wherein the instructions, when executed by the one or more processors, further configure the one or more processors to, in response to a request to update the API mapping definitions, output a warning based on a determination that the requested update would disrupt an existing mapping in the API mapping definitions between a proxy API and a backend API.
  • 9. The system of claim 1, wherein the instructions, when executed by the one or more processors, further configure the one or more processors to provide a user interface to a user computing device in network communication with the system, wherein the user interface allows a user of the user computing device to specify the one or more proxy API parameters and the one or more backend API parameters.
  • 10. The system of claim 9, wherein the user interface is presented in one of a web page, a mobile application display, a stand-alone application display, a popup window or dialog box, or an email message.
  • 11. A computer-implemented method, as implemented by one or more computing devices configured with specific executable instructions, comprising: maintaining application programming interface (API) mapping definitions that map a plurality of proxy APIs to a plurality of backend APIs associated with a backend system; receiving an API mapping request to create a mapping between a first proxy API and a first backend API, the API mapping request including at least (i) one or more proxy API parameters associated with the first proxy API and (ii) one or more backend API parameters associated with the first backend API; updating the API mapping definitions such that the API mapping definitions include the mapping between the first proxy API and the first backend API, the updated API mapping definitions configured to cause execution of the first backend API in response to requests to execute the first proxy API; receiving a proxy API execution request to execute the first proxy API; in response to the proxy API execution request to execute the first proxy API, determining, based on the updated API mapping definitions, that the first proxy API is mapped to the first backend API; and transmitting to the backend system a backend API execution request to execute the first backend API on the backend system.
  • 12. The computer-implemented method of claim 11, wherein the one or more proxy API parameters include one or more of a function name associated with the first proxy API, an input parameter associated with the first proxy API, or an output parameter associated with the first proxy API, and the one or more backend API parameters include one or more of a function name associated with the first backend API, an input parameter associated with the first backend API, or an output parameter associated with the first backend API.
  • 13. The computer-implemented method of claim 11, wherein the one or more proxy API parameters specify a proxy output format in which a result of executing the first proxy API is to be provided, and the one or more backend API parameters specify a backend output format in which a result of executing the first backend API on the backend system is to be provided.
  • 14. The computer-implemented method of claim 11, wherein the API mapping request further includes a cache parameter for setting a duration for maintaining a cached result associated with executing at least one of the first proxy API or the first backend API.
  • 15. The computer-implemented method of claim 11, wherein the API mapping request further includes a security parameter for limiting access to at least one of the first proxy API or the first backend API.
  • 16. Non-transitory physical computer storage storing computer-executable instructions, which, when executed by one or more computing devices, configure the one or more computing devices to: maintain application programming interface (API) mapping definitions that map a plurality of proxy APIs to a plurality of backend APIs associated with a backend system; receive an API mapping request to create a mapping between a first proxy API and a first backend API, the API mapping request including at least (i) one or more proxy API parameters associated with the first proxy API and (ii) one or more backend API parameters associated with the first backend API; update the API mapping definitions such that the API mapping definitions include the mapping between the first proxy API and the first backend API, the updated API mapping definitions configured to cause execution of the first backend API in response to requests to execute the first proxy API; receive a proxy API execution request to execute the first proxy API; in response to the proxy API execution request to execute the first proxy API, determine, based on the updated API mapping definitions, that the first proxy API is mapped to the first backend API; and transmit to the backend system a backend API execution request to execute the first backend API on the backend system.
  • 17. The non-transitory physical computer storage of claim 16, wherein the one or more proxy API parameters include one or more of a function name associated with the first proxy API, an input parameter associated with the first proxy API, or an output parameter associated with the first proxy API, and the one or more backend API parameters include one or more of a function name associated with the first backend API, an input parameter associated with the first backend API, or an output parameter associated with the first backend API.
  • 18. The non-transitory physical computer storage of claim 16, wherein the one or more proxy API parameters specify a proxy output format in which a result of executing the first proxy API is to be provided, and the one or more backend API parameters specify a backend output format in which a result of executing the first backend API on the backend system is to be provided.
  • 19. The non-transitory physical computer storage of claim 16, wherein the API mapping request further includes a cache parameter for setting a duration for maintaining a cached result associated with executing at least one of the first proxy API or the first backend API.
  • 20. The non-transitory physical computer storage of claim 16, wherein the API mapping request further includes a security parameter for limiting access to at least one of the first proxy API or the first backend API.
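
By way of a non-limiting illustration of the mapping-creation operations recited in claims 1 and 8 above, the following sketch applies an API mapping request to a set of API mapping definitions and emits a warning when an existing mapping would be disrupted; all names and the in-memory representation are hypothetical.

```python
# Hypothetical handling of an API mapping request: create or update a proxy-to-backend
# mapping and warn when an existing mapping in the definitions would be disrupted.
from typing import Any, Dict, List, Tuple


def apply_mapping_request(
    mapping_definitions: Dict[str, Dict[str, Any]],
    proxy_api_name: str,
    proxy_api_params: Dict[str, Any],
    backend_api_name: str,
    backend_api_params: Dict[str, Any],
) -> Tuple[Dict[str, Dict[str, Any]], List[str]]:
    """Return the updated definitions plus any warnings about disrupted mappings."""
    warnings: List[str] = []
    if proxy_api_name in mapping_definitions:
        warnings.append(
            f"Updating '{proxy_api_name}' will disrupt its existing mapping to "
            f"'{mapping_definitions[proxy_api_name]['backend_api']}'."
        )
    mapping_definitions[proxy_api_name] = {
        "proxy_params": proxy_api_params,
        "backend_api": backend_api_name,
        "backend_params": backend_api_params,
    }
    return mapping_definitions, warnings
```
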
CROSS-REFERENCE TO OTHER APPLICATIONS

This application is a continuation of U.S. application Ser. No. 14/682,033, filed Apr. 8, 2015 and titled “ENDPOINT MANAGEMENT SYSTEM PROVIDING AN APPLICATION PROGRAMMING INTERFACE PROXY SERVICE,” the disclosure of which is hereby incorporated by reference in its entirety. The present application's Applicant previously filed the following U.S. patent applications:

Application Ser. No.  Title
14/502,992            THREADING AS A SERVICE
14/682,046            ENDPOINT MANAGEMENT SYSTEM AND VIRTUAL COMPUTE SYSTEM

The disclosures of the above-referenced applications are hereby incorporated by reference in their entireties.

Related Publications (1)
Number Date Country
20180309819 A1 Oct 2018 US
Continuations (1)
Number Date Country
Parent 14682033 Apr 2015 US
Child 15934733 US