An application programming interface (API) management platform may be a platform for building, managing, and/or securing APIs. The API management platform may offer API proxies to create a consistent, reliable interface to a backend service. An API proxy may be associated with a layer in between the backend service and internal/external clients that want to use the backend service. The API management platform may provide an array of policies that allow security, traffic management, data mediation, and other features to be added to the API proxy.
The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.
An API may be an interface that allows one application to consume capabilities or data from another application. By defining stable, simplified entry points to application logic and data, APIs may enable developers to easily access and reuse application logic built by other developers. Since applications that consume APIs may be sensitive to changes, APIs may provide a level of assurance that, over time, the APIs may only change in a predictable manner. An API proxy may decouple an application-facing API from a backend service, which may shield those applications from backend code changes. When backend changes are made to the backend service, applications may continue to call the same API without any interruption.
An API management platform may be a platform for building, managing, and/or securing APIs. The API management platform may offer API proxies to create a consistent, reliable interface to a backend service. The API management platform may provide the API proxy to provide granular control over security, rate limiting, quotas, and/or analytics. The API proxy may be associated with a layer in between the backend service and internal/external clients that want to use the backend service. The API management platform may provide an array of policies that allow security, traffic management, data mediation, and other features to be added to the API proxy. The API management platform may allow custom code, conditional logic, fault handling, rate limiting, caching, and/or other actions. Since policies and/or actions may be implemented in the API proxy on the API management platform, the backend service may remain unchanged. The API management platform may be designed to aid API producers and API consumers. An API producer may build and manage an API that exposes their backend service. An API consumer may use data provided by the API in their client application.
In an API management platform, different API proxies may be defined for Internet facing users (vendors) with different client certifications, different authentication, and/or different authorization on endpoints. In existing API management platforms, e.g., APIGEE, a single API proxy may support only a single user, which may result in increased development, maintenance, and testing times since each proxy needs to be individually addressed. The existing API management platforms may require the creation of a separate entry point for each client and/or certification, and each of those entry points may then need to be managed separately. Such an approach may also result in increased infrastructure and computation resources, which may degrade overall system performance.
In some implementations, in an API management platform, a unified (common) API proxy may be defined for Internet facing users with different client certifications, different authentication, and/or different authorization on endpoints. The unified API proxy may be built to support multiple users with different client certifications, different sets of registered endpoints, and/or different API keys and authentication tokens (e.g., open authentication (OAuth) 2.0 tokens). The API keys and authentication tokens may be derived from a consumer key/secret pair.
In some implementations, by using the unified API proxy, multiple users may be supported, which may save development, maintenance, and testing times. The unified API proxy may save infrastructure and computation resources. The unified API proxy may allow addition of support for new users, removal of support for existing users, addition of endpoints for existing users, and/or updating of endpoints for existing users. Further, the unified API proxy may reduce time needed to bring new changes to production.
As shown in
As indicated above,
As shown by reference number 202, the API management platform 106 may receive, from the UE 102, a request to a unified API proxy that supports multiple endpoints for multiple users. The unified API proxy may simultaneously support multiple users with different certifications, different sets of registered endpoints, different API keys, and/or different authentication tokens. The request may be associated with a user of the multiple users. The request may be associated with a request uniform resource locator (URL) having a standardized URL pattern, and the request URL may indicate a domain name, a context path, a user name, an environment, a service name, and/or an endpoint.
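For illustration only, the following minimal sketch shows how a client request to such a unified API proxy might be constructed. The domain, environment value, credential values, and the "apikey" header name are hypothetical placeholders and not part of any particular platform; the two header names relating to two-way SSL information correspond to the headers described further below.

```python
# A minimal sketch of a client request to the unified API proxy. The domain,
# environment value, credential values, and the "apikey" header name are
# hypothetical placeholders; the URL segments follow the standardized pattern
# described herein.
from urllib.parse import urlunsplit

domain = "api.example.com"            # hypothetical domain
path_segments = [
    "apis/vendors",                   # context path (from the example herein)
    "15gifts",                        # user (vendor) name
    "prod",                           # environment (hypothetical value)
    "customerProfile",                # service name
    "customer",                       # endpoint
]
request_url = urlunsplit(("https", domain, "/" + "/".join(path_segments), "", ""))

headers = {
    "apikey": "example-api-key",                        # API key for the user (assumed header name)
    "X-SUBJECT-NAME": "CN=client.15gifts.example.com",  # two-way SSL subject name (described below)
    "X-CLIENT-CERT-STATUS": "SUCCESS",                  # certificate status (described below)
}

print(request_url)
# https://api.example.com/apis/vendors/15gifts/prod/customerProfile/customer
```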
As shown by reference number 204, the API management platform 106 may perform an evaluation of the request based on a comparison of information indicated in the request and information associated with the user. The API management platform 106 may validate the request or invalidate the request based on the comparison. The information associated with the user may be indicated in a KVM for registered users. The API management platform 106 may accept or allow the request based on a validation of the request. Alternatively, the API management platform 106 may block the request based on an invalidation of the request. In other words, the API management platform 106 may compare the information indicated in the request and the information associated with the user (e.g., information indicated in the KVM regarding the user), and based on the comparison, the API management platform 106 may either validate the request or invalidate the request. An allowed request may be provided to the backend service 108, whereas a blocked request may not be provided to the backend service 108. In some cases, the API management platform 106 may generate, based on the validation of the request, a target endpoint URL that is derived dynamically from the request URL, where the target endpoint URL may be for the backend service 108.
In some implementations, the request may indicate an API key associated with the user or an authentication token associated with the user. The API management platform 106, when performing the evaluation, may validate the API key and/or the authentication token based on the comparison. Alternatively, the API management platform 106 may invalidate the API key and/or the authentication token based on the comparison. In some implementations, the request may include a header that indicates two-way secure socket layer (SSL) information associated with the user. The API management platform 106, when performing the evaluation, may validate the two-way SSL information based on the comparison. Alternatively, the API management platform 106 may invalidate the two-way SSL information based on the comparison. The two-way SSL information may indicate a common name (CN) whitelisted domain (WhiteList CN) associated with the user and a certification status (CertStatus) associated with the user. In some implementations, the request may indicate one or more registered endpoints associated with the user. Registered endpoint information may be taken from a URL associated with the request. The API management platform 106, when performing the evaluation, may validate the one or more registered endpoints based on the comparison. Alternatively, the API management platform 106 may invalidate the one or more registered endpoints based on the comparison.
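For illustration only, a minimal sketch of one possible evaluation routine is provided below. The function name, dictionary fields, and data layout are assumptions and not a definitive implementation of the API management platform 106; the sketch merely mirrors the comparisons described above (API key or authentication token, two-way SSL information, and registered endpoints).

```python
# A minimal sketch of the evaluation of a request against information stored for
# the user. All field names and structures are assumptions for illustration.
def evaluate_request(request, user_info):
    """Return an (allowed, reason) tuple for an already-parsed request."""
    # API key / authentication token validation.
    if request["api_key"] not in user_info["valid_api_keys"]:
        return False, "invalid API key or authentication token"

    # Two-way SSL validation: CN and certificate status taken from the request
    # headers must match the values stored for the user.
    if request["cn"] != user_info["whitelist_cn"]:
        return False, "two-way SSL CN mismatch"
    if request["cert_status"] != user_info["cert_status"]:
        return False, "two-way SSL certificate status mismatch"

    # Endpoint validation: the endpoint taken from the request URL must be
    # registered for the user.
    if request["endpoint_path"] not in user_info["registered_endpoints"]:
        return False, "endpoint not registered for this user"

    return True, "allowed"
```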
In some implementations, in the API management platform 106, the unified API proxy may be developed to support the multiple users with different client certifications, different sets of registered endpoints, and/or different authentication tokens. In some implementations, the unified API proxy may be achieved by creating multiple API keys and authentication tokens. One API key and one authentication token may be created for each registered user. The API keys and the authentication tokens themselves may be different for registered users, but all of the API keys and the authentication tokens may be valid for the unified API proxy. A plurality of users (e.g., all users) with a unique set of API keys and authentication tokens may pass an API key validation and an authentication token validation of the unified API proxy.
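For illustration only, the following minimal sketch shows one way to represent a per-user credential registry; the specific keys, tokens, and layout are hypothetical placeholders, but the sketch reflects the idea that each registered user holds its own API key and authentication token while all of them are accepted by the single unified API proxy.

```python
# A minimal sketch: one API key and one authentication (e.g., OAuth 2.0) token per
# registered user, all of which are valid for the same unified API proxy.
# Values and layout are hypothetical placeholders.
credentials_by_user = {
    "vendor1": {"api_key": "key-vendor1", "oauth_token": "token-vendor1"},
    "vendor2": {"api_key": "key-vendor2", "oauth_token": "token-vendor2"},
}

def credentials_valid_for_unified_proxy(user, api_key, token):
    # The unified proxy accepts any registered user's own key/token pair.
    entry = credentials_by_user.get(user)
    return (entry is not None
            and entry["api_key"] == api_key
            and entry["oauth_token"] == token)
```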
In some implementations, the unified API proxy may be achieved by standardizing a URL pattern with a user name in the path immediately after a context path, followed by an environment, a service name, and an endpoint name. The URL pattern may be standardized in accordance with https://<domain>/<contextpath>/<vendor-name>/<environment>/<serviceName>/<endpoint>. The URL pattern may be standardized for a request URL. As an example, a request URL for a user name of “15gifts”, a service name of “customerProfile”, and an endpoint name of “customer” may be “https://<domain>/apis/vendors/15gifts/customerProfile/customer” in accordance with the URL pattern (with the environment segment omitted), where “/apis/vendors/” may be the context path.
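For illustration only, a minimal sketch of parsing a request URL that follows the standardized URL pattern is provided below. The regular expression, the assumed presence of the environment segment, and the example values are illustrative assumptions.

```python
# A minimal sketch of extracting the vendor name, environment, service name, and
# endpoint from a request URL that follows the standardized pattern
# https://<domain>/<contextpath>/<vendor-name>/<environment>/<serviceName>/<endpoint>.
import re

URL_PATTERN = re.compile(
    r"^https://(?P<domain>[^/]+)/apis/vendors/"      # domain and context path
    r"(?P<vendor>[^/]+)/(?P<environment>[^/]+)/"
    r"(?P<service>[^/]+)/(?P<endpoint>[^/]+)$"
)

match = URL_PATTERN.match(
    "https://api.example.com/apis/vendors/15gifts/prod/customerProfile/customer"
)
if match:
    print(match.group("vendor"), match.group("service"), match.group("endpoint"))
    # 15gifts customerProfile customer
```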
In some implementations, the unified API proxy may be achieved by using a sequence of multiple shared flows. A first shared flow may be associated with an authentication validation (shared-flow-validate-oauth). A second shared flow may be associated with a two-way SSL validation (shared-flow-validate-2wayssl). A third shared flow may be associated with an endpoint validation (shared-flow-validate-endpoint). Based on the user name, the service name, and the endpoint name, the two-way SSL validation and the endpoint validation may be performed and a target URL may be generated. After all of the shared flows validate a request successfully, a target endpoint URL may be dynamically derived from the request URL for underlying services. An example of the first shared flow, the second shared flow, and the third shared flow is shown in
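For illustration only, a minimal sketch of chaining the three shared flows is provided below. The function names mirror the shared flow names above, but the internal logic, the exception type, and the target URL derivation are hypothetical.

```python
# A minimal sketch of chaining the three shared flows. Each flow either raises a
# (hypothetical) ValidationError to block the request or lets it continue; when
# all flows pass, a target endpoint URL is derived dynamically from the request.
class ValidationError(Exception):
    pass

def shared_flow_validate_oauth(request):
    # Placeholder for the API key / authentication token validation (first shared flow).
    if not request.get("api_key_valid") or not request.get("token_valid"):
        raise ValidationError("invalid API key or authentication token")

def shared_flow_validate_2wayssl(request):
    # Placeholder for the two-way SSL validation (second shared flow).
    if not request.get("ssl_valid"):
        raise ValidationError("two-way SSL information does not match")

def shared_flow_validate_endpoint(request):
    # Placeholder for the endpoint validation (third shared flow).
    if not request.get("endpoint_registered"):
        raise ValidationError("endpoint not registered for this user")

def run_unified_proxy(request):
    for flow in (shared_flow_validate_oauth,
                 shared_flow_validate_2wayssl,
                 shared_flow_validate_endpoint):
        flow(request)  # raises ValidationError to block the request
    # Hypothetical derivation of the target endpoint URL from the request URL.
    return request["request_url"].replace("/apis/vendors", "/backend")
```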
In some implementations, the first shared flow may serve to validate API keys and authentication tokens of registered users. A request with an invalid API key or an invalid authentication token may be blocked.
In some implementations, the second shared flow may include a KVM, where one section of the KVM may be for each registered user. When no entry is present for a user in the KVM, that user may not be registered. Each registered user's section may indicate a WhiteList CN and a CertStatus. A KVM for a registered user may be based on a particular structure (e.g., as shown in
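For illustration only, a minimal sketch of such a KVM-like structure is provided below; the key names and values are hypothetical placeholders consistent with the WhiteList CN and CertStatus attributes described above.

```python
# A minimal sketch of a KVM-like mapping with one section per registered user.
# Key names and values are hypothetical placeholders.
kvm_registered_users = {
    "vendor1": {"WhiteListCN": "client.vendor1.example.com", "CertStatus": "SUCCESS"},
    "vendor2": {"WhiteListCN": "client.vendor2.example.com", "CertStatus": "SUCCESS"},
}

def is_registered(user):
    # A user with no entry in the KVM is treated as not registered.
    return user in kvm_registered_users
```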
In some implementations, a CN attribute from a header (e.g., header X-SUBJECT-NAME) may be compared with the WhiteList CN indicated in the KVM for the corresponding user. The two-way SSL validation may be completed when the CN attribute matches with the WhiteList CN indicated in the KVM. Further, a CertStatus from a header (e.g., header X-CLIENT-CERT-STATUS) may be compared with the CertStatus indicated in the KVM for the corresponding user. The two-way SSL validation may be completed when the CertStatus from the header matches with the CertStatus indicated in the KVM. A single API proxy using the second shared flow may handle two-way SSL validation for each registered user.
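For illustration only, a minimal sketch of this comparison is provided below. The header names follow the description above, while the helper function, the KVM layout, and the use of a substring match for the CN attribute are assumptions.

```python
# A minimal sketch of the two-way SSL validation in the second shared flow: the CN
# and certificate status taken from request headers are compared with the values
# stored in the KVM for the user. Structures and matching rules are hypothetical.
def validate_two_way_ssl(headers, user, kvm_registered_users):
    entry = kvm_registered_users.get(user)
    if entry is None:
        return False  # user is not registered
    cn_from_header = headers.get("X-SUBJECT-NAME", "")
    cert_status_from_header = headers.get("X-CLIENT-CERT-STATUS", "")
    # A simple substring match is used here for illustration, since the header may
    # carry a full subject name (e.g., "CN=...") containing the whitelisted CN.
    return (entry["WhiteListCN"] in cn_from_header
            and cert_status_from_header == entry["CertStatus"])
```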
In some implementations, the third shared flow may allow only registered endpoints for corresponding users. Endpoint registration may be achieved using a KVM for registered endpoints (e.g., as shown in
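For illustration only, a minimal sketch of such an endpoint check is provided below; the KVM layout and the service/endpoint path format are hypothetical assumptions.

```python
# A minimal sketch of the endpoint validation in the third shared flow: the
# service/endpoint pair taken from the request URL must appear in the KVM of
# registered endpoints for the user. Layout and values are hypothetical.
kvm_registered_endpoints = {
    "vendor1": {"service1/endpoint1", "service1/endpoint2"},
    "vendor2": {"service3/endpoint1"},
}

def validate_endpoint(user, service_name, endpoint_name):
    requested = f"{service_name}/{endpoint_name}"
    return requested in kvm_registered_endpoints.get(user, set())
```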
As shown by reference number 206, the API management platform 106 may transmit, to the backend service 108, the request based on a validation of the request. The API management platform 106 may transmit the request to the backend service 108 when the request is accepted or allowed. The API management platform 106 may not transmit the request to the backend service 108 when the request is invalidated. In some implementations, the API management platform 106 may receive, from the backend service 108, a response. The response may be based on the request. The API management platform 106 may forward the response to the UE 102. Thus, communications between the UE 102 and the backend service 108 may be processed by the API management platform 106.
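For illustration only, a minimal sketch of forwarding a validated request to the backend service 108 and relaying the response is provided below. The target URL derivation, the choice of the third-party requests library, and the function signature are assumptions and not a description of any particular platform's routing.

```python
# A minimal sketch of forwarding a validated request to the backend service and
# relaying the response back to the client. The target URL and library usage are
# hypothetical.
import requests  # third-party HTTP client, used here only for illustration

def forward_to_backend(target_endpoint_url, headers, params=None):
    # Only validated (allowed) requests reach this point; blocked requests are
    # never forwarded to the backend service.
    backend_response = requests.get(
        target_endpoint_url, headers=headers, params=params, timeout=10
    )
    # The response from the backend service is returned (forwarded) to the client.
    return backend_response.status_code, backend_response.content
```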
In some implementations, by implementing the unified API proxy for the multiple users, instead of having separate API proxies for separate users, infrastructure and computation resources may be saved. Further, by using the unified API proxy, users may be added and/or removed with less overhead, which may reduce time needed to bring new changes to production.
As indicated above,
As shown in
As indicated above,
As shown in
As indicated above,
As shown in
As indicated above,
In some implementations, the first shared flow may block a request which does not have a valid API key and/or a valid authentication token. The second shared flow may block a request which does not have matching two-way SSL information of a user. The third shared flow may block a request which does not have a matching endpoint for the user. The first shared flow, the second shared flow, and the third shared flow may work together in a single unified API proxy in order to block an unauthenticated, unauthorized, and/or unregistered request.
In some implementations, a request with an invalid API key and an invalid authentication token may be blocked. In some implementations, a request with a valid API key but for an unregistered user may be blocked. In some implementations, a request with a valid API key for a registered user but with invalid two-way SSL information may be blocked. In some implementations, a request with a valid API key for a registered user and with valid two-way SSL information but with an unregistered endpoint may be blocked.
In some implementations, a first user may call an endpoint registered for a second user using a URL of “https://<domain>/apis/vendors/vendor2/service3/endpoint1”. The first user may be able to pass through the first shared flow because a request from the first user may have a valid API key and a valid authentication token, but the request may be blocked during the second shared flow because two-way SSL information associated with the first user may not be the same as two-way SSL information associated with the second user (e.g., a certificate of the first user does not match a certificate of the second user).
In some implementations, a first user may call an endpoint registered for a second user using a URL of “https://<domain>/apis/vendors/vendor1/service3/endpoint1”. In this case, the request may be blocked during the third shared flow because service3/endpoint1 may not be registered as an endpoint of the first user. In other words, the first user may not be authorized to call service3/endpoint1. In some implementations, a request with a valid API key for a registered user and with valid two-way SSL information and with a registered endpoint may be allowed. In some implementations, when the request is accepted, the registered endpoint may then make a request to a backend service using information from the request.
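For illustration only, the blocking and allowing outcomes described above may be summarized as in the following minimal sketch, in which the evaluate() helper and the scenario tuples are hypothetical placeholders.

```python
# A minimal sketch summarizing the outcomes described above; the evaluate() helper
# and the scenario tuples are hypothetical placeholders.
scenarios = [
    # (valid credentials, registered user, SSL matches, endpoint registered) -> expected outcome
    ((False, False, False, False), "blocked"),  # invalid API key and authentication token
    ((True,  False, False, False), "blocked"),  # valid API key, unregistered user
    ((True,  True,  False, False), "blocked"),  # registered user, two-way SSL mismatch
    ((True,  True,  True,  False), "blocked"),  # valid two-way SSL, unregistered endpoint
    ((True,  True,  True,  True),  "allowed"),  # everything valid and registered
]

def evaluate(creds_ok, user_registered, ssl_ok, endpoint_ok):
    return "allowed" if (creds_ok and user_registered and ssl_ok and endpoint_ok) else "blocked"

for inputs, expected in scenarios:
    assert evaluate(*inputs) == expected
```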
As shown in
As indicated above,
The UE 710 may include one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with providing a unified API proxy that supports multiple endpoints for multiple users, as described elsewhere herein. The UE 710 may include a communication device and/or a computing device. For example, the UE 710 may include a wireless communication device, a mobile phone, a user equipment, a laptop computer, a tablet computer, a desktop computer, a gaming console, a set-top box, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, a head mounted display, or a virtual reality headset), or a similar type of device.
The API management platform 720 may include one or more devices capable of receiving, generating, storing, processing, providing, and/or routing information associated with providing a unified API proxy that supports multiple endpoints for multiple users, as described elsewhere herein. The API management platform 720 may include a communication device and/or a computing device. For example, the API management platform 720 may include a server, such as an application server, a client server, a web server, a database server, a host server, a proxy server, a virtual server (e.g., executing on computing hardware), or a server in a cloud computing system. In some implementations, the API management platform 720 may include computing hardware used in a cloud computing environment.
The backend service 730 may include one or more devices capable of receiving, generating, storing, processing, providing, and/or routing information associated with providing a unified API proxy that supports multiple endpoints for multiple users, as described elsewhere herein. The backend service 730 may include a communication device and/or a computing device. For example, the backend service 730 may include a server, such as an application server, a client server, a web server, a database server, a host server, a proxy server, a virtual server (e.g., executing on computing hardware), or a server in a cloud computing system. In some implementations, the backend service 730 may include computing hardware used in a cloud computing environment.
The network 740 may include one or more wired and/or wireless networks. For example, the network 740 may include a cellular network (e.g., a Fifth Generation (5G) network, a Fourth Generation (4G) network, a Long Term Evolution (LTE) network, a Third Generation (3G) network, a code division multiple access (CDMA) network, etc.), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, and/or a combination of these or other types of networks. The network 740 may enable communication among the one or more devices of environment 700.
The number and arrangement of devices and networks shown in
The bus 810 may include one or more components that enable wired and/or wireless communication among the components of the device 800. The bus 810 may couple together two or more components of
The memory 830 may include volatile and/or nonvolatile memory. For example, the memory 830 may include random access memory (RAM), read only memory (ROM), a hard disk drive, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory). The memory 830 may include internal memory (e.g., RAM, ROM, or a hard disk drive) and/or removable memory (e.g., removable via a universal serial bus connection). The memory 830 may be a non-transitory computer-readable medium. The memory 830 may store information, one or more instructions, and/or software (e.g., one or more software applications) related to the operation of the device 800. In some implementations, the memory 830 may include one or more memories that are coupled (e.g., communicatively coupled) to one or more processors (e.g., processor 820), such as via the bus 810. Communicative coupling between a processor 820 and a memory 830 may enable the processor 820 to read and/or process information stored in the memory 830 and/or to store information in the memory 830.
The input component 840 may enable the device 800 to receive input, such as user input and/or sensed input. For example, the input component 840 may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system sensor, a global navigation satellite system sensor, an accelerometer, a gyroscope, and/or an actuator. The output component 850 may enable the device 800 to provide output, such as via a display, a speaker, and/or a light-emitting diode. The communication component 860 may enable the device 800 to communicate with other devices via a wired connection and/or a wireless connection. For example, the communication component 860 may include a receiver, a transmitter, a transceiver, a modem, a network interface card, and/or an antenna.
The device 800 may perform one or more operations or processes described herein. For example, a non-transitory computer-readable medium (e.g., memory 830) may store a set of instructions (e.g., one or more instructions or code) for execution by the processor 820. The processor 820 may execute the set of instructions to perform one or more operations or processes described herein. In some implementations, execution of the set of instructions, by one or more processors 820, causes the one or more processors 820 and/or the device 800 to perform one or more operations or processes described herein. In some implementations, hardwired circuitry may be used instead of or in combination with the instructions to perform one or more operations or processes described herein. Additionally, or alternatively, the processor 820 may be configured to perform one or more operations or processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
The number and arrangement of components shown in
As shown in
As shown in
As shown in
Although
As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein.
As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like.
To the extent the aforementioned implementations collect, store, or employ personal information of individuals, it should be understood that such information shall be used in accordance with all applicable laws concerning protection of personal information. Additionally, the collection, storage, and use of such information can be subject to consent of the individual to such activity, for example, through well known “opt-in” or “opt-out” processes as can be appropriate for the situation and type of information. Storage and use of personal information can be in an appropriately secure manner reflective of the type of information, for example, through various encryption and anonymization techniques for particularly sensitive information.
Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiple of the same item.
When “a processor” or “one or more processors” (or another device or component, such as “a controller” or “one or more controllers”) is described or claimed (within a single claim or across multiple claims) as performing multiple operations or being configured to perform multiple operations, this language is intended to broadly cover a variety of processor architectures and environments. For example, unless explicitly claimed otherwise (e.g., via the use of “first processor” and “second processor” or other language that differentiates processors in the claims), this language is intended to cover a single processor performing or being configured to perform all of the operations, a group of processors collectively performing or being configured to perform all of the operations, a first processor performing or being configured to perform a first operation and a second processor performing or being configured to perform a second operation, or any combination of processors performing or being configured to perform the operations. For example, when a claim has the form “one or more processors configured to: perform X; perform Y; and perform Z,” that claim should be interpreted to mean “one or more processors configured to perform X; one or more (possibly different) processors configured to perform Y; and one or more (also possibly different) processors configured to perform Z.”
No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, or a combination of related and unrelated items), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
In the preceding specification, various example embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.