A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Embodiments described herein are generally related to cloud computing environments, and are particularly directed to systems and methods for dynamic throttling of workflows based on integrated applications.
Generally described, an integration cloud platform enables the integration of various software products or applications, such as for example software-as-a-service (SaaS) applications and/or on-premises applications, which can then be made accessible to consumers via a cloud computing environment.
Some cloud environments offer an integration-platform-as-a-service (iPaaS) environment, and can support, for example, a web-based integration design tool for point-and-click configuration of the integration between the various applications, and a monitoring dashboard that provides real-time insight into transactions involving those applications. Such environments can be used to simplify the means by which a variety of otherwise different applications, technologies, and processes can be integrated to create business workflows that execute within the cloud.
An aspect of an iPaaS environment is the means by which external applications, technologies, and processes can be integrated to create business workflows in the cloud. It is common for such participating external information systems to impose limits on their APIs to ensure fair usage. Typically, these limits are determined by the customer subscription and the complexity of the API (e.g., the time to process a request, or database operations, CPU, or memory usage); once these limits are reached, the systems begin to throttle incoming requests to protect themselves from over-utilization.
In the context of iPaaS automated business workflows, this can lead to situations in which the invocation of one of the external systems fails while that system is throttling incoming requests, leaving the automated business workflows in an incomplete or sometimes inconsistent state. Such external systems may also impose penalties on clients that send multiple requests, for example by blocking the user or the source application sending those requests. In the case of iPaaS automated business workflows, the source application may be the iPaaS environment itself, which could then face such penalties.
Embodiments described herein are generally related to cloud computing environments, and are particularly directed to systems and methods for dynamic throttling of workflows based on integrated applications. An integration cloud or platform-as-a-service (iPaaS) platform can enforce fixed limits for flows, and evolve the limits behavior based on responses from target applications, for example as sent within documented headers in their responses.
As illustrated in
In accordance with an embodiment, examples of such a cloud environment and cloud platform can include Oracle Cloud, and Oracle Integration Cloud (OIC)/Oracle Cloud Integration (OCI) respectively. In accordance with various embodiments, the teachings described herein can also be used with other types of cloud environment or cloud platform, including, for example, other types of platform-as-a-service (PaaS) cloud environments or cloud platforms.
In accordance with an embodiment, each of a plurality of tenants of the cloud environment, for example a first tenant A, can be associated with a tenant platform environment 110, for use with the tenant's computing environment 112, and with one or more cloud software application(s) 114, and/or integration(s) 116. For example, in accordance with an embodiment, the tenant can access their platform environment from an on-premises computing environment via a cloud computing environment 130 or other type of network or communication environment.
In accordance with an embodiment, a tenant platform environment can utilize one or more cloud software applications or services 150, or third-party or other software applications or services 158, provided by the cloud platform. The cloud platform can orchestrate use by the tenant platform environment, or by software applications executing therein, of various lifecycle activities provided within cloud platform.
In accordance with an embodiment, the integration cloud service 107 can include a design console 109, an integration cloud service runtime 115, and adapters 113 that simplify the task of configuring connections to applications, and execute on an application server 117 within the cloud environment. As further described below, the integration cloud service design console can provide a design time environment that allows a user to design, activate, manage, and monitor a workflow (e.g., a business workflow) that uses integration artifacts (e.g., an integration flow), that can then be deployed and executed on the integration cloud service runtime.
For example, in accordance with an embodiment, a cloud platform operating as an integration platform can orchestrate various software applications and multiple modules working together, such as, for example, activation, connection test, metadata loading, invoke target endpoint, transformation, request received by an integration, or various other types of modules; and can provide various lifecycle activities performed by these modules. During runtime, the modules can then execute the lifecycle activities to address various use-cases.
In accordance with an embodiment, the various components and processes illustrated in
As described above, in accordance with an embodiment, the integration cloud service can include a design console, and an integration cloud service runtime, that together allow a user to design, activate, manage, and monitor a workflow that uses integration artifacts (e.g., an integration flow), that can then be deployed and executed on the integration cloud service runtime.
As illustrated in
In accordance with an embodiment, the integration cloud service design-time environment can be pre-loaded with connections to various SaaS applications or other applications, and can include a source component 124, and a target component 126. The source component can provide definitions and configurations for one or more source applications/objects; and the target component can provide definitions and configurations for one or more target applications/objects. The definitions and configurations can be used to identify application types, endpoints, integration objects and other details of an application/object.
As further illustrated in
In accordance with an embodiment, each of the above-described components, as with the source and target components, can include design-time settings that can be persisted as part of an integration flow definition/configuration.
In accordance with an embodiment, an integration flow definition specifies the details of an integration cloud service integration flow; and encompasses both the static constructs of the integration flow (for example, message routers), and the configurable aspects (for example, routing rules). A fully configured flow definition and other required artifacts (for example, JCA and WSDL files) in combination can be referred to as an integration project, or integration archive. An integration archive can fully define an integration flow, and can be implemented by an underlying implementation layer.
In accordance with an embodiment, a policies component 136 can include a plurality of policies that govern behaviors of the integration cloud service environment. For example, a polling policy can be configured for source-pull messaging interactions (i.e. query style integrations) for a source application, to invoke an outbound call to the source application via a time-based polling.
In accordance with an embodiment, other policies can be specified for security privileges in routing messages to a target application; for logging message payloads and header fields during an integration flow execution for subsequent analysis via a monitoring console; and for message throttling used to define a number of instances that an enterprise service bus (ESB) service can spawn to accommodate requests. In addition, policies can be specified for monitoring/tracking an integration flow at an integration flow level; and for validating messages being processed by the integration cloud service platform against a known schema.
In accordance with an embodiment, an integration developer can drag and drop a component on a development canvas 133 for editing and configuration, for use in designing an integration flow.
As further illustrated in
As illustrated in
For example, in accordance with an embodiment, the applications can include one or more enterprise cloud applications 205, third-party cloud applications (for example, Salesforce) 203, and on-premises applications 219. The integration cloud service can expose simple object access protocol (SOAP) and representational state transfer (REST) endpoints to these applications for use in communicating with these applications.
In accordance with an embodiment, an integration cloud service integration flow can include a source connection, a target connection, and field mappings between the two connections. Each connection can be based on an application adapter, and can include additional information required by the application adapter to communicate with a specific instance of an application.
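The structure described above can be sketched as a simple data model. The class and field names here are illustrative, not the platform's actual API: an integration flow holds a source connection, a target connection, and the field mappings between them, with each connection carrying the instance-specific detail its adapter needs.

```python
# Hypothetical sketch of an integration flow: source connection, target
# connection, and field mappings between the two, as described above.
from dataclasses import dataclass, field

@dataclass
class Connection:
    adapter: str                 # e.g., a REST or SOAP application adapter
    instance_url: str            # instance-specific detail the adapter needs

@dataclass
class IntegrationFlow:
    source: Connection
    target: Connection
    mappings: dict = field(default_factory=dict)   # source field -> target field

flow = IntegrationFlow(
    source=Connection("rest", "https://source.example.com"),
    target=Connection("soap", "https://target.example.com"),
    mappings={"account_name": "AcctName", "email": "EmailAddress"},
)
print(flow.mappings["email"])
```

A compiled integration archive would bundle such a definition together with the other required artifacts (for example, JCA and WSDL files) for deployment to the runtime.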
In accordance with an embodiment, an integration cloud service integration flow and a plurality of other required artifacts (for example, JCA and WSDL files) can be compiled into an integration archive, which can be deployed and executed in the integration cloud service runtime.
In accordance with an embodiment, a plurality of different types of integration flow patterns can be created using the web UI application, including data mapping integration flows, publishing integration flows, and subscribing integration flows.
For example, in accordance with an embodiment, to create a data mapping integration flow, an integration cloud service user can use an application adapter or an application connection to define a source application and a target application in the development interface, and define routing paths and data mappings between the source and target application. In a publishing integration flow, a source application or a service can be configured to publish messages to the integration cloud service through a predefined messaging service. In a subscribing integration flow, a target application or service can be configured to subscribe to messages from the integration cloud service through the messaging service.
As illustrated in
As further illustrated in
In accordance with an embodiment, the source and target connections can be further configured to include additional information. For example, the additional information can include types of operations to be performed on data associated with a request, and objects and fields against those operations.
In accordance with an embodiment, once the source and target connections are configured, mappings (mappers) between the two connections can be enabled, and mapper icons (for example, mapper icon A 317 and mapper icon B 318) can be displayed for use in opening the mappers, so that the user can define how information is transferred between a source and target data objects for both the request and response messages.
In accordance with an embodiment, the mappers can provide a graphical user interface for the user to map items (for example, fields, attributes, and elements) between the source and target applications by dragging a source item onto a target item. When a mapper for a request or response message in an ICS integration flow is opened, the source and target data objects can be automatically loaded using the source and target connections.
In accordance with an embodiment, lookups can be provided to facilitate the creation of mappings. As used herein, lookups are reusable mappings for different codes and terms used in applications to describe the same item. For example, one application uses a specific set of codes to describe countries, while another application uses a different set of codes to describe the same countries. Lookups can be used to map these different codes across the different applications.
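A lookup of the kind described above amounts to a reusable code-to-code mapping. The sketch below uses hypothetical country codes; a pass-through default keeps unmapped codes unchanged, which is one reasonable design choice when the two applications mostly agree.

```python
# A minimal sketch of a lookup: a reusable mapping between the codes two
# applications use for the same items (the country codes are illustrative).
country_lookup = {
    # source application's code -> target application's code
    "US": "USA",
    "IN": "IND",
    "GB": "GBR",
}

def map_code(lookup, source_code):
    # Pass unknown codes through unchanged rather than failing the mapping.
    return lookup.get(source_code, source_code)

print(map_code(country_lookup, "US"))   # mapped via the lookup
print(map_code(country_lookup, "FR"))   # no entry; passed through as-is
```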
As described above, development of an integration flow can be a complex effort requiring various components to be defined before the integration flow can be successfully deployed and executed. Some components within an integration flow are required to be defined while others are optional. Further complicating the development process is that defining optional components may lead to additional required components, and that the required components at any point in time during the development effort may vary, depending upon the order in which the integration components were defined.
In accordance with an embodiment, the systems and methods described herein can be used in determining workflow characteristics for use with an integration cloud service or other computing environment.
In accordance with an embodiment, an integration cloud service design console allows a user to design an integration flow that can then be deployed and executed by the integration cloud service runtime. A collection service can receive an indication of an integration archive that defines a business workflow or integration flow, wherein a characteristics server can then extract and persist the characteristics associated with the workflow, for subsequent use in providing a determination of workflows.
As described above, with a service-oriented design strategy, service reusability is a commonly-preferred design principle, and with such a design it may be beneficial to structure business workflows so that the workflows can be re-used across one or more business entities. To address this, some cloud vendors make available customer/consumer-agnostic or pre-built workflows to their platform consumers, through one or more paid or free channels.
Today's iPaaS providers are continuously striving to simplify how various applications, technologies, and processes can be integrated to create business workflows in the cloud. Increasingly, specialist third-party vendors are looking to provide similar pre-built workflows, under the general characterization of integration-software-as-a-service (iSaaS). Although some integration platforms provide tools for basic discoverability, a unified mechanism to discover workflows built by different individuals/companies, from different repositories, and with a plurality of characteristics is not available.
In accordance with an embodiment, an integration platform that provides advanced discoverability of workflows provides an edge over other approaches, since it can allow consumers to search for workflows based on several dimensions, and additionally provide recommendations as to pre-built workflows, based on the existing characteristics of an integration platform consumer.
As illustrated in
In accordance with an embodiment, definitions of business workflows or integration flows can be stored or persisted as integration archives in various repositories, such as, for example, repositories of a tenant's own integration instance, repositories of an integration marketplace, and/or community repositories.
In accordance with an embodiment, the systems and methods described herein enable discoverability of such workflows, and provide integration cloud consumers with an ability to search for workflows based on several dimensions, thereby promoting reuse of pre-built workflows, and reducing the cost of development.
In accordance with an embodiment, the systems and methods can be used to allow an integration platform to recommend pre-built workflows based on the existing integration assets of an integration platform consumer.
An integration cloud service benefits when its applications, technologies, and processes, including the use of external information systems, can be optimally integrated to create business workflows. External information systems typically impose limits on their APIs, to ensure fair usage. Such limits may be dependent on a customer subscription and complexity of the API (for example, the time to process request, or database operations, CPU, or memory usage). Once these limits are reached, the system starts to throttle incoming requests, to protect from over-utilization of those external information systems.
Some external information systems impose penalties on clients that are trying to send multiple requests, for example by blocking the source application sending these requests. In the case of an iPaaS automated business workflow, the source application may be the cloud platform itself which would face such penalties. This can lead to situations whereby the invocation of a required external system may fail when the system is throttling incoming requests—which could result in the business workflow being in an incomplete or inconsistent state.
As illustrated in
In accordance with an embodiment, the integration cloud service, first external information system, and second external information system B, can each accept a particular rate or inflow of concurrent requests. The integration cloud service operates to enforce constraints on execution parameters, such as fixed limits for integration flows, and then evolve the limits behavior based on response messages from target applications, for example external information systems. Such applications or information systems often send documented headers in their responses. The system can include a throttling framework 300, by which the integration cloud service (in this example, Oracle Integration Cloud 310) operates to update and then use a peripheral flow metadata 312, based on information contained in such response headers, to control the execution of the integration flow.
For example, in accordance with an embodiment, the following information is typically available in response headers received from target applications, for example external information systems: (A) Total quota for window: a total number of requests allowed in a given window. (B) Available quota in current window: a number of requests that can be made in the current window. (C) Next reset interval: a time after which more tokens will be made available for request processing.
In accordance with an embodiment, as requests associated with an integration flow (e.g., requests to create accounts) are processed, and responses are received from target applications, for example external information systems, the integration cloud service reads the various response headers 340 from those applications or systems, and updates its periphery flow metadata with received values. The integration cloud service or other limits enforcement service provided at the periphery can update a limit for an activity 342 (such as account creation) and continue appropriately.
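The header-reading step above can be sketched as follows. The `X-RateLimit-*` header names are an assumption for illustration; real target applications document their own header conventions, which an adapter would consult.

```python
# A minimal sketch (hypothetical header names) of updating flow metadata
# from a target application's rate-limit response headers.
def update_flow_metadata(metadata, headers):
    """Record total quota, available quota, and next reset interval."""
    if "X-RateLimit-Limit" in headers:
        metadata["total_quota"] = int(headers["X-RateLimit-Limit"])
    if "X-RateLimit-Remaining" in headers:
        metadata["available_quota"] = int(headers["X-RateLimit-Remaining"])
    if "X-RateLimit-Reset" in headers:
        metadata["next_reset_seconds"] = int(headers["X-RateLimit-Reset"])
    return metadata

meta = update_flow_metadata({}, {
    "X-RateLimit-Limit": "100",
    "X-RateLimit-Remaining": "20",
    "X-RateLimit-Reset": "30",
})
print(meta)
```

The limits enforcement service at the periphery would then consult this metadata before admitting further invocations of the activity.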
In accordance with an embodiment, the system can use application adapters that are responsible for discerning target application or system throttle parameters, if available. The adapters update the throttling framework with this information so that effective limits can be imposed.
In accordance with an embodiment, upon receiving a response from the target system, the framework reads the response code and headers according to the target system's throttle policy, and notifies a connectivity guard to update the effective limits. At the periphery, the limits check then additionally checks the updated overrides from the adapter. By adapting to limits and rejecting requests at the periphery in this way, the approach can protect an integration cloud platform from running business workflows that could land in a problematic state such as those described above.
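The adapter-to-framework notification path can be sketched as follows. The class names (`ConnectivityGuard`, `PeripheryLimitsCheck`) and the activity name are hypothetical, chosen only to illustrate how a periphery check prefers an adapter-supplied override over the fixed limit.

```python
# Hypothetical sketch of the notification path: an adapter reports throttle
# parameters to a connectivity guard, and the periphery limits check
# consults the resulting override before admitting requests.
class ConnectivityGuard:
    def __init__(self):
        self.effective_limits = {}           # activity -> override limit

    def update_limit(self, activity, limit):
        self.effective_limits[activity] = limit

class PeripheryLimitsCheck:
    def __init__(self, guard, default_limit):
        self.guard = guard
        self.default_limit = default_limit

    def allowed(self, activity, in_flight):
        # Prefer the adapter-supplied override; fall back to the fixed limit.
        limit = self.guard.effective_limits.get(activity, self.default_limit)
        return in_flight < limit

guard = ConnectivityGuard()
check = PeripheryLimitsCheck(guard, default_limit=100)
print(check.allowed("create_account", 50))   # fixed limit of 100 applies
guard.update_limit("create_account", 20)     # adapter discerned a limit of 20
print(check.allowed("create_account", 50))   # override now rejects
```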
In particular
In the illustrated example, the integration cloud service can accept an inflow of up to 100 concurrent requests, i.e., the integration flow can be triggered up to 100 times if 100 accounts are created in Oracle Service Cloud.
Assuming that Oracle Field Service Cloud can also accept up to 100 concurrent requests, while Oracle NetSuite can accept only up to 20 concurrent requests, beyond which it will start to throttle incoming requests: when 100 new accounts are created in Oracle Service Cloud, this results in 100 concurrent invocations of the integration flow described above. However, only 20 of the 100 concurrent instances of the integration flow will be able to create a contact in Oracle NetSuite, after which NetSuite will start throttling incoming requests to create additional accounts, resulting in the remaining instances (the 21st onward) failing during account creation.
To address the above scenario, in accordance with an embodiment, the integration cloud service operates as described above: it enforces constraints on execution parameters, such as fixed limits for integration flows, and evolves the limits behavior based on the documented headers in response messages from the target applications or external information systems, using the throttling framework to update and then apply the peripheral flow metadata.
In accordance with an embodiment, the effective limit for a particular integration flow can be derived through a function dependent on the limits associated with the target applications, for example external information systems, for example as min(fieldServiceLimit, netSuiteLimit, oicLimit). In the illustrated example, the effective limit would be min(100, 20, 100)=20. In this way, the service cloud limit for the particular integration flow or activity is updated dynamically by the integration cloud service to satisfy the various limits of the applications 344, for example the external information systems.
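The effective-limit derivation described above reduces to taking the minimum across the participating systems: the flow can run no faster than its slowest participant allows. A one-line sketch, using the limits from the illustrated example:

```python
# Effective limit = minimum of the limits imposed by each participant.
def effective_limit(*limits):
    return min(limits)

# Values from the illustrated example.
field_service_limit, net_suite_limit, oic_limit = 100, 20, 100
print(effective_limit(field_service_limit, net_suite_limit, oic_limit))  # 20
```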
By way of example, consider that 200 integration flow requests are originally received, with an available quota of 10 and a next reset interval of 30 seconds; then 10 subsequent requests will be allowed within 30 seconds for the workflow. A response header may indicate an available quota of 0 and a next reset interval of 30 seconds, in which case the integration cloud service will then reject subsequent invocations of the workflow for the next 30 seconds.
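The window-based admission decision in this example can be sketched as follows (the function name is illustrative): requests are admitted while quota remains in the current window, and the remainder must wait for, or are rejected until, the next reset interval.

```python
# A sketch of window-based admission: admit up to the available quota,
# reject the rest until the next reset interval.
def admit_requests(num_requests, available_quota):
    """Return (admitted, rejected) counts for the current window."""
    admitted = min(num_requests, available_quota)
    return admitted, num_requests - admitted

# 200 requests with an available quota of 10: 10 admitted, 190 held over.
print(admit_requests(200, 10))
# With a quota of 0, all invocations are rejected until the window resets.
print(admit_requests(200, 0))
```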
As illustrated in
At step 362, within the integration cloud service environment, an application integration flow is configured comprising a plurality of connections, wherein each connection is associated with an application or information system external to the integration cloud service environment.
At step 364, the process monitors, during an execution of the application integration flow, response messages from at least one or more applications or information systems external to the integration cloud service environment.
At step 366, based upon receiving a response message from the at least one or more applications or information systems external to the integration cloud service, during an execution of the application integration flow, the process modifies a constraint on execution parameters of the application integration flow, to control the execution of the integration flow.
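The three steps above can be sketched as a monitoring loop. The function and field names are hypothetical, and the response is modeled as a plain dictionary: each response is read during execution, and the constraint (here, the available quota) is modified before the next invocation is admitted.

```python
# Sketch of steps 362-366: invoke the flow per request, monitoring each
# response and tightening the execution constraint as quota is reported.
def run_flow(requests, invoke, limits):
    results = []
    for req in requests:
        if limits["available_quota"] <= 0:
            results.append("rejected")       # constraint blocks invocation
            continue
        response = invoke(req)               # step 364: monitor responses
        # Step 366: modify the constraint from the reported quota.
        limits["available_quota"] = response.get(
            "available_quota", limits["available_quota"] - 1)
        results.append("ok")
    return results

# The target reports a shrinking quota; once it reaches 0, later
# invocations are rejected at the periphery.
quota = iter([2, 1, 0])
results = run_flow(range(5),
                   lambda r: {"available_quota": next(quota)},
                   {"available_quota": 3})
print(results)   # ['ok', 'ok', 'ok', 'rejected', 'rejected']
```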
In some embodiments, features of the present invention are implemented, in whole or in part, in a computer including a processor, a storage medium such as a memory and a network card for communicating with other computers. In some embodiments, features of the invention are implemented in a distributed computing environment in which one or more clusters of computers is connected by a network such as a Local Area Network (LAN), switch fabric network (e.g. InfiniBand), or Wide Area Network (WAN). The distributed computing environment can have all computers at a single location or have clusters of computers at different remote geographic locations connected by a WAN.
In some embodiments, features of the present invention are implemented, in whole or in part, in the cloud as part of, or as a service of, a cloud computing system based on shared, elastic resources delivered to users in a self-service, metered manner using Web technologies. There are five characteristics of the cloud (as defined by the National Institute of Standards and Technology): on-demand self-service; broad network access; resource pooling; rapid elasticity; and measured service. Cloud deployment models include: Public, Private, and Hybrid. Cloud service models include Software as a Service (SaaS), Platform as a Service (PaaS), Database as a Service (DBaaS), and Infrastructure as a Service (IaaS). As used herein, the cloud is the combination of hardware, software, network, and web technologies which delivers shared elastic resources to users in a self-service, metered manner. Unless otherwise specified the cloud, as used herein, encompasses public cloud, private cloud, and hybrid cloud embodiments, and all cloud deployment models including, but not limited to, cloud SaaS, cloud DBaaS, cloud PaaS, and cloud IaaS.
In some embodiments, features of the present invention are implemented using, or with the assistance of, hardware, software, firmware, or combinations thereof. In some embodiments, features of the present invention are implemented using a processor configured or programmed to execute one or more functions of the present invention. The processor is in some embodiments a single or multi-chip processor, a digital signal processor (DSP), a system on a chip (SOC), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, state machine, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. In some implementations, features of the present invention may be implemented by circuitry that is specific to a given function. In other implementations, the features may be implemented in a processor configured to perform particular functions using instructions stored, e.g., on a computer-readable storage medium.
In some embodiments, features of the present invention are incorporated in software and/or firmware for controlling the hardware of a processing and/or networking system, and for enabling a processor and/or network to interact with other systems utilizing the features of the present invention. Such software or firmware may include, but is not limited to, application code, device drivers, operating systems, virtual machines, hypervisors, application programming interfaces, programming languages, and execution environments/containers. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
In some embodiments, the present invention includes a computer program product which is a storage medium or computer-readable medium (media) having instructions stored thereon/in, which instructions can be used to program or otherwise configure a system such as a computer to perform any of the processes or functions of the present invention. The storage medium or computer readable medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data. In particular embodiments, the storage medium or computer readable medium is a non-transitory storage medium or non-transitory computer readable medium.
The foregoing description is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Additionally, where embodiments of the present invention have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present invention is not limited to the described series of transactions and steps. Further, where embodiments of the present invention have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present invention.
Further, while the various embodiments describe particular combinations of features of the invention, it should be understood that different combinations of the features will be apparent to persons skilled in the relevant art as within the scope of the invention, such that features of one embodiment may be incorporated into another embodiment. Moreover, it will be apparent to persons skilled in the relevant art that various additions, subtractions, deletions, variations, and other modifications and changes in form, detail, implementation and application can be made therein without departing from the spirit and scope of the invention. It is intended that the broader spirit and scope of the invention be defined by the following claims and their equivalents.