In distributed computing systems, for example, cloud or mobile computing systems, efficient resource usage is reached by analyzing load patterns and situations (for example, static, periodic, once-in-a-lifetime, unpredictable, or continuously changing workload). A common countermeasure is elasticity, which is the flexibility of entities (for example, a system or component) to autonomously adapt their capacity to the workload over time. Thereby, the elasticity properties are bound to trade-offs: “stateful versus stateless” components (that is, stateless components are better suited for elasticity), latency versus throughput, throughput versus stability, and, for stateful components, “strict versus eventual consistency”. Since elasticity is crucial for environmental aspects of distributed computing systems (for example, energy efficiency or resource usage), much academic and industrial work has been done on an architectural system level. Treating elasticity on a system level is done based on system and usage statistics (for example, memory or CPU consumption, or the number of connections). When a certain threshold is reached (for example, a resource limit), another processing node is started and the load is dispatched on a system/node level. Common techniques for elasticity on a system level are based on hybrid reactive and predictive schemes.
When treating elasticity and resource consumption on a system level, limits and capacities of resources on lower levels (for example, a software module, a sequence of modules, or external resource access) are not taken into account. This can lead to situations in which the overall system threshold is not reached (that is, no countermeasure is applied), while the limits of the lower-level resources (for example, content or external services) are at their peak (for example, the number of connections to external services, or the throughput limit of a software module). A formal elasticity model for these artifacts and their limits is currently not available. Therefore, the limits of the lower-level resources cannot be managed by the existing system-level approaches/concepts and framework implementations. For instance, hypervisors or virtual machine monitors would not be able to optimize. An overall, combined, and optimal treatment of system and domain-level resources and elasticity has not been considered by existing approaches.
The present disclosure describes methods and systems, including computer-implemented methods, computer program products, and computer systems for integration scenario domain-specific and leveled resource elasticity and management.
In an implementation, system-level resource capacities and application-level resource capacities associated with an integration system in a distributed computing environment are determined, where the integration system includes an integration process. A workload associated with the integration system is identified based on the determined system-level capacities and application-level capacities. At least one constraint associated with the integration system is identified. A countermeasure is determined for resource elasticity and management based on the identified workload and constraint.
The above-described implementation is implementable using a computer-implemented method; a non-transitory, computer-readable medium storing computer-readable instructions to perform the computer-implemented method; and a computer-implemented system comprising a computer memory interoperably coupled with a hardware processor configured to perform the computer-implemented method/the instructions stored on the non-transitory, computer-readable medium.
The subject matter described in this specification can be implemented in particular implementations so as to realize one or more of the following advantages. First, the described approach enables optimal resource elasticity by taking account of resource capacities at both the system level and lower levels, such as application-specific resource limits. Second, the described approach can detect and predict a load situation in the system, and determine countermeasures based on the predicted load situation and constraints in the system. Third, the described approach enables an optimal action plan for resource management by taking account of the effectiveness of previous action plans. The described approach assesses its own quality by monitoring decisions and action plans to optimally adapt to new situations over time. Other advantages will be apparent to those of ordinary skill in the art.
The details of one or more implementations of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
Like reference numbers and designations in the various drawings indicate like elements.
The following detailed description describes integration scenario domain-specific and leveled resource elasticity and management and is presented to enable any person skilled in the art to make and use the disclosed subject matter in the context of one or more particular implementations. Various modifications to the disclosed implementations will be readily apparent to those of ordinary skill in the art, and described principles may be applied to other implementations and applications without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the described or illustrated implementations, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
In distributed computing systems, for example, cloud or mobile computing systems, efficient resource usage is reached by analyzing load patterns and situations (for example, static, periodic, once-in-a-lifetime, unpredictable, or continuously changing workload). A common countermeasure is elasticity, which is the flexibility of entities (for example, a system or component) to autonomously adapt their capacity to the workload over time. Thereby, the elasticity properties are bound to trade-offs: “stateful versus stateless” components (that is, stateless components are better suited for elasticity), latency versus throughput, throughput versus stability, and, for stateful components, “strict versus eventual consistency”. Since elasticity is crucial for environmental aspects of distributed computing systems (for example, energy efficiency or resource usage), much academic and industrial work has been done on an architectural system level. Treating elasticity on a system level is done based on system and usage statistics (for example, memory or CPU consumption, or the number of connections). When a certain threshold is reached (for example, a resource limit), another processing node is started and the load is dispatched on a system/node level. Common techniques for elasticity on a system level are based on hybrid reactive and predictive schemes.
When treating elasticity and resource consumption on a system level, limits and capacities of resources on lower levels (for example, a software module, a sequence of modules, or external resource access) are not taken into account. This can lead to situations in which the overall system threshold is not reached (that is, no countermeasure is applied), while the limits of the lower-level resources (for example, content or external services) are at their peak (for example, the number of connections to external services, or the throughput limit of a software module). A formal elasticity model for these artifacts and their limits is currently not available. Therefore, the limits of the lower-level resources cannot be managed by the existing system-level approaches/concepts and framework implementations. For instance, hypervisors or virtual machine (VM) monitors would not be able to optimize. An overall, combined, and optimal treatment of system and domain-level resources and elasticity has not been considered by existing approaches.
At a high level, the described approach focuses on the integration domain, for example, an integration process as a sequence of adapters, operator modules, and service dependencies/resources (for example, CPU, memory, disk, database, queuing). The described approach also addresses the following levels (from high level to low level): system, integration process, endpoint/adapter, single operator module, and single service dependency/resource.
Most of the existing approaches simply try to scale as an elasticity strategy; some even “scale back” to free resources. The described approach focuses on scaling out and back based on a hybrid rule-based and predictive machine learning scheme. The described approach uses a meta-model for multi-level resource management and elasticity. The described approach employs the following machine learning components:
Efficient usage of computing resources has been well addressed by existing approaches on a VM and system level for domains like hardware virtualization, database systems, and cloud computing, by non-functional countermeasures like scalability (that is, vertical or horizontal scalability) and partially even the re-distribution of resources when the load decreases below the system's capacity (that is, elasticity). The application integration domain leverages these results on the coarse-granular system level for processing the increasing message workload generated by a growing number of applications (for example, business, cloud, or mobile applications) and Internet of Things (IoT) devices. This coarse-granular resource management by existing approaches works well for cases in which no additional cost constraints (for example, the cost of VMs or hardware) play a role or one integration scenario fully utilizes the resources. Typically, resources within integration scenarios can be the number of connections supported by integration adapters, the capacity of the integration adapters, and the capacity of the integration operations within the integration process.
The described approach, compared to existing approaches, enables a more fine-granular resource management approach on the integration process or even adapter and operation levels that targets several integration system limits (for example, bandwidth, capacity, and the number of connections). The described approach allows the integration system to react adequately to potential overload situations on a more fine-granular, domain-specific level (for example, content level). The described approach includes:
Resources and Capacities
Definition of Classifiers
A classifier categorizes resource and load situations based on temporal variations of (discrete) capacity utilization in relation to defined thresholds. For different environments, the described approach defines a capacity derivation methodology. For example, integration throughput of an operation or adapter can be experimentally determined and a benchmark can be used so that the classifier can be learned. An algorithm for determining the message throughput capacity classification can include:
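The classification algorithm itself is referenced in a figure; one plausible reading of the capacity derivation methodology can be sketched as follows, purely for illustration. The function name, the threshold ratio, and the returned state labels are assumptions for the sketch, not part of the disclosed algorithm.

```python
# Hypothetical sketch: classify the current message-throughput utilization
# against an experimentally determined capacity and a limit threshold.
# The limit ratio and state labels are illustrative assumptions.

def classify_throughput(observed_msgs_per_sec, capacity, limit_ratio=0.8):
    """Capacity is the maximal processable load (for example, from a
    benchmark); the limit is a configured fraction below the capacity
    that triggers a change before the capacity itself is reached."""
    limit = limit_ratio * capacity
    utilization = observed_msgs_per_sec / capacity
    if observed_msgs_per_sec >= capacity:
        return "overload", utilization
    if observed_msgs_per_sec >= limit:
        return "limit reached", utilization
    return "free capacity", utilization

state, util = classify_throughput(950, capacity=1000)
print(state, round(util, 2))  # prints: limit reached 0.95
```

In this reading, the benchmark supplies the capacity constant, and the classifier only compares observed load against the derived limit and capacity.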
Other resource consumption can occur for external services (for example, database (DB), message queuing (MQ), and landscape directories). The resource consumption information of external services can be collected in a minimally intrusive way on the runtime system with adapted metrics for each service. As discussed below, the information about external services has to be taken into account before applying a countermeasure.
Load Situation Classifiers
The described approach derives load situation classifiers from common load situations as patterns. The load situations can include (as shown in
Definition of Load Patterns
In the described approach, load patterns denote a set of metrics capturing the usage statistics that match with systems' resources, limits, and thresholds. Hereby, usage statistics measure the use of system resources, such as message throughput per time unit or service usage statistics. As discussed above, capacity is the maximal processable load, and a limit is a specific maximum load lower than the capacity (that is, one which would trigger a change).
Scale Micro-Load Classifier: Cases Where Changes Make Sense
State Changes
The described approach limits the responses of the system to state changes. State changes lead to actions that shall be triggered. Therefore, the described approach differentiates between urgencies for actions. The following urgencies for actions can be used: high (stability or limits are in danger; immediate action, with no time to lose), medium (predicted thresholds show a need for action; act, but without hurry), and low (actions can be done later). The micro-load patterns that involve state changes are (as shown in
No State Changes (Urgency Always Low or None)
The described approach also introduces “none” as an urgency, which means that the situation is within normal parameters and no action is required. The micro-load patterns that involve no state changes (that is, urgency always low or none) are (as shown in
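The mapping from load situations to the four urgencies (high, medium, low, none) could be represented as a simple lookup; a minimal sketch follows. The specific pattern-to-urgency assignments shown here are illustrative assumptions, not the disclosure's mapping.

```python
# Hypothetical sketch: map a classified load situation to an action urgency.
# The pattern-to-urgency assignments are illustrative assumptions.

URGENCY = {
    "constant overload":        "high",    # stability in danger, act immediately
    "increasing overload":      "high",
    "approaching overload":     "medium",  # predicted thresholds show a need
    "steadying overload":       "medium",
    "increasing free capacity": "low",     # capacity could be reduced later
    "constant free capacity":   "none",    # within normal parameters
}

def urgency_for(load_pattern):
    # Unknown or uneventful patterns default to "none": no action required.
    return URGENCY.get(load_pattern, "none")

print(urgency_for("constant overload"))  # prints: high
```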
State Changes in Cases when Capacity could be Reduced
So far, the described approach only considers scale-out cases. However, when the resources are no longer used, a reduction of the resources makes sense (for example, for cost reasons). The micro-load patterns in which states change and capacity can be reduced can include:
Macro-Level Classifier (Combined Micro Classifier)
The previously discussed classifiers consider a micro-level time window on the current load situation (that is, local view). However, there are effects that one might want to avoid, which are not trackable on a local view only. Therefore, macro-level classifiers that capture these situations are defined (as shown in
Hence, a macro-classifier is learned and used to detect this alternating load pattern and make a decision coordinating the micro-classifier evaluation. For instance, in the first special case:
A third special case is once-in-a-lifetime overload. While a micro-classifier would react immediately, a macro-classifier could deal with it by not rushing one optimization after the other to try to catch the peak, just to reduce capacity afterwards to the normal level. However, application-specific peak loads, for example, at the beginning of the month, can be handled by the macro-classifier as well.
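The coordinating role of the macro-classifier over micro-level decisions might be sketched as follows. The window size, the alternation rule, and the "hold" decision are illustrative assumptions standing in for the learned macro-classifier.

```python
from collections import deque

# Hypothetical sketch: a macro-classifier watches the recent micro-level
# decisions and suppresses an alternating scale-out/scale-in pattern
# (oscillation) instead of rushing one optimization after the other.
# Window size and the alternation rule are illustrative assumptions.

class MacroClassifier:
    def __init__(self, window=6):
        self.history = deque(maxlen=window)

    def coordinate(self, micro_decision):
        self.history.append(micro_decision)
        if self._alternating():
            return "hold"  # oscillation detected: keep current capacity
        return micro_decision

    def _alternating(self):
        # True when the last decisions strictly alternate (out/in/out/in...).
        if len(self.history) < 4:
            return False
        h = list(self.history)
        return all(a != b for a, b in zip(h, h[1:]))

macro = MacroClassifier()
for d in ["scale out", "scale in", "scale out", "scale in"]:
    decision = macro.coordinate(d)
print(decision)  # prints: hold
```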
Summary: Micro-Level Scale Up and Down
The described approach defines resources and capacities, limits/thresholds, and classifiers for all potential workload situations (for example, overload implies urgent situations) as base class of all resource constraints (for example, message throughput, number of connections, and size of memory):
Countermeasure/Patterns
The described approach identifies the following countermeasure categories:
Data-Centric
The data-centric countermeasure patterns for message-based integration target all kinds of data and data flow aspects of integration. Tables 1.1 and 1.2 describe data-centric countermeasures. Note that Table 1.2 is a continuation of Table 1.1, where Tables 1.1 and 1.2 together form a complete table describing data-centric countermeasures. When Tables 1.1 and 1.2 are combined into a complete table, the “consequences” column in Table 1.2 is adjacent to the “on which resources” column in Table 1.1. In other words, each data-centric countermeasure is described by columns in the order of name, known implementations, covered by current EIPs, effect, on which resources, consequences, configuration, and expected time to effect.
The data-centric countermeasures can perform:
Scaling
The scaling countermeasure patterns for message-based integration target adding or reducing resources used by integration content. Tables 2.1 and 2.2 describe scaling countermeasures. Note that Table 2.2 is a continuation of Table 2.1, where Tables 2.1 and 2.2 together form a complete table describing scaling countermeasures. When Tables 2.1 and 2.2 are combined into a complete table, the “consequences” column in Table 2.2 is adjacent to the “on which resources” column in Table 2.1. In other words, each scaling countermeasure is described by columns in the order of name, known implementations, covered by current EIPs, effect, on which resources, consequences, configuration, and expected time to effect.
The scaling countermeasures can perform:
Constraints
The content, as well as the consumed services, is subject to certain constraints. For instance, to be able to use streaming and micro-batching, the operations and the integration processing technology have to be able to handle streams and batches of messages. Another example of a constraint is whether the integration process can lose data or not. Likewise, the states have an impact on the allowed countermeasures:
Some of these optimizations can be applied on different levels: from single operations and processes up to whole integration scenarios for the content, as well as on the VM level and for external service configurations. The optimizations and their constraints have interdependencies that have to be respected and that can negatively impact their composition, for example:
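The constraint checks described in this section might be sketched as a filter over candidate countermeasures. The countermeasure names follow the disclosure, while the specific admission rules and constraint keys below are illustrative assumptions.

```python
# Hypothetical sketch: filter candidate countermeasures by the
# integration-process constraints (may the process lose data, can it
# handle streams and micro-batches). The rules are illustrative assumptions.

def allowed_countermeasures(constraints):
    candidates = {
        "message rejecter": constraints["can_lose_data"],
        "message sampler":  constraints["can_lose_data"],
        "streaming":        constraints["handles_streaming"],
        "micro-batching":   constraints["handles_batching"],
        "scale out":        True,  # assumed structurally possible here
    }
    return sorted(name for name, ok in candidates.items() if ok)

print(allowed_countermeasures({
    "can_lose_data": False,
    "handles_streaming": True,
    "handles_batching": False,
}))  # prints: ['scale out', 'streaming']
```

A process that must not lose data thus never sees the rejecter/sampler options, matching the constraint discussion above.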
Examples: Integration Content Scaling
Through the classifiers, the current load situations can be identified and assessed. The different levels that can be improved are defined as scaling patterns, discussed next. The integration system and its parts require resources that they consume as services (service scaling).
Scalable Adapter
The integration system (intra VM) has adapters that can be scaled (for example, adapter scaling on content level) on the sender and receiver side.
Re-Order Message Processor
The message processors or operations can be re-ordered, for example, for better performance. The re-ordering possibilities are limited by the dependencies, the states (that is, stateless processors can be re-arranged more freely), and control flow constraints (for example, first decrypt a message, then map it).
Scalable Message Processor
The message processors can also be scaled on an instance level. This again uses load balancing and join routing. For instance, it can be used for “bottleneck” operations.
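The load balancing and join routing around a scaled processor could be sketched as follows. The round-robin dispatch, the instance count, and the function names are illustrative assumptions for the sketch.

```python
import itertools

# Hypothetical sketch: scale a "bottleneck" message processor to several
# instances behind a round-robin load balancer, with a join step that
# restores a single output stream. The instance count is an assumption.

def make_scaled_processor(process_fn, instances=3):
    workers = [f"instance-{i}" for i in range(instances)]
    dispatch = itertools.cycle(workers)            # load balancing

    def handle(messages):
        results = []
        for msg in messages:
            worker = next(dispatch)                # pick the next instance
            results.append((worker, process_fn(msg)))
        # join routing: merge per-instance results back into one stream
        return [r for _, r in results]

    return handle

scaled = make_scaled_processor(lambda m: m.upper())
print(scaled(["a", "b", "c", "d"]))  # prints: ['A', 'B', 'C', 'D']
```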
Scalable Sub-Process
Sub-processes are an ordered set of message processors. Scaling of sub-processes can be performed for the cases with or without adapters.
Scalable Integration Process
Scaling the whole integration process can be done by copying it to several processing nodes. This could leverage VM-scaling, for example, multiple VMs with content.
Countermeasures Applied
The countermeasures discussed above can be brought into context with the categorized load situations.
As shown in
Based on the decision tree in
The rules look like the tuple: observable/state and action/countermeasure. For instance, the following example denotes a rule that translates to a scale out action plan in three iterations until the situation is under control:
Iteration 1:
Iteration 2:
Iteration 3:
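The observable/state and action/countermeasure tuples could be encoded as a small rule table, applied iteration by iteration until the situation is under control. The specific states and the three-step escalation below are illustrative assumptions, not the disclosed rule set.

```python
# Hypothetical sketch: rules as (observable/state, action/countermeasure)
# tuples, applied iteratively until the situation is under control.
# The state values and escalation sequence are illustrative assumptions.

RULES = [
    ("approaching overload", "scale out message processor"),   # iteration 1
    ("constant overload",    "scale out sub-process"),         # iteration 2
    ("increasing overload",  "scale out integration process"), # iteration 3
]

def next_action(observed_state):
    for state, action in RULES:
        if state == observed_state:
            return action
    return None  # situation under control, no countermeasure needed

print(next_action("constant overload"))  # prints: scale out sub-process
```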
In a typical implementation, a load pattern is first observed. An action is determined and applied based on the observed load pattern. After applying the action, the system continues to monitor the load pattern and takes appropriate actions to avoid overload.
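The observe, act, and continue-to-monitor loop just described might be sketched as follows. The simulated load trace, the limit ratio, and the assumed effect of scaling out are all illustrative.

```python
# Hypothetical sketch of the observe -> act -> monitor loop: observe the
# load, apply a countermeasure when the limit is exceeded, and keep
# monitoring afterwards. Trace, threshold, and effect are assumptions.

def monitor_loop(load_trace, capacity, limit_ratio=0.8):
    actions = []
    extra_capacity = 0
    for load in load_trace:
        effective_capacity = capacity + extra_capacity
        if load > limit_ratio * effective_capacity:
            # observed load pattern approaches overload: apply countermeasure
            extra_capacity += capacity  # scale out by one node's worth
            actions.append("scale out")
        else:
            actions.append("observe")   # within limits: keep monitoring
    return actions

print(monitor_loop([500, 700, 900, 900], capacity=1000))
# prints: ['observe', 'observe', 'scale out', 'observe']
```

Once the countermeasure takes effect (here, modeled as added capacity), the same load no longer exceeds the limit and the loop returns to plain monitoring.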
Multi-Level Resource Management and Elasticity Model
One of the general countermeasure variants in case of critical messaging and resource situations is elasticity. While this has been analyzed on a VM-level already by existing approaches, the described approach focuses on the outlined issues within the integration domain (including systems and capacities).
System Design
To illustrate the feasibility of this design,
General Setup
The system comprises an integration system 1502 with an integration engine 1504 (that is, the runtime) and an operational store 1506. The system already has load-balancing capabilities on different levels.
Multi-Level Machine Learning
An ML approach is used to determine two things during the execution of the system:
For the micro classifier, an example neural network with five output states (according to the cases discussed above) is learned. Table 3 shows the performance of the neural network, including the number of training data points, the number of misjudged load situations as errors, and the error ratio. Fifteen input data points are sufficient for this case because the performance does not improve when the number of input data points is increased to 700. As shown in Table 3, most of the cases can be recognized correctly. Only for the constant load case (that is, case 1) does the noise on the data (no straight line, but small ups and downs) leave the network uncertain about the current situation. Hence, the neural network scores all cases with a similar value. When recognizing this situation as the constant case, the error ratio of this case is close to zero errors.
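The input/output contract of the micro classifier (a window of 15 data points in, one of five states out) can be illustrated with a simple stand-in. The slope/level features and thresholds below replace the learned neural network purely for illustration; this is not the trained model.

```python
import statistics

# Hypothetical sketch: the micro classifier consumes a window of 15
# throughput data points and emits one of five output states. A learned
# neural network is replaced here by simple slope/level features purely
# to show the contract; thresholds and labels are assumptions.

STATES = ["constant load", "steadying overload", "approaching overload",
          "increasing overload", "free capacity"]

def classify_window(points, capacity):
    slope = (points[-1] - points[0]) / (len(points) - 1)
    level = statistics.mean(points) / capacity
    if level < 0.5:
        return "free capacity"
    if abs(slope) < 0.5:
        return "constant load"      # noisy flat lines land here, see text
    if slope > 0 and level > 0.9:
        return "increasing overload"
    if slope > 0:
        return "approaching overload"
    return "steadying overload"

window = [400 + 2 * i for i in range(15)]  # slowly rising load
print(classify_window(window, capacity=500))  # prints: approaching overload
```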
The sample implementation includes the following aspects:
Hypervisor Extensions
For the execution of the actions, the described approach extends the hypervisor to execute actions based on the action plans it gets from the system. It does not contain any additional logic about the decisions made; however, it uses its existing primitives, like create VM, and additional ones, like scale IS operation or scale sub-process, according to the action plan.
Guiding Example
Following is a guiding example including:
The computer 1602 can serve in a role as a client, network component, a server, a database or other persistency, or any other component (or a combination of roles) of a computer system for performing the subject matter described in the instant disclosure. The illustrated computer 1602 is communicably coupled with a network 1630. In some implementations, one or more components of the computer 1602 may be configured to operate within environments, including cloud-computing-based, local, global, or other environment (or a combination of environments).
At a high level, the computer 1602 is an electronic computing device operable to receive, transmit, process, store, or manage data and information associated with the described subject matter. According to some implementations, the computer 1602 may also include or be communicably coupled with an application server, e-mail server, web server, caching server, streaming data server, or other server (or a combination of servers).
The computer 1602 can receive requests over network 1630 from a client application (for example, executing on another computer 1602) and respond to the received requests by processing them in an appropriate software application. In addition, requests may also be sent to the computer 1602 from internal users (for example, from a command console or by another appropriate access method), external or third parties, other automated applications, as well as any other appropriate entities, individuals, systems, or computers.
Each of the components of the computer 1602 can communicate using a system bus 1603. In some implementations, any or all of the components of the computer 1602, whether hardware or software (or a combination of hardware and software), may interface with each other or the interface 1604 (or a combination of both) over the system bus 1603 using an application programming interface (API) 1612 or a service layer 1613 (or a combination of the API 1612 and service layer 1613). The API 1612 may include specifications for routines, data structures, and object classes. The API 1612 may be either computer-language independent or dependent and refer to a complete interface, a single function, or even a set of APIs. The service layer 1613 provides software services to the computer 1602 or other components (whether or not illustrated) that are communicably coupled to the computer 1602. The functionality of the computer 1602 may be accessible to all service consumers using this service layer. Software services, such as those provided by the service layer 1613, provide reusable, defined functionalities through a defined interface. For example, the interface may be software written in JAVA, C++, or another suitable language providing data in extensible markup language (XML) format or another suitable format. While illustrated as an integrated component of the computer 1602, alternative implementations may illustrate the API 1612 or the service layer 1613 as stand-alone components in relation to other components of the computer 1602 or other components (whether or not illustrated) that are communicably coupled to the computer 1602. Moreover, any or all parts of the API 1612 or the service layer 1613 may be implemented as child or sub-modules of another software module, enterprise application, or hardware module without departing from the scope of this disclosure.
The computer 1602 includes an interface 1604. Although illustrated as a single interface 1604 in
The computer 1602 includes a processor 1605. Although illustrated as a single processor 1605 in
The computer 1602 also includes a database 1606 that can hold data for the computer 1602 or other components (or a combination of both) that can be connected to the network 1630 (whether illustrated or not). For example, database 1606 can be an in-memory, conventional, or other type of database storing data consistent with this disclosure. In some implementations, database 1606 can be a combination of two or more different database types (for example, a hybrid in-memory and conventional database) according to particular needs, desires, or particular implementations of the computer 1602 and the described functionality. Although illustrated as a single database 1606 in
The computer 1602 also includes a memory 1607 that can hold data for the computer 1602 or other components (or a combination of both) that can be connected to the network 1630 (whether illustrated or not). For example, memory 1607 can be random access memory (RAM), read-only memory (ROM), optical, magnetic, and the like storing data consistent with this disclosure. In some implementations, memory 1607 can be a combination of two or more different types of memory (for example, a combination of RAM and magnetic storage) according to particular needs, desires, or particular implementations of the computer 1602 and the described functionality. Although illustrated as a single memory 1607 in
The application 1608 is an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer 1602, particularly with respect to functionality described in this disclosure. For example, application 1608 can serve as one or more components, modules, applications, etc. Further, although illustrated as a single application 1608, the application 1608 may be implemented as multiple applications 1608 on the computer 1602. In addition, although illustrated as integral to the computer 1602, in alternative implementations, the application 1608 can be external to the computer 1602.
There may be any number of computers 1602 associated with, or external to, a computer system containing computer 1602, each computer 1602 communicating over network 1630. Further, the term “client,” “user,” and other appropriate terminology may be used interchangeably as appropriate without departing from the scope of this disclosure. Moreover, this disclosure contemplates that many users may use one computer 1602, or that one user may use multiple computers 1602.
Described implementations of the subject matter can include one or more features, alone or in combination.
For example, in a first implementation, a computer-implemented method comprising: determining system-level resource capacities and application-level resource capacities associated with an integration system in a distributed computing environment, the integration system including an integration process; identifying a workload associated with the integration system based on the determined system-level capacities and application-level capacities; identifying at least one constraint associated with the integration system; and determining a countermeasure for resource elasticity and management based on the identified workload and constraint.
The foregoing and other described implementations can each optionally include one or more of the following features:
A first feature, combinable with any of the following features, wherein the system-level resource capacities include at least one of a resource capacity of CPU, memory, disk input/output, or network bandwidth.
A second feature, combinable with any of the previous or following features, wherein the application-level capacities include at least one of a limit of throughput, message size, number of messages, or number of connections.
A third feature, combinable with any of the previous or following features, wherein identifying the workload includes identifying at least one of a micro-load pattern, a macro-load pattern, or an urgency of performing resource optimization.
A fourth feature, combinable with any of the previous or following features, wherein the identified workload includes at least one of constant overload, steadying overload, approaching overload, increasing overload, constant free capacity, approaching equal capacity, approaching free capacity, or increasing free capacity.
A fifth feature, combinable with any of the previous or following features, wherein the constraint includes at least one of whether the integration process is stateless or stateful, whether the integration process can lose data, whether the integration process can handle streaming, or whether the integration process can handle micro-batching.
A sixth feature, combinable with any of the previous or following features, wherein when the identified workload is constant free capacity, the countermeasure includes at least one of early projection, early selection, streaming, or micro-batching.
A seventh feature, combinable with any of the previous or following features, wherein when the identified workload is one of constant overload, steadying overload, approaching overload, or increasing overload and the constraint is that the integration process can lose data, the countermeasure includes at least one of a message rejecter or a message sampler.
An eighth feature, combinable with any of the previous or following features, wherein when the identified workload is one of constant overload, steadying overload, approaching overload, or increasing overload and the constraint is that the integration process cannot lose data, the countermeasure includes at least one of a message splitter or scaling out.
A ninth feature, combinable with any of the previous or following features, wherein the scaling out includes at least one of scaling an adapter, scaling a message processor, scaling a sub-process, scaling an integration process, or scaling an integration flow.
A tenth feature, combinable with any of the previous or following features, the method further comprising: evaluating effectiveness of the countermeasure; and storing information of the effectiveness of the countermeasure.
An eleventh feature, combinable with any of the previous or following features, the method further comprising identifying an action plan based on the countermeasure and historical effectiveness of the countermeasure.
In a second implementation, a non-transitory, computer-readable medium storing one or more instructions executable by a computer system to perform operations comprising: determining system-level resource capacities and application-level resource capacities associated with an integration system in a distributed computing environment, the integration system including an integration process; identifying a workload associated with the integration system based on the determined system-level capacities and application-level capacities; identifying at least one constraint associated with the integration system; and determining a countermeasure for resource elasticity and management based on the identified workload and constraint.
The foregoing and other described implementations can each optionally include one or more of the following features:
A first feature, combinable with any of the following features, wherein the system-level resource capacities include at least one of a resource capacity of CPU, memory, disk input/output, or network bandwidth.
A second feature, combinable with any of the previous or following features, wherein the application-level capacities include at least one of a limit of throughput, a message size, a number of messages, or a number of connections.
A third feature, combinable with any of the previous or following features, wherein the identified workload includes at least one of constant overload, steadying overload, approaching overload, increasing overload, constant free capacity, approaching equal capacity, approaching free capacity, or increasing free capacity.
A fourth feature, combinable with any of the previous or following features, wherein the constraint includes at least one of whether the integration process is stateless or stateful, whether the integration process can lose data, whether the integration process can handle streaming, or whether the integration process can handle micro-batching.
A fifth feature, combinable with any of the previous or following features, comprising one or more instructions to: evaluate effectiveness of the countermeasure; and store information of the effectiveness of the countermeasure.
A sixth feature, combinable with any of the previous or following features, comprising one or more instructions to identify an action plan based on the countermeasure and historical effectiveness of the countermeasure.
In a third implementation, a computer-implemented system comprising a computer memory and a hardware processor interoperably coupled with the computer memory and configured to perform operations comprising: determining system-level resource capacities and application-level resource capacities associated with an integration system in a distributed computing environment, the integration system including an integration process; identifying a workload associated with the integration system based on the determined system-level capacities and application-level capacities; identifying at least one constraint associated with the integration system; and determining a countermeasure for resource elasticity and management based on the identified workload and constraint.
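The workload-to-countermeasure mapping described in the preceding features (overload workloads combined with a data-loss constraint selecting among a message rejecter, a message sampler, a message splitter, or scaling out) can be sketched as a simple decision function. The following is an illustrative, non-limiting sketch only; the function name, enumeration values, and string labels are assumptions introduced for exposition and are not part of the claimed implementations.

```python
from enum import Enum, auto

class Workload(Enum):
    """Workload situations named in the third and twelfth features."""
    CONSTANT_OVERLOAD = auto()
    STEADYING_OVERLOAD = auto()
    APPROACHING_OVERLOAD = auto()
    INCREASING_OVERLOAD = auto()
    CONSTANT_FREE_CAPACITY = auto()
    APPROACHING_EQUAL_CAPACITY = auto()
    APPROACHING_FREE_CAPACITY = auto()
    INCREASING_FREE_CAPACITY = auto()

# Workloads treated as overload conditions in the seventh and eighth features.
OVERLOAD_WORKLOADS = {
    Workload.CONSTANT_OVERLOAD,
    Workload.STEADYING_OVERLOAD,
    Workload.APPROACHING_OVERLOAD,
    Workload.INCREASING_OVERLOAD,
}

def determine_countermeasure(workload: Workload, can_lose_data: bool) -> list[str]:
    """Map an identified workload and a data-loss constraint to candidate
    countermeasures, mirroring the seventh and eighth features (hypothetical
    labels; scaling out may further target an adaptor, message processor,
    sub-process, integration process, or integration flow per the ninth
    feature)."""
    if workload not in OVERLOAD_WORKLOADS:
        # Free or equal capacity: no overload countermeasure is indicated.
        return []
    if can_lose_data:
        return ["message rejecter", "message sampler"]
    return ["message splitter", "scale out"]
```

In practice, such a function would be one step in a larger loop that also evaluates the effectiveness of the applied countermeasure and stores that information to inform a later action plan, as described in the tenth and eleventh features.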
Implementations of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer programs, that is, one or more modules of computer program instructions encoded on a tangible, non-transitory, computer-readable computer-storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively, or in addition, the program instructions can be encoded on an artificially generated propagated signal, for example, a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer-storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of computer-storage mediums.
The terms “real-time,” “real time,” “realtime,” “real (fast) time (RFT),” “near(ly) real-time (NRT),” “quasi real-time,” or similar terms (as understood by one of ordinary skill in the art), mean that an action and a response are temporally proximate such that an individual perceives the action and the response occurring substantially simultaneously. For example, the time difference for a response to display (or for an initiation of a display) of data following the individual's action to access the data may be less than 1 ms, less than 1 sec., less than 5 secs., etc. While the requested data need not be displayed (or initiated for display) instantaneously, it is displayed (or initiated for display) without any intentional delay, taking into account processing limitations of a described computing system and time required to, for example, gather, accurately measure, analyze, process, store, or transmit the data.
The terms “data processing apparatus,” “computer,” or “electronic computer device” (or equivalent as understood by one of ordinary skill in the art) refer to data processing hardware and encompass all kinds of apparatus, devices, and machines for processing data, including by way of example, a programmable processor, a computer, or multiple processors or computers. The apparatus can also be or further include special purpose logic circuitry, for example, a central processing unit (CPU), an FPGA (field programmable gate array), or an ASIC (application-specific integrated circuit). In some implementations, the data processing apparatus or special purpose logic circuitry (or a combination of the data processing apparatus or special purpose logic circuitry) may be hardware- or software-based (or a combination of both hardware- and software-based). The apparatus can optionally include code that creates an execution environment for computer programs, for example, code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of execution environments. The present disclosure contemplates the use of data processing apparatuses with or without conventional operating systems, for example, LINUX, UNIX, WINDOWS, MAC OS, ANDROID, IOS, or any other suitable conventional operating system.
A computer program, which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, for example, one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, for example, files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. While portions of the programs illustrated in the various figures are shown as individual modules that implement the various features and functionality through various objects, methods, or other processes, the programs may instead include a number of sub-modules, third-party services, components, libraries, and such, as appropriate. Conversely, the features and functionality of various components can be combined into single components as appropriate. Thresholds used to make computational determinations can be statically, dynamically, or both statically and dynamically determined.
The methods, processes, logic flows, etc. described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The methods, processes, logic flows, etc. can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, for example, a CPU, an FPGA, or an ASIC.
Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors, both, or any other kind of CPU. Generally, a CPU will receive instructions and data from a read-only memory (ROM) or a random access memory (RAM), or both. The essential elements of a computer are a CPU, for performing or executing instructions, and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to, receive data from or transfer data to, or both, one or more mass storage devices for storing data, for example, magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, for example, a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a global positioning system (GPS) receiver, or a portable storage device, for example, a universal serial bus (USB) flash drive, to name just a few.
Computer-readable media (transitory or non-transitory, as appropriate) suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, for example, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and flash memory devices; magnetic disks, for example, internal hard disks or removable disks; magneto-optical disks; and CD-ROM, DVD+/−R, DVD-RAM, and DVD-ROM disks. The memory may store various objects or data, including caches, classes, frameworks, applications, backup data, jobs, web pages, web page templates, database tables, repositories storing dynamic information, and any other appropriate information including any parameters, variables, algorithms, instructions, rules, constraints, or references thereto. Additionally, the memory may include any other appropriate data, such as logs, policies, security or access data, reporting files, as well as others. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, implementations of the subject matter described in this specification can be implemented on a computer having a display device, for example, a CRT (cathode ray tube), LCD (liquid crystal display), LED (Light Emitting Diode), or plasma monitor, for displaying information to the user and a keyboard and a pointing device, for example, a mouse, trackball, or trackpad by which the user can provide input to the computer. Input may also be provided to the computer using a touchscreen, such as a tablet computer surface with pressure sensitivity, a multi-touch screen using capacitive or electric sensing, or other type of touchscreen. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, for example, visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
The term “graphical user interface,” or “GUI,” may be used in the singular or the plural to describe one or more graphical user interfaces and each of the displays of a particular graphical user interface. Therefore, a GUI may represent any graphical user interface, including but not limited to, a web browser, a touch screen, or a command line interface (CLI) that processes information and efficiently presents the information results to the user. In general, a GUI may include a plurality of user interface (UI) elements, some or all associated with a web browser, such as interactive fields, pull-down lists, and buttons. These and other UI elements may be related to or represent the functions of the web browser.
Implementations of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, for example, as a data server, or that includes a middleware component, for example, an application server, or that includes a front-end component, for example, a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of wireline or wireless digital data communication (or a combination of data communication), for example, a communication network. Examples of communication networks include a local area network (LAN), a radio access network (RAN), a metropolitan area network (MAN), a wide area network (WAN), Worldwide Interoperability for Microwave Access (WIMAX), a wireless local area network (WLAN) using, for example, 802.11 a/b/g/n or 802.20 (or a combination of 802.11x and 802.20 or other protocols consistent with this disclosure), all or a portion of the Internet, or any other communication system or systems at one or more locations (or a combination of communication networks). The network may communicate with, for example, Internet Protocol (IP) packets, Frame Relay frames, Asynchronous Transfer Mode (ATM) cells, voice, video, data, or other suitable information (or a combination of communication types) between network addresses.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular implementations of particular inventions. Certain features that are described in this specification in the context of separate implementations can also be implemented, in combination, in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations, separately, or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can, in some cases, be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
Particular implementations of the subject matter have been described. Other implementations, alterations, and permutations of the described implementations are within the scope of the following claims as will be apparent to those skilled in the art. While operations are depicted in the drawings or claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed (some operations may be considered optional), to achieve desirable results. In certain circumstances, multitasking or parallel processing (or a combination of multitasking and parallel processing) may be advantageous and performed as deemed appropriate.
Moreover, the separation or integration of various system modules and components in the implementations described above should not be understood as requiring such separation or integration in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Accordingly, the above description of example implementations does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure.
Furthermore, any claimed implementation below is considered to be applicable to at least a computer-implemented method; a non-transitory, computer-readable medium storing computer-readable instructions to perform the computer-implemented method; and a computer system comprising a computer memory interoperably coupled with a hardware processor configured to perform the computer-implemented method or the instructions stored on the non-transitory, computer-readable medium.