The present disclosure generally relates to identifying the relevance of process inputs, and more particularly, to computing the relevance of inputs tracked by sensors in a system to identify an importance of each sensor in the sensor array and how the sensor affects the process output.
Predictive models may be used to analyze existing systems and processes and make predictions about future outcomes. The predictive models may be generated using collected and aggregated observations to make predictions. One or more dependent (output or response) variables and numerous independent (input or explanatory) variables may be present in a collection of observations. One or more of the independent variables may have more influence on the dependent variable than the remaining independent variables.
According to an embodiment of the present disclosure, a method is disclosed. The method comprises computing an input relevance measure of process inputs by obtaining from a sensor array sensor measurements that relate process inputs to a process output, computing from the sensor measurements, a number of data partitions, building for each data partition of the number of data partitions a corresponding stochastic gradient boosting model, and computing for each process input, a number of partial dependency plots with each partial dependency plot being based on the corresponding stochastic gradient boosting model. The input relevance measure is then computed for each process input based on the number of partial dependency plots of the process input to estimate a degree of change in the process output obtained by varying the process input.
In an aspect, process inputs with a zero or negative input relevance measure may be discarded.
In one aspect, the input relevance measure is computed from the number of partial dependency plots by computing a difference between a maximum confirmed process output of the number of partial dependency plots and a minimum confirmed process output of the number of partial dependency plots.
According to an embodiment of the present disclosure, a system is disclosed. The system includes a sensor array and a processor that is configured to obtain from the sensor array sensor measurements that relate a plurality of process inputs to a process output and compute from the sensor measurements, a number of data partitions. The system then builds for each data partition a corresponding stochastic gradient boosting model and generates for each process input a number of partial dependency plots with each partial dependency plot being based on the corresponding stochastic gradient boosting model. The system then computes for each process input, an input relevance measure that estimates a degree of change in the process output obtained by varying the process input.
According to an embodiment of the present disclosure, a non-transitory computer readable storage medium is disclosed to store a program which, when executed by a computer system, causes the computer system to perform any of the methods described herein.
To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it should be apparent that the present teachings may be practiced without such details. In other instances, well-known methods, procedures, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present teachings.
The illustrative embodiments relate to computing the relevance of inputs to a process performed with the aid of a sensor array. In various industrial processes, it may be beneficial to identify the relevant inputs to ensure that the output of a process is within desired bounds. For example, it may be beneficial to identify which operating parameters of a gas turbine may be used to minimize the environmental impact of the turbine in terms of NOx emissions (nitric oxide, NO, and nitrogen dioxide, NO2, the nitrogen oxides most relevant for air pollution). In another example, it may be beneficial to identify which operating parameters of a gas turbine may be used to minimize CO (carbon monoxide) emissions. Generally, identifying which sensors may be monitored in real time to prevent an industrial process from going out of normal operating limits may be not only beneficial but also a significantly arduous task due to random noise inputs.
The illustrative embodiments recognize that predictive analytics methods may be utilized to generate models that capture dependency between process inputs and output/response based on historical data. In particular, stochastic gradient boosting models, due to their data driven nature, may capture the nature of the dependencies. Generally, stochastic gradient boosting models may display the nature of univariate and bivariate dependencies as partial dependence plots which may be used to identify which inputs of the process achieve desired objectives. However, due to the “greedy” nature of such models, they may accommodate both signal and random noise inputs, thus, introducing hazards including exaggeration of a degree of change expected from controlling a specific process input as well as the suggested nature of dependencies on the given input being a meaningless random pattern.
The illustrative embodiments identify a relevant subset of influential process inputs and provide realistic (conservative) estimates of the degree of change expected in the process output due to varying the inputs. The illustrative embodiments compute an input relevance measure of process inputs by obtaining from a sensor array, sensor measurements that relate the process inputs to a process output. The illustrative embodiments compute from the sensor measurements, a number of data partitions, building for each data partition of the number of data partitions a corresponding stochastic gradient boosting model, and computing for each process input, a number of partial dependency plots with each partial dependency plot being based on the corresponding stochastic gradient boosting model. The illustrative embodiments then compute an input relevance measure for each process input, using the number of partial dependency plots of the process input, to estimate a degree of change in the process output obtained by varying the process input.
The methods described herein are significantly beneficial in their estimation richness, veracity, and noise impact reduction, especially in their elimination, or at least reduction, of the data-induced uncertainty inherent in stochastic gradient boosting models due to the arbitrariness of a single round of train-test partitioning. An embodiment can be implemented as a software and/or hardware application. The application implementing an embodiment can be configured as a modification of an existing system, as a separate application that operates in conjunction with an existing system, as a standalone application, or some combination thereof.
This manner of computing input relevance measures is unavailable in the presently available methods in the technological field of endeavor pertaining to processes involving the real-time measurement and monitoring of sensor arrays such as manufacturing and other industrial applications. A method of an embodiment described herein, when implemented to execute on a device or data processing system, comprises substantial advancement of the computational functionality of that device or data processing system in configuring the performance of a monitoring platform.
The illustrative embodiments are described with respect to certain types of machines developing statistical and predictive analytic models based on data records obtained from sensor measurements or data. The illustrative embodiments are also described with respect to other scenes, subjects, measurements, devices, data processing systems, environments, components, and applications only as examples. Any specific manifestations of these and other similar artifacts are not intended to be limiting to the invention. Any suitable manifestation of these and other similar artifacts can be selected within the scope of the illustrative embodiments.
Furthermore, the illustrative embodiments may be implemented with respect to any type of data, data source, or access to a data source over a data network. Any type of data storage device may provide the data to an embodiment of the invention, either locally at a data processing system or over a data network, within the scope of the invention. Where an embodiment is described using a mobile device, any type of data storage device suitable for use with the mobile device may provide the data to such embodiment, either locally at the mobile device or over a data network, within the scope of the illustrative embodiments.
The illustrative embodiments are described using specific surveys, code, hardware, algorithms, designs, architectures, protocols, layouts, schematics, and tools only as examples and are not limiting to the illustrative embodiments. Furthermore, the illustrative embodiments are described in some instances using particular software, tools, and data processing environments only as an example for the clarity of the description. The illustrative embodiments may be used in conjunction with other comparable or similarly purposed structures, systems, applications, or architectures. For example, other comparable devices, structures, systems, applications, or architectures therefor, may be used in conjunction with such embodiment of the invention within the scope of the invention. An illustrative embodiment may be implemented in hardware, software, or a combination thereof.
The examples in this disclosure are used only for the clarity of the description and are not limiting to the illustrative embodiments. Additional data, operations, actions, tasks, activities, and manipulations will be conceivable from this disclosure and the same are contemplated within the scope of the illustrative embodiments.
Any advantages listed herein are only examples and are not intended to be limiting to the illustrative embodiments. Additional or different advantages may be realized by specific illustrative embodiments. Furthermore, a particular illustrative embodiment may have some, all, or none of the advantages listed above.
With reference to the figures and in particular with reference to
Clients or servers are only example roles of certain data processing systems connected to network 102 and are not intended to exclude other configurations or roles for these data processing systems. Server 104 and server 106 couple to network 102 along with storage unit 108. Software applications may execute on any computer in data processing environment 100. Client 110, client 112, client 114 are also coupled to network 102. A data processing system, such as server 104 or server 106, or clients (client 110, client 112, client 114) may contain data and may have software applications or software tools executing thereon. Server 104 may include one or more GPUs (graphics processing units) for training one or more models.
Only as an example, and without implying any limitation to such architecture,
Device 120 is an example of a device described herein. For example, device 120 can take the form of a smartphone, a special purpose fabrication platform, a tablet computer, a laptop computer, client 110 in a stationary or a portable form, a wearable computing device, or any other suitable device. Any software application described as executing in another data processing system in
Input relevance engine 128 may execute as part of sensor recommender system 124, client application 122, server application 116 or on any data processing system herein. Input relevance engine 128 may also execute as a cloud service communicatively coupled to system services, hardware resources, or software elements described herein. Input relevance engine 128 may be operable to compute an input relevance measure of process inputs in a desired process. Sensor recommender system 124 may recommend one or more sensors of a sensor array 126 to monitor in real time to prevent an industrial process from going out of normal/desired operating limits. Database 118 of storage unit 108 stores one or more measurements or data in repositories for computations herein.
Server application 116 implements an embodiment described herein. Server application 116 can use data from storage unit 108 for computations herein. Server application 116 can also obtain data from any client for computations. Server application 116 can also execute in any of data processing systems (server 104 or server 106, client 110, client 112, client 114), such as client application 122 in client 110 and need not execute in the same system as server 104.
Server 104, server 106, storage unit 108, client 110, client 112, client 114, device 120 may couple to network 102 using wired connections, wireless communication protocols, or other suitable data connectivity. Client 110, client 112 and client 114 may be, for example, personal computers or network computers.
In the depicted example, server 104 may provide data, such as boot files, operating system images, and applications to client 110, client 112, and client 114. Client 110, client 112 and client 114 may be clients to server 104 in this example. Client 110, client 112 and client 114 or some combination thereof, may include their own data, boot files, operating system images, and applications. Data processing environment 100 may include additional servers, clients, and other devices that are not shown. Server 104 includes a server application 116 that may be configured to implement one or more of the functions described herein in accordance with one or more embodiments.
Server 106 may include a configuration to gather sensor array measurements and store the measurements in database 118 for automatic computation of input relevance measures.
An operator of the sensor recommender system 124 can include individuals, computer applications, and electronic devices. The operators may employ the input relevance engine 128 of the sensor recommender system 124 to make predictions or decisions. An operator may desire that the input relevance engine 128 perform methods that satisfy predetermined evaluation/relevance criteria. The operator may be a processor or a person or both.
The data processing environment 100 may also be the Internet. Network 102 may represent a collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) and other protocols to communicate with one another. At the heart of the Internet is a backbone of data communication links between major nodes or host computers, including thousands of commercial, governmental, educational, and other computer systems that route data and messages. Of course, data processing environment 100 also may be implemented as a number of different types of networks, such as for example, an intranet, a local area network (LAN), or a wide area network (WAN).
Among other uses, data processing environment 100 may be used for implementing a client-server environment in which the illustrative embodiments may be implemented. A client-server environment enables software applications and data to be distributed across a network such that an application functions by using the interactivity between a client data processing system and a server data processing system. Data processing environment 100 may also employ a service-oriented architecture where interoperable software components distributed across a network may be packaged together as coherent business applications. Data processing environment 100 may also take the form of a cloud, and employ a cloud computing model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service.
With reference to
Data processing system 200 is also representative of a data processing system or a configuration therein, such as device 120 in
In the depicted example, data processing system 200 employs a hub architecture including North Bridge and memory controller hub (NB/MCH) 202 and South Bridge and input/output (I/O) controller hub (SB/ICH) 204. Processing unit 206, main memory 208, and graphics processor 210 are coupled to North Bridge and memory controller hub (NB/MCH) 202. Processing unit 206 may contain one or more processors and may be implemented using one or more heterogeneous processor systems. Processing unit 206 may be a multi-core processor. Graphics processor 210 may be coupled to North Bridge and memory controller hub (NB/MCH) 202 through an accelerated graphics port (AGP) in certain implementations.
In the depicted example, local area network (LAN) adapter 212 is coupled to South Bridge and input/output (I/O) controller hub (SB/ICH) 204. Audio adapter 216, keyboard and mouse adapter 220, modem 222, read only memory (ROM) 224, universal serial bus (USB) and other ports 232, and PCI/PCIe devices 234 are coupled to South Bridge and input/output (I/O) controller hub (SB/ICH) 204 through bus 218. Hard disk drive (HDD) or solid-state drive (SSD) 226a and CD-ROM 230 are coupled to South Bridge and input/output (I/O) controller hub (SB/ICH) 204 through bus 228. PCI/PCIe devices 234 may include, for example, Ethernet adapters, add-in cards, and PC cards for notebook computers. PCI uses a card bus controller, while PCIe does not. Read only memory (ROM) 224 may be, for example, a flash binary input/output system (BIOS). Hard disk drive (HDD) or solid-state drive (SSD) 226a and CD-ROM 230 may use, for example, an integrated drive electronics (IDE), serial advanced technology attachment (SATA) interface, or variants such as external-SATA (eSATA) and micro-SATA (mSATA). A super I/O (SIO) device 236 may be coupled to South Bridge and input/output (I/O) controller hub (SB/ICH) 204 through bus 218.
Memories, such as main memory 208, read only memory (ROM) 224, or flash memory (not shown), are some examples of computer usable storage devices. Hard disk drive (HDD) or solid-state drive (SSD) 226a, CD-ROM 230, and other similarly usable devices are some examples of computer usable storage devices including a computer usable storage medium.
An operating system runs on processing unit 206. The operating system coordinates and provides control of various components within data processing system 200 in
Instructions for the operating system, the object-oriented programming system, and applications or programs, such as server application 116 and client application 122 in
Furthermore, in one case, code 226b may be downloaded over network 214a from remote system 214b, where similar code 214c is stored on a storage device 214d. In another case, code 226b may be downloaded over network 214a to remote system 214b, where downloaded code 214c is stored on a storage device 214d.
The hardware in
In some illustrative examples, data processing system 200 may be a personal digital assistant (PDA), which is generally configured with flash memory to provide non-volatile memory for storing operating system files and/or user-generated data. A bus system may comprise one or more buses, such as a system bus, an I/O bus, and a PCI bus. Of course, the bus system may be implemented using any type of communications fabric or architecture that provides for a transfer of data between different components or devices attached to the fabric or architecture.
A communications unit may include one or more devices used to transmit and receive data, such as a modem or a network adapter. A memory may be, for example, main memory 208 or a cache, such as the cache found in North Bridge and memory controller hub (NB/MCH) 202. A processing unit may include one or more processors or CPUs.
The depicted examples in
Where a computer or data processing system is described as a virtual machine, a virtual device, or a virtual component, the virtual machine, virtual device, or the virtual component operates in the manner of data processing system 200 using virtualized manifestation of some or all components depicted in data processing system 200. For example, in a virtual machine, virtual device, or virtual component, processing unit 206 is manifested as a virtualized instance of all or some number of hardware processing units 206 available in a host data processing system, main memory 208 is manifested as a virtualized instance of all or some portion of main memory 208 that may be available in the host data processing system, and Hard disk drive (HDD) or solid-state drive (SSD) 226a is manifested as a virtualized instance of all or some portion of Hard disk drive (HDD) or solid-state drive (SSD) 226a that may be available in the host data processing system. The host data processing system in such cases is represented by data processing system 200.
Turning now to
More specifically, an operator may desire to identify relevant inputs to ensure that the output of a process is within the predetermined bounds. Further, the operator may desire to identify limits in which process inputs 308 may be pushed taking into consideration the effect on a corresponding process output 310. The system 302 may be used by the operator to compute an estimate of the degree of change expected in the process output 310 from varying the set of process inputs 308. The system 302 of
The application 402 comprises an aggregator 312, a partition module 406, a model generator 408, a partial dependency plot module 410, and an input relevance measure module 412. The application 402 may be configured to employ the sensor array 126 and a processor to obtain from the sensor array 126 a plurality of sensor measurements that relate the process inputs 308 to the process output 310. More specifically, the application 402 may collect, via the aggregator 312, a working set (WorkingSet) of historical process measurements, including J process inputs 308 (ProcessInputj) and a process output 310 (ProcessOutput). The individual measurements may be obtained from various sensors 304 of the sensor array 126, as well as stored sensor measurements such as stored electronic records, archived documents, and other tangible records.
Responsive to obtaining the measurements, the partition module 406 may be operable to create a first number, P, of Train/Test random partitions (Partitionp) of the WorkingSet. The random partitions may be, for example, P 50/50 random partitions (i.e., each partition has 50% of the data as part of a training dataset and 50% as part of a test subset), each of the P 50/50 random partitions using all of the data in the measurement data/WorkingSet. For example, thirty 50/50 random partitions of the measurement data may be generated, with each random partition being randomly different from the others. Of course, this is not meant to be limiting, as other splits, such as 70/30 or 80/20, may be used in view of the descriptions herein. In an aspect herein, the data partitioning may be performed using a random split into two mutually exclusive Train and Test samples. Alternatively, bootstrap sampling with replacement can be used to obtain Train (in bag) and Test (out of bag) samples.
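The partitioning step above can be sketched as follows. This is a minimal illustration only; the function name, the 50/50 default, and the fixed seed are assumptions, and, as noted, bootstrap sampling with replacement could be substituted for the mutually exclusive random split.

```python
import random

def make_partitions(n_obs, P=30, train_frac=0.5, seed=0):
    """Create P random Train/Test partitions of a working set of n_obs rows.

    Each partition shuffles all row indices and splits them into two
    mutually exclusive subsets (a 50/50 split by default), so every
    partition uses all of the data and differs randomly from the others.
    """
    rng = random.Random(seed)
    partitions = []
    for _ in range(P):
        idx = list(range(n_obs))
        rng.shuffle(idx)
        n_train = int(n_obs * train_frac)
        partitions.append((idx[:n_train], idx[n_train:]))
    return partitions

# Thirty 50/50 partitions of a 10,000-observation working set:
partitions = make_partitions(n_obs=10000, P=30)
```

Each tuple in `partitions` then supplies the Trainp and Testp index sets for one stochastic gradient boosting model Mp.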
Upon obtaining the data partitions, the model generator 408 may be operable to generate, for each partition of the P partitions, a stochastic gradient boosting model Mp to predict the process output 310 (ProcessOutput) using Trainp as the training sample, Testp as the test sample, and predictors Xj, j=1, . . . , J, wherein J is the number of process inputs 308. Thus, P stochastic gradient boosting models may be obtained. The illustrative embodiments recognize that stochastic gradient boosting algorithms, which are data driven, have the ability to capture dependencies and display the nature of the dependencies as partial dependence plots. Stochastic gradient boosting is a powerful machine learning technique that may be used for regression and classification tasks. It is an ensemble learning method that combines the predictions of multiple weak learners, typically decision trees, to create a strong predictive model. By introducing randomness into the training process, improvements may be observed in both the accuracy and efficiency of the model. Generally, stochastic gradient boosting comprises gradient boosting, which builds an ensemble model by sequentially adding "weak learners" to correct the errors made by the previous ones. The technique minimizes a loss function (typically mean squared error for regression or logistic loss for binary classification) by iteratively fitting new weak learners to the residuals of the previous ones. Stochastic gradient boosting also draws on the idea behind stochastic gradient descent (SGD), which seeks the minimum of a loss function using randomly sampled data, thereby introducing randomness into the training process. In stochastic gradient boosting, the stochasticity of SGD is applied to the gradient boosting algorithm: during each iteration of the boosting process, a random subset of the training data and/or available predictors is sampled to train the weak learner.
This randomness may help prevent overfitting and improve statistical performance of the resulting model.
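As a concrete illustration of the procedure just described, the following is a deliberately simplified from-scratch sketch of stochastic gradient boosting for regression. It uses single-split regression stumps as the weak learners and fits each stump to the current residuals on a random subsample of the training rows; real implementations use full decision trees and library code, and all names here are hypothetical.

```python
import random

def fit_stump(X, resid, idx):
    """Best single-feature threshold split of the residuals (squared error)."""
    best = None
    for j in range(len(X[0])):
        values = sorted(set(X[i][j] for i in idx))
        for k in range(1, len(values)):
            thr = (values[k - 1] + values[k]) / 2
            left = [resid[i] for i in idx if X[i][j] <= thr]
            right = [resid[i] for i in idx if X[i][j] > thr]
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            err = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or err < best[0]:
                best = (err, j, thr, ml, mr)
    return best[1:]  # (feature, threshold, left value, right value)

def fit_sgb(X, y, n_trees=100, learn_rate=0.2, subsample=0.5, seed=0):
    """Stochastic gradient boosting: each stump is fit to the current
    residuals on a random subsample of the training rows."""
    rng = random.Random(seed)
    n = len(y)
    f0 = sum(y) / n            # initial prediction: the mean response
    pred = [f0] * n
    stumps = []
    for _ in range(n_trees):
        idx = rng.sample(range(n), max(2, int(n * subsample)))
        resid = [y[i] - pred[i] for i in range(n)]
        j, thr, vl, vr = fit_stump(X, resid, idx)
        stumps.append((j, thr, vl, vr))
        for i in range(n):
            pred[i] += learn_rate * (vl if X[i][j] <= thr else vr)
    return (f0, learn_rate, stumps)

def predict(model, x):
    f0, lr, stumps = model
    return f0 + sum(lr * (vl if x[j] <= thr else vr)
                    for j, thr, vl, vr in stumps)
```

The per-iteration subsampling is the "stochastic" element: each weak learner sees a different random view of the data, which tends to reduce overfitting relative to plain gradient boosting.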
In an aspect herein, to differentiate between signals and noise further computations may be performed as described herein. The partial dependency plot module 410 may be operable to compute for each process input 308 (ProcessInputj) P partial dependency plots (Plotsj). More specifically, the partial dependency plot module 410 may extract for each process input 308 a matrix of plots Plotjp, j=1, . . . , J, p=1, . . . P, wherein “Plotjp” represents each partial dependency plot of predictor Xj generated by model Mp. The input relevance measure for each process input 308 is then computed from the matrix of plots of each process input 308 by computing a difference between a maximum confirmed process output and a minimum confirmed process output using the matrix of plots.
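A partial dependency plot of the kind extracted by the partial dependency plot module 410 can be sketched as follows. This is a minimal illustration: `predict_fn` stands in for any fitted model Mp, and the names are assumptions.

```python
def partial_dependence(predict_fn, X, j, grid):
    """Partial dependence of the model response on predictor j.

    For each grid value v, predictor j is fixed to v in every row of X
    and the model predictions are averaged; the resulting curve is one
    Plot_jp for the model behind predict_fn.
    """
    curve = []
    for v in grid:
        preds = [predict_fn(row[:j] + [v] + row[j + 1:]) for row in X]
        curve.append(sum(preds) / len(preds))
    return curve

# Example with a toy model: output = 2*x0 + x1.
def toy_model(x):
    return 2 * x[0] + x[1]

curve = partial_dependence(toy_model, [[0, 0], [0, 10]], j=0, grid=[0, 1, 2])
# curve is [5.0, 7.0, 9.0]: a slope of 2 in x0, shifted by the mean of x1.
```

Running this for every predictor Xj against every model Mp yields the J-by-P matrix of plots Plotjp described above.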
The input relevance measure module 412 may be operable to compute values of input relevance measures of each process input 308. More specifically, for predictor Xj, an upper bound and a lower bound may be computed from the P partial dependency plots corresponding to the predictor Xj, and a difference between the upper bound and the lower bound determined to depict the input relevance measure. For each predictor Xj, the upper bound may be computed as the smallest of the per-plot maxima: U_boundj=min{max(Plotjp): p=1, . . . , P}.
Further, for each predictor Xj, the lower bound may be computed as the largest of the per-plot minima: L_boundj=max{min(Plotjp): p=1, . . . , P}.
The difference, Δj=U_boundj−L_boundj, represents the input relevance measure that estimates a degree of change in the process output obtained by varying the corresponding process input. In other words, U_boundj may be the minimum of the maximum values of the P partial dependency plots, i.e., the maximum confirmed output, and L_boundj may be the maximum of the minimum values of the P partial dependency plots, i.e., the minimum confirmed output. Because both bounds are confirmed by every model, the difference between the two signifies a conservative estimate, and thus a high degree of reliability regarding how a change in the process input 308 changes the process output 310.
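The bound computation above reduces to a few lines. In this sketch, `curves` holds the P partial dependency curves for one input, each given as a list of output values; the function name is an assumption.

```python
def input_relevance(curves):
    """Input relevance measure (delta_j) from P partial dependency curves.

    U_bound: the smallest per-curve maximum, i.e. the highest output
             confirmed by every model.
    L_bound: the largest per-curve minimum, i.e. the lowest output
             confirmed by every model.
    """
    u_bound = min(max(c) for c in curves)
    l_bound = max(min(c) for c in curves)
    return u_bound - l_bound

# A consistently rising input yields a positive measure:
relevant = input_relevance([[0.0, 1.0, 2.0], [0.5, 1.5, 2.5]])   # 1.5
# Curves that disagree across partitions (noise) can yield zero or negative:
noise = input_relevance([[0.0, 1.0], [2.0, 3.0]])                # -1.0
```

A noise input produces curves whose shapes differ randomly from partition to partition, so its confirmed range collapses toward zero or below, while a genuine signal keeps a positive confirmed range.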
In an aspect, responsive to the computed input relevance measure being positive, it may be concluded that the predictor Xj can account for at least a Δj difference in the model response. Further, responsive to the computed input relevance measure being zero or negative, it may be concluded that there is no evidence that the predictor Xj makes a significant difference in the model response. Even further, the higher the value of the computed input relevance measure, the higher the influence of the predictor Xj on the model response.
From the single partial dependency plots shown in
While a first operator may suspect that inputs X5-X10 are not relevant, there may be no definitive confirmation of this hypothesis. A second operator may conclude erroneously that input X5 can be used to meaningfully control the output.
In an aspect herein, a plurality of partial dependency plots may be computed for each input using the 10,000 observations from the industrial application, which were partitioned into thirty 50/50 simple random partitions. Thus, thirty partial dependency plots were obtained for each input X1-X10 as shown in
By employing the input relevance engine 128 to compute the input relevance measure for each input using superimposed process outputs shown in the plots, a more accurate degree of change attributable to the inputs was obtained. Table 2 compares the Response Range of Change based on the single partial dependency plots to the input relevance measures based on the plurality of partial dependency plots generated using the plurality of random partitions.
As can be seen in Table 2, the input relevance measures for X1-X4 are positive and non-zero, whereas the input relevance measures for X5-X10 are zero or negative. Thus, unlike the Response Range of Change obtained using the single model, the input relevance engine 128 may be operable to obtain more accurate and realistic (conservative) estimates of the expected degree of change associated with each influential input. The sensor recommender system 124 then recommends sensors 1-4 of the sensor array 126 as the sensors to monitor in real time to prevent an industrial process from going out of normal/desired operating limits. The sensor recommender system 124 may alternatively recommend sensor 1 as the sensor to monitor in real time due to the corresponding input relevance measure being the highest. Generally, a predetermined threshold may be used to select which sensors to monitor. Even further, knowing the input relevance measures may enable an operator to determine the extent to which process inputs may be altered without causing the process output to exceed a predetermined range.
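The recommendation step can be sketched as follows. The names are hypothetical; `relevance` holds the input relevance measure for each input, and the threshold defaults to zero so that inputs with a zero or negative measure are discarded.

```python
def recommend_sensors(relevance, threshold=0.0):
    """Indices of process inputs worth monitoring, most relevant first."""
    keep = [(delta, j) for j, delta in enumerate(relevance) if delta > threshold]
    return [j for delta, j in sorted(keep, reverse=True)]

# Inputs 0 and 1 are recommended; inputs 2 and 3 are discarded:
ranked = recommend_sensors([5.0, 1.0, -0.2, 0.0])   # [0, 1]
```

Raising the threshold above zero narrows the recommendation to only the most influential sensors, such as recommending only the single highest-relevance sensor.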
In an aspect, a process input 308 with a zero or negative input relevance measure may be disregarded. In another aspect, the first number, P, is 30. Of course, this is not meant to be limiting, as other numbers of partitions may be used, such as a number between 10 and 100, with the computational cost being directly proportional to the number of partitions. In another aspect, the routine 700 may be performed automatically. The routine 700 may be applicable to various industrial processes, manufacturing processes, and in general processes involving integrated sensors used for measuring and monitoring the inputs and outputs of the process. For example, the routine 700 may be used to identify which operating conditions can be focused on to reduce the NOx and CO emissions and to what extent. In other examples, the routine 700 may be used for predictive maintenance, such as predicting equipment failures, for forecasting energy consumption in energy-intensive industries, or for predicting the effectiveness of potential drug candidates by analyzing chemical properties and biological data to accelerate drug discovery.
Any specific manifestations of these and other similar example processes are not intended to be limiting to the invention. Any suitable manifestation of these and other similar example processes can be selected within the scope of the illustrative embodiments.
Thus, a computer implemented method, system or apparatus, and computer program product are provided in the illustrative embodiments for computing a relevance of process inputs and other related features, functions, or operations. Where an embodiment or a portion thereof is described with respect to a type of device, the computer implemented method, system or apparatus, the computer program product, or a portion thereof, are adapted or configured for use with a suitable and comparable manifestation of that type of device.
Where an embodiment is described as implemented in an application, the delivery of the application in a Software as a Service (SaaS) model is contemplated within the scope of the illustrative embodiments. In a SaaS model, the capability of the application implementing an embodiment is provided to a user by executing the application in a cloud infrastructure. The user can access the application using a variety of client devices through a thin client interface such as a web browser, or other light-weight client-applications. The user does not manage or control the underlying cloud infrastructure including the network, servers, operating systems, or the storage of the cloud infrastructure. In some cases, the user may not even manage or control the capabilities of the SaaS application. In some other cases, the SaaS implementation of the application may permit limited user-specific application configuration settings as a possible exception.
The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on a dedicated system or user's computer, partly on the user's computer or dedicated system, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server, etc. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
All features disclosed in the specification, including the claims, abstract, and drawings, and all the steps in any method or process disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. Each feature disclosed in the specification, including the claims, abstract, and drawings, can be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise.