This disclosure relates to computing systems, and more specifically, to techniques for training a supervised learning model.
Supervised machine learning involves a computing system learning a function that maps inputs to outputs based on a set of labeled training examples. Often, the function operates differently depending on various parameters. Finding the function that operates best for a given purpose may involve optimizing parameters that define the operation of the function. Such parameters may include weights, biases, regularization coefficients, and others. The process of finding the optimal values for parameters is sometimes referred to as calibration or parameter optimization.
Surrogate models are simplified representations of a complex system that can approximate the input-output behavior of the complex system. Surrogate models are sometimes used when the original complex system is too expensive, time-consuming, or impractical to evaluate directly. For example, surrogate models are helpful when performing repeated simulations, where the number of simulations is large enough to make using the original complex system too costly or time-consuming.
This disclosure describes applying differential machine learning techniques to calibrate parameters for a complex model, where that complex model may be costly and/or time-consuming to evaluate. Such techniques involve, for example, parametric calculation of output values through least squares regression regularized with derivatives of those output values with respect to parameters and state as conditional expectations.
This disclosure also describes a framework (or in some examples, a “simulation and calibration framework”) that is capable of transforming a simple model script into a set of calibrated parameters to be used with the complex model, with little or no additional input required from a user or model designer. In some examples, the simulation and calibration framework interprets and/or parses the script, uses characterization of the values of interest as conditional expectations, and performs regularization on simulations of the complex model to obtain surrogate models representing an approximation of that more complex model. Those surrogate models are designed to be capable of approximating the values of interest that would be generated by the more complex model, yet are fast enough for use in parametric calibration analysis.
This disclosure describes operations performed by a computing system. In one specific example, this disclosure describes a method comprising identifying, by a framework and based on a textual description, a model that generates an output based on a set of inputs, wherein the inputs include a plurality of parameters; selecting, by the framework, a first plurality of parameter values; assembling, by the framework, a set of training samples by observing outputs generated by the model in response to each of the first plurality of parameter values; training, by the framework and based on the set of training samples, a surrogate model, wherein the surrogate model is trained to predict outputs of the model; generating, by the framework and using the surrogate model, predicted outputs of the model, wherein each of the predicted outputs of the model is based on a different parameter value in a second plurality of parameter values; selecting, by the framework and based on the predicted outputs of the model, a desired parameter value; and applying the model, using the desired parameter value, to predict a value of interest for an input value.
In another example, this disclosure describes a system comprising a storage system and processing circuitry having access to the storage system, wherein the processing circuitry is configured to carry out operations described herein. In yet another example, this disclosure describes a computer-readable storage medium comprising instructions that, when executed, configure processing circuitry of a computing system to carry out operations described herein.
The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description herein. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
Differential Machine Learning (DML) is an approach to regularization that can be used to prevent overfitting when training a machine learning model. DML makes use of path-wise differentials of Monte Carlo samples in addition to the path-wise Monte Carlo samples themselves when training neural networks. In other words, DML involves not only calculating output values “y” for a given input value “x,” but also making use of the gradient of y with respect to x, which might be expressed as dy/dx or ∂y/∂x.
Training samples used to train a machine learning model (e.g., a deep neural network) might therefore include not only values of “y” and “x,” as is conventional, but also values of ∂y/∂x. These path-wise differentials help neural networks learn the loss function better and thereby improve convergence, often enabling convergence to occur faster and/or with fewer samples. Derivatives for sensitivities or gradients can often be computed quickly through techniques like algorithmic differentiation or adjoint algorithmic differentiation.
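Such path-wise differentials may, in some examples, be obtained directly from an automatic differentiation framework. The following is a minimal sketch using TensorFlow's GradientTape on a hypothetical one-step lognormal Monte Carlo payoff; the model, the layout of the input vector x, and the payoff are illustrative assumptions rather than any specific model described herein.

```python
import tensorflow as tf

def payoff(x, n_paths=4096):
    """Hypothetical one-step lognormal payoff; x = (s0, sigma, k) is assumed."""
    s0, sigma, k = x[0], x[1], x[2]
    z = tf.random.normal([n_paths])
    s_t = s0 * tf.exp(-0.5 * sigma ** 2 + sigma * z)  # terminal state at T = 1
    return tf.reduce_mean(tf.nn.relu(s_t - k))        # Monte Carlo estimate of y

x = tf.Variable([1.0, 0.2, 1.0])  # inputs, including model and payoff parameters
with tf.GradientTape() as tape:
    y = payoff(x)
dy_dx = tape.gradient(y, x)       # differentials dy/dx via algorithmic differentiation
```

In this sketch, a single reverse-mode pass produces the full gradient dy/dx at a cost comparable to one evaluation of the payoff itself, which is what makes differential training data inexpensive to generate.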
The differential machine learning approach is to minimize least squares subject to regularization by derivatives, for example as follows:

$$\min_{w}\ \frac{1}{m}\sum_{i=1}^{m}\left(f(x_i;w)-y_i\right)^{2}\;+\;\lambda\,\frac{1}{m}\sum_{i=1}^{m}\left\lVert \frac{\partial f}{\partial x}(x_i;w)-\frac{\partial y_i}{\partial x_i}\right\rVert^{2}$$

where f(·; w) is the approximator being trained with weights w, (xᵢ, yᵢ) are the path-wise Monte Carlo samples, ∂yᵢ/∂xᵢ are the corresponding path-wise differentials, and λ controls the strength of the derivative regularization.
As described herein, the X vector can contain model and contract parameters in addition to state vectors.
In a finance application, for example, one primary objective might be to approximate price as the conditional expectation e(X) = E[Y | X] as a function of X. In this formula, Y is a vector of discounted or undiscounted payoffs defined on risk factors modeled by some stochastic process. X could be, for example, a vector consisting of initial state S(0), intermediate state S(t), model parameters (ρ, σ), and/or payoff parameters (K), i.e., X = (S(0), S(t), ρ, σ, K).
Preferably, parameter(s) are chosen so that they are “optimal” in some sense. For example, an optimal parameter value might mean that it tends to maximize Y, such as where Y is a vector of discounted or undiscounted payoffs (e.g., defined on risk factors modeled by some stochastic process). In another example, “optimal” might mean that an optimal parameter is one that causes Y to be minimized (e.g., where Y is a weight value in an avionics application). In other examples, an optimal parameter may cause Y to be generated having some characteristic, such as exceeding a threshold, or representing sufficient strength (e.g., in a physics application).
Differential machine learning techniques can be applied in a number of ways, but as described herein, differential machine learning techniques can be used for calibration or model fitting to optimize selection of parameters for a model of interest. When applied to payoffs from an options contract, for example, differential machine learning can be used to optimize parameters where the processes and functionals for the options contract depend on model parameters (e.g., underlying financial instrument, variance, volatility) and contract parameters (e.g., strike price), respectively. Further information about how differential machine learning can be extended to settings with parameters can be found in Polala & Hientzsch, “Parametric Differential Machine Learning for Pricing and Calibration,” Feb. 6, 2023 (SSRN-id4358439) (hereinafter “Polala & Hientzsch”), which is hereby incorporated by reference.
Calibration or model fitting generally corresponds to finding the best model parameters according to some objective function. Often, calibration is a challenge because performing basic sensitivity analysis, evaluation, and/or testing to optimize parameter values might require a large number of simulations performed by the model. The large number of simulations enables an assessment of how parameters change outputs of the model. Such computer simulations can be expensive, however, both in terms of time (e.g., simulations are time-consuming) and cost (e.g., performing a simulation within a reasonable amount of time can require significant computing resources). Accordingly, even basic activities like design exploration, sensitivity analysis, and what-if analysis tend to be impractical when using the full model.
One approach to overcoming these cost and time constraints involves using an approximation model or a “surrogate” model that is a simplified version of the actual or full model. The surrogate model is designed to estimate the output values or other values of interest that the full model would generate if the full model were provided with the same input values and parameters. Importantly, however, the surrogate model is also designed to provide output values or other values of interest much faster than the actual model. Accordingly, the surrogate model tries to approximate the input-output relationship of the true model as closely as possible, but with less computational cost. Yet since the surrogate model is much faster than the full model, it can be used to perform sensitivity analysis, parameter optimization, and other tasks without the time and cost limitations associated with running simulations on the full model. The surrogate model might not provide perfectly accurate predictions of the values of interest, but if the surrogate model provides sufficiently accurate predictions, the surrogate model can be used for calibration effectively.
In the past, approaches to calibration using surrogates have been model-specific, requiring complicated processes to conceive of simplified surrogate models capable of quickly generating approximate output values. Effectively, prior approaches required derivation of simplified surrogate models, generating approximate solutions, implementing fast approximate solvers, and using those solvers wherever needed. Such an approach tends to be time-consuming and complicated, and often requires significant expertise and domain-specific knowledge. Such an approach may also be difficult to accomplish correctly.
A different approach to calibration of parameters is described herein, and involves characterization of values of interest as conditional expectations and regularization on simulations of the full model to obtain surrogate models. In some examples, the surrogate models are deep neural networks that are able to quickly and efficiently produce an approximated quantity of interest, where that approximated quantity of interest is sufficiently accurate for the purposes of calibration of parameters.
Further, techniques are described herein that enable a low-code or no-code process of identifying or selecting optimized parameter values based simply on a model script or textual description of the underlying model. The textual description or model script, as described and illustrated herein, may specify various mathematical equations (formatted as simple text), and those equations may include both ordinary and stochastic differential equations. Generally, the model script specifies in text the equations underlying the model, where the equations have a nearly one-to-one correspondence to the mathematical notation familiar to model practitioners. Some domain-specific knowledge may be required to generate the model script, but once it is prepared, techniques described herein can perform most or all of the remaining tasks necessary to generate a set of optimal parameters for the underlying model. Accordingly, in at least some examples, the framework or simulation and calibration framework generates appropriate surrogate models based on the model script, analyzes the surrogate models to evaluate the effect of various parameters, and generates appropriate and/or optimal parameters that can be used with the full model.
Although described herein principally in a financial context, these calibration and differential machine learning techniques can be applied to other fields beyond finance. For example, extending differential machine learning techniques to settings with parameters may be a technique that could also apply to any field in which the value of interest is a conditional expectation of a model, process, or black box that can be simulated. Such scenarios are common in the fields of epidemiology, biology, medicine, physics, finance, and other fields.
Model 125 may be a relatively high-fidelity model, requiring significant processing resources to generate output values of interest in response to input and parameters. Accordingly, using model 125 to generate such values may be costly and/or time-consuming. As described herein, modeling platform 155 of simulation and calibration framework 140 may interact with model 125 to generate training samples 116 and surrogates 126.
In an example that can be described in the context of
Modeling platform 155 may execute and/or process code 115 to generate training samples 116. For instance, continuing with the example being described in the context of
While generating training samples 116, modeling platform 155 may, in some examples, apply certain methodologies when selecting parameters for the training samples 116. For example, while modeling platform 155 may randomly or arbitrarily select samples of design parameters from the parameter space, other methods may be more effective. For instance, it may be preferable to have training samples 116 that include parameters that are spread evenly across the parameter space, thereby ensuring a more representative sample of input-output values from all regions of the parameter space. Space-filling sampling schemes can be used for this purpose.
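By way of example and not limitation, one such space-filling design is a scrambled Sobol sequence, available in common scientific libraries. The following sketch draws Sobol points over a hypothetical three-dimensional parameter box using scipy; the dimension, bounds, and seed are placeholder assumptions.

```python
from scipy.stats import qmc

# Space-filling (Sobol) design over a hypothetical three-parameter box.
sampler = qmc.Sobol(d=3, scramble=True, seed=7)
unit_samples = sampler.random_base2(m=10)          # 2**10 points in [0, 1)^3
lower, upper = [0.5, 0.05, 0.8], [1.5, 0.50, 1.2]  # placeholder parameter bounds
params = qmc.scale(unit_samples, lower, upper)     # map onto the parameter space
```

Compared with independent uniform draws, such low-discrepancy designs tend to cover all regions of the parameter space more evenly for the same number of samples.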
In other examples, different schemes for selecting samples may be used. For instance, samples may be selected and/or adjusted based on how the magnitude of the conditional expectation of an output value Y, given a set of parameters, varies across different parameters. Specifically, it turns out that trained surrogates tend to approximate the conditional expectations well for parameter settings for which the conditional expectations have the largest magnitude but not for parameters where the conditional expectations have smaller magnitudes. This is often not desirable because in calibration and other parametric pricing settings, it is important to achieve a certain relative accuracy for model parameters across the parameter range.
Accordingly, an adaptive sampling scheme may be employed to address this issue. For instance, in one such approach, modeling platform 155 first parametrically or non-parametrically estimates the magnitude of the conditional expectation of Y across the parameter domain. Modeling platform 155 then constructs marginal (or bivariate) parameter distributions resulting in an approximately constant expectation of Y across the parameter domain. For model parameters ‘a’ and ‘b,’ for example, E[Y | a, b] may vary significantly over the parameter range of ‘b,’ whereas it does not vary significantly across parameter ‘a.’ To mitigate the impact of this magnitude difference, modeling platform 155 constructs an adaptive sampling distribution P for parameter ‘b’ such that P is directly proportional to 1/E[.|b]. In this example, modeling platform 155 can approximate the marginal expectations E[.|b] by binning the samples by the b parameter and computing bin averages. Using these averages, modeling platform 155 computes a cubic spline fit to get a smooth approximation of 1/E[.|b]. Finally, modeling platform 155 scales the cubic spline fit appropriately so that it integrates to one and can be used as a sampling distribution, as sketched below.
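A minimal sketch of one way such an adaptive sampling distribution might be constructed with numpy and scipy follows; the pilot data, binning granularity, and inverse-transform sampling step are assumptions, and other smoothing or sampling choices could be substituted.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def adaptive_density(b_pilot, y_pilot, n_bins=20):
    """Fit a sampling density for parameter b proportional to 1 / E[Y | b].

    Assumes every bin receives at least one pilot sample.
    """
    edges = np.linspace(b_pilot.min(), b_pilot.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(b_pilot, edges) - 1, 0, n_bins - 1)
    bin_means = np.array([y_pilot[idx == i].mean() for i in range(n_bins)])
    spline = CubicSpline(centers, 1.0 / bin_means)   # smooth fit of 1 / E[Y | b]
    grid = np.linspace(centers[0], centers[-1], 1000)
    pdf = np.maximum(spline(grid), 0.0)
    pdf /= np.trapz(pdf, grid)                       # scale to integrate to one
    return grid, pdf

def sample_b(grid, pdf, n, rng=np.random.default_rng()):
    """Draw n samples of b from the fitted density by inverse-transform sampling."""
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, grid)
```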
Using the adaptive sampling scheme described above, modeling platform 155 samples parameter regions associated with smaller conditional expectations more often, and can therefore approximate those parameter regions better than with a uniform distribution. This also leads to fewer samples in parameter regions with larger conditional expectations, which may cause some degradation of approximation quality in those areas of the parameter space. In general, however, the approach described above can be used to achieve good-quality surrogates with less sampling.
Once sufficient training samples 116 are generated, simulation and calibration framework 140 may use training samples 116 to generate surrogates 126. For instance, returning again to the example being described in the context of
Simulation and calibration framework 140 may use surrogates 126 to generate optimized parameters 119. For instance, again with reference to the example being described, optimizer 157 executes a large number of simulations using surrogates 126, and observes and records the outputs. Optimizer 157 performs a sensitivity analysis to assess how differing parameters change the predicted outputs of model 125. Eventually, and based on the analysis of those predicted outputs, optimizer 157 identifies one or more parameters that seem to be most effective for a given function. Optimizer 157 selects one or more of such parameters as optimized parameters 119. Optimizer 157 within simulation and calibration framework 140 outputs optimized parameters 119. Optimized parameters 119 may thereafter be used by model 125 when generating values of interest.
In examples where modeling platform 155 generates multiple surrogates 126, optimizer 157 may run simulations in parallel. For instance, work to be performed by surrogates 126 may be divided among multiple surrogate 126 runs, thereby potentially enabling assessments to be performed in less time.
Optimizer 157 may also run simulations in parallel to address the effect that randomness may have on each of the multiple surrogates 126. In other words, since there is some element of randomness to generating surrogates 126, that element of randomness may have an impact on how various surrogates 126 perform. For example, when simulation and calibration framework 140 calibrates with surrogates 126, random numbers might be used at several points during the process. First, modeling platform 155 consumes random numbers during the parameter and Monte Carlo sampling to generate X, Y, and derivatives. Then modeling platform 155 initializes the deep neural networks (surrogates 126) according to some random initializations (in a typical deep neural network architecture, these would be the weights and biases). Then, modeling platform 155 trains surrogates 126 by stochastic gradient descent methods with adaptive moment estimation (Adam), which might consume additional random numbers. Once the surrogates have been constructed, optimizer 157 may use a global optimization method, which might also consume random numbers (global optimization may involve use of random numbers to construct a new population of parameter sets from the old one).
All these random numbers could be generated from either a single stream or from several streams with their own seed(s). A different choice of such seeds will, in general, lead to variations in results and thus training several surrogates 126 in parallel and optimizing against them may lead to different optimized parameters 119. Each of these sets of optimized parameters 119 might correspond to a differently accurate surrogate 126, leading to optimized parameter sets of different accuracies.
Accordingly, running independent constructions of surrogates 126 and optimizing over them, starting with different seeds on different instances, can be done in parallel. Given the multi-core nature of common commodity CPUs, modeling platform 155 can obtain 5-10 (or any number of) replications at essentially the same running time as one replication; on elastic compute units or larger server farms, this scales to an even higher number of replications. Based on metrics computed for these replications, simulation and calibration framework 140 can select a best seed or set of best seeds to provide some robustification to the optimization, which may correspond to the robustness of the output values predicted by the corresponding surrogate model. That robustification will make results less dependent on any particular seed and will likely perform better than an optimization run with only one particular seed.
Simulation and calibration framework 140 may employ a number of calibration approaches using multiple surrogates 126, including either of two possible robust calibration approaches: (1) a best seed approach, and (2) an ensemble approach. In the best seed approach, the best seed is selected based on some criteria, and the optimized parameters 119 are those corresponding to the surrogate 126 with the best seed. In the ensemble approach, simulation and calibration framework 140 forms an ensemble of surrogates 126, and the ensemble is used for analysis and selection of optimized parameters 119. Further details about the impact of randomness on surrogates 126 can be found in Polala & Hientzsch.
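By way of illustration, the following sketch shows how such replications over seeds might be orchestrated, and how either robust calibration approach could then be applied; train_surrogate, validation_error, and the predict method are hypothetical helpers standing in for the surrogate construction and evaluation steps described herein.

```python
from concurrent.futures import ProcessPoolExecutor

SEEDS = range(8)

def build(seed):
    surrogate = train_surrogate(seed)  # hypothetical: trains one surrogate from seed
    return seed, surrogate, validation_error(surrogate)  # hypothetical quality metric

with ProcessPoolExecutor() as pool:
    runs = list(pool.map(build, SEEDS))  # independent replications in parallel

# (1) Best seed approach: keep the surrogate whose seed scores best.
best_seed, best_surrogate, _ = min(runs, key=lambda r: r[2])

# (2) Ensemble approach: average predictions across all replications.
def ensemble_predict(x):
    return sum(s.predict(x) for _, s, _ in runs) / len(runs)
```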
Techniques described herein may provide certain technical advantages. For instance, by applying differential machine learning to the selection and/or optimization of parameters using surrogates 126, simulation and calibration framework 140 may be able to identify optimal parameters while performing fewer computer simulations of model 125. Fewer simulations of model 125 translate into less use of computing resources and lower power consumption.
Further, by applying differential machine learning approaches to optimizing parameters, such as described in Polala & Hientzsch, it may be possible to train surrogates 126 to an appropriate level of accuracy with fewer training samples. Accordingly, techniques described herein are more efficient and less time-consuming, and consume fewer computing resources and less power, than conventional techniques for optimizing parameters for use with a model.
Still further, by automating at least some aspects of the process of optimizing parameters for use with model 125, optimal parameters can be determined with less interaction with and/or involvement from domain experts, and through a more consistent, accurate, and efficient process. Accordingly, models that use parameters selected as described herein will operate more efficiently, effectively, and accurately than models using conventional processes for selecting optimal parameters.
Note that model script 211B of
Each of model scripts 211A, 211B, and 211C (“model scripts 211”) may be specific examples of model script 111 illustrated as input to simulation and calibration framework 140 in
Conventional techniques for simulating stochastic processes and functionals tended to require implementations of new classes. Such processes added complexity and maintenance burden. Simulation and calibration framework 140 of
In some examples, a commercially available parser (e.g., the Python parser) may be used to selectively generate appropriate abstract syntax trees and to generate, from the model script, efficient implementations for TensorFlow and NumPy. Modeling platform 155 of simulation and calibration framework 140 (see
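As one illustration of this parsing step, Python's built-in ast module can turn a small model-script-like string into an abstract syntax tree that a code generator may then walk; the script contents and syntax below are hypothetical and shown only to illustrate the mechanism.

```python
import ast

# Hypothetical model-script fragment written in Python-parseable syntax.
script = """
dS = r * S * dt + sigma * S * dW    # stochastic differential equation for the state
payoff = max(S_T - K, 0)            # contract functional evaluated at maturity
"""

tree = ast.parse(script)         # build the abstract syntax tree
print(ast.dump(tree, indent=2))  # inspect nodes before generating backend code

# A code generator might walk the tree, e.g., to collect every assignment target.
targets = [t.id for node in ast.walk(tree)
           for t in getattr(node, "targets", []) if isinstance(t, ast.Name)]
print(targets)                   # ['dS', 'payoff']
```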
Also illustrated in
For ease of illustration, computing system 340 is depicted in
In
Power source 349 of computing system 340 may provide power to one or more components of computing system 340. One or more processors 343 of computing system 340 may implement functionality and/or execute instructions associated with computing system 340 or associated with one or more modules illustrated herein and/or described below. One or more processors 343 may be, may be part of, and/or may include processing circuitry that performs operations in accordance with one or more aspects of the present disclosure.
One or more communication units 345 of computing system 340 may communicate with devices external to computing system 340 by transmitting and/or receiving data, and may operate, in some respects, as both an input device and an output device. In some or all cases, communication units 345 may communicate with other devices or computing systems over a network (not shown in
One or more input devices 346 may represent any input devices of computing system 340 not otherwise separately described herein, and one or more output devices 347 may represent any output devices of computing system 340 not otherwise separately described herein. Input devices 346 and/or output devices 347 may generate, receive, and/or process output from any type of device capable of outputting information to a human or machine. For example, one or more input devices 346 may generate, receive, and/or process input in the form of electrical, physical, audio, image, and/or visual input (e.g., peripheral device, keyboard, microphone, camera). Correspondingly, one or more output devices 347 may generate, receive, and/or process output in the form of electrical and/or physical output (e.g., peripheral device, actuator).
One or more storage devices 350 within computing system 340 may store information for processing during operation of computing system 340. Storage devices 350 may store program instructions and/or data associated with one or more of the modules described in accordance with one or more aspects of this disclosure. One or more processors 343 and one or more storage devices 350 may provide an operating environment or platform for such modules, which may be implemented as software, but may in some examples include any combination of hardware, firmware, and software. One or more processors 343 may execute instructions and one or more storage devices 350 may store instructions and/or data of one or more modules. The combination of processors 343 and storage devices 350 may retrieve, store, and/or execute the instructions and/or data of one or more applications, modules, or software. Processors 343 and/or storage devices 350 may also be operably coupled to one or more other software and/or hardware components, including, but not limited to, one or more of the components of computing system 340 and/or one or more devices or systems illustrated or described as being connected to computing system 340.
Parser module 351 and/or compiler module 352 may perform functions relating to translating model script 111 into code (e.g., code 115) that can be used by modeling platform module 355 to perform functions and/or the operations described herein. In some examples, parser module 351 and/or compiler module 352 may generate syntax trees and/or computational graphs (e.g., for use by TensorFlow). In general, parser module 351 and compiler module 352 perform functions corresponding to parser 151 of
Modeling platform module 355 may perform functions relating to executing simulations, generating samples, training machine learning models and/or neural networks, and generating surrogate models. Modeling platform module 355 may perform such functions based on code 115 generated by compiler module 352. Modeling platform module 355 may include or have access to one or more machine learning model platforms or libraries, including TensorFlow and PyTorch. Modeling platform module 355 may perform functions corresponding to modeling platform 155 of
Optimizer module 357 may perform functions relating to evaluating surrogates 126 to generate optimized or appropriate parameter values. Optimizer module 357 may, in general, perform functions corresponding to optimizer 157 of
Data store 359 of computing system 340 may represent any suitable data structure or storage medium for storing information relating to model description text, parsed text, syntax trees, code, computational graphs, training samples, differentials, derivative values with respect to parameters, state, and/or time, parameter values, and optimized parameter values. The information stored in data store 359 may be searchable and/or categorized such that one or more modules within computing system 340 may provide an input requesting information from data store 359, and in response to the input, receive information stored within data store 359. Data store 359 may be primarily maintained by modeling platform module 355.
In an example that can be described in the context of
Computing system 340 may use code 115 to instantiate model 125 and to select design parameters. For instance, continuing with the example being described in the context of
Computing system 340 may use the selected parameters to create training samples 116. For instance, still continuing with the example, modeling platform module 355 causes model 125 to predict an output or other quantity of interest based on the selected parameters. Modeling platform module 355 repeats the process of selecting parameters and using the selected parameters to generate an output from model 125. Eventually, after repeated simulations performed by model 125, modeling platform module 355 assembles a set of training samples 116. Each of training samples 116 includes an output from model 125 (e.g., a payoff for a caplet or other financial instrument) and the input values that caused model 125 to generate the output. In some examples, training samples 116 might also include derivatives of the output values with respect to parameters and/or state.
Computing system 340 may generate one or more surrogates 126. For instance, again continuing with the example being described in the context of
Computing system 340 may use surrogate 126 to select optimized parameters. For instance, still with reference to
Computing system 340 may apply model 125 using the optimized parameters. For instance, once again with reference to
Modules illustrated in
Although certain modules, data stores, components, programs, executables, data items, functional units, and/or other items included within one or more storage devices may be illustrated separately, one or more of such items could be combined and operate as a single module, component, program, executable, data item, or functional unit. For example, one or more modules or data stores may be combined or partially combined so that they operate or provide functionality as a single module. Further, one or more modules may interact with and/or operate in conjunction with one another so that, for example, one module acts as a service or an extension of another module. Also, each module, data store, component, program, executable, data item, functional unit, or other item illustrated within a storage device may include multiple components, sub-components, modules, sub-modules, data stores, and/or other components or modules or data stores not illustrated.
Further, each module, data store, component, program, executable, data item, functional unit, or other item illustrated within a storage device may be implemented in various ways. For example, each module, data store, component, program, executable, data item, functional unit, or other item illustrated within a storage device may be implemented as a downloadable or pre-installed application or “app.” In other examples, each module, data store, component, program, executable, data item, functional unit, or other item illustrated within a storage device may be implemented as part of an operating system executed on a computing device.
Controller 450 represents a system that receives an output from model 125 and acts on the output. In some examples, controller 450 controls the operation of another system, such as one or more downstream systems 460. Downstream system 460 may represent a system that operates based on inputs from controller 450. In some cases, one or more of such downstream systems 460 may be controlled, configured, or operated by controller 450 based on information controller 450 receives (e.g., model output 409) from model 125. Accordingly, one or more of downstream systems 460 may represent systems that perform operations or carry out commands issued by controller 450.
In operation, in the example of
Controller 450 may use model output 409 to control downstream system 460. For instance, in an example that can be described in the context of
In an example that may apply in a medical application, controller 450 may interpret model output 409 and determine that image processing should be performed on an image captured by a medical device. In such an example, downstream system 460 may represent an image processing system. Accordingly, in this example, controller 450 outputs signals over network 405, which downstream system 460 interprets as commands to control the operation of the downstream system 460. Based on the commands, downstream system 460 performs image processing on the image captured by the medical device. In this way, controller 450 controls the operation of downstream system 460 by issuing commands to downstream system 460 that cause downstream system 460 to process the medical device image in an appropriate way.
In the process illustrated in
Framework 140 may select a first plurality of parameter values (502). For example, modeling platform 155 applies a parameter selection scheme to identify a set of parameter values. In some examples, modeling platform 155 applies an adaptive sampling scheme to select the first plurality of parameter values.
Framework 140 may assemble a set of training samples by observing outputs generated by the model in response to each of the first plurality of parameter values (503). For example, modeling platform 155 performs simulations using model 125 to generate output values and derivatives of those output values with respect to the identified set of parameter values. Modeling platform 155 generates training samples 116, where each sample includes inputs (including parameter values), at least one output value, and derivatives of output values with respect to the parameters.
Framework 140 may train, based on the set of training samples, a surrogate model (504). For example, modeling platform 155 uses training samples 116 to train surrogate 126 to predict outputs of model 125. In some examples, modeling platform 155 trains surrogate 126 as a deep neural network in TensorFlow using computational graphs generated by modeling platform 155 or included within code 115.
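A minimal sketch of such a differential training step in TensorFlow follows; the network architecture, activation, and penalty weight λ are assumptions, and x, y, and dy_dx denote batches of inputs, outputs, and path-wise differentials from training samples 116.

```python
import tensorflow as tf

surrogate = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="softplus"),
    tf.keras.layers.Dense(64, activation="softplus"),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam(1e-3)
lam = 1.0  # assumed weight on the derivative (differential) penalty

@tf.function
def train_step(x, y, dy_dx):
    with tf.GradientTape() as outer:
        with tf.GradientTape() as inner:
            inner.watch(x)
            y_hat = surrogate(x)
        dy_hat = inner.gradient(y_hat, x)  # network's predicted differentials
        loss = (tf.reduce_mean(tf.square(y_hat - y))
                + lam * tf.reduce_mean(tf.square(dy_hat - dy_dx)))
    grads = outer.gradient(loss, surrogate.trainable_variables)
    optimizer.apply_gradients(zip(grads, surrogate.trainable_variables))
    return loss
```

The inner tape differentiates the network with respect to its inputs so that the loss can penalize mismatches in both values and derivatives, which is the regularization described above.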
Framework 140 may generate, using the surrogate model, predicted outputs of the model (505). For example, optimizer 157 of framework 140 performs simulations using surrogate 126 and records how predicted outputs of model 125 change in response to various parameter values in a second plurality of parameter values.
Framework 140 may select, based on the predicted outputs of the model, a desired parameter value (506). For example, optimizer 157 evaluates how the outputs of surrogate 126 change in response to the parameter values in the second plurality of parameter values. Based on the evaluation, optimizer 157 selects a desired or most desired or optimal parameter or set of parameters.
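As one example of this selection step, a population-based global optimizer can be run against the trained surrogate at a small fraction of the cost of running model 125 itself. The sketch below uses scipy's differential evolution; the target values, contract inputs, parameter bounds, and the surrogate_predict wrapper are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution

target_values = np.array([0.12, 0.08, 0.05])  # hypothetical quantities to match
inputs = np.array([[0.9], [1.0], [1.1]])      # hypothetical contract inputs

def objective(theta):
    """Sum of squared errors between surrogate predictions and the targets."""
    x = np.hstack([inputs, np.tile(theta, (len(inputs), 1))])
    preds = surrogate_predict(x)              # hypothetical surrogate 126 wrapper
    return float(np.sum((preds - target_values) ** 2))

bounds = [(0.05, 0.5), (0.5, 1.5)]            # placeholder parameter bounds
result = differential_evolution(objective, bounds, seed=11)
optimized_parameters = result.x               # candidate optimized parameters 119
```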
Framework 140 may apply the model, using the desired parameter value, to predict a value of interest for an input value (507). For example, once one or more desired parameters are identified, parameter calibration may be considered complete. Thereafter, model 125 may be used in production by applying model 125 to an input (along with the desired parameter(s)) to cause model 125 to generate a quantity of interest.
For processes, apparatuses, and other examples or illustrations described herein, including in any flowcharts or flow diagrams, certain operations, acts, steps, or events included in any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, operations, acts, steps, or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially. Further, certain operations, acts, steps, or events may be performed automatically even if not specifically identified as being performed automatically. Also, certain operations, acts, steps, or events described as being performed automatically may be alternatively not performed automatically, but rather, such operations, acts, steps, or events may be, in some examples, performed in response to input or another event.
The disclosures of all publications, patents, and patent applications referred to herein are hereby incorporated by reference. To the extent that any such disclosure material that is incorporated by reference conflicts with the present disclosure, the present disclosure shall control.
For ease of illustration, only a limited number of devices (e.g., computing system 340, controller 450, downstream system 460, as well as others) are shown within the Figures and/or in other illustrations referenced herein. However, techniques in accordance with one or more aspects of the present disclosure may be performed with many more of such systems, components, devices, modules, and/or other items, and collective references to such systems, components, devices, modules, and/or other items may represent any number of such systems, components, devices, modules, and/or other items.
The Figures included herein each illustrate at least one example implementation of an aspect of this disclosure. The scope of this disclosure is not, however, limited to such implementations. Accordingly, other example or alternative implementations of systems, methods or techniques described herein, beyond those illustrated in the Figures, may be appropriate in other instances. Such implementations may include a subset of the devices and/or components included in the Figures and/or may include additional devices and/or components not shown in the Figures.
The detailed description set forth above is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a sufficient understanding of the various concepts. However, these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in the referenced figures in order to avoid obscuring such concepts.
Accordingly, although one or more implementations of various systems, devices, and/or components may be described with reference to specific Figures, such systems, devices, and/or components may be implemented in a number of different ways. For instance, one or more devices illustrated herein as separate devices may alternatively be implemented as a single device; one or more components illustrated as separate components may alternatively be implemented as a single component. Also, in some examples, one or more devices illustrated in the Figures herein as a single device may alternatively be implemented as multiple devices; one or more components illustrated as a single component may alternatively be implemented as multiple components. Each of such multiple devices and/or components may be directly coupled via wired or wireless communication and/or remotely coupled via one or more networks. Also, one or more devices or components that may be illustrated in various Figures herein may alternatively be implemented as part of another device or component not shown in such Figures. In this and other ways, some of the functions described herein may be performed via distributed processing by two or more devices or components.
Further, certain operations, techniques, features, and/or functions may be described herein as being performed by specific components, devices, and/or modules. In other examples, such operations, techniques, features, and/or functions may be performed by different components, devices, or modules. Accordingly, some operations, techniques, features, and/or functions that may be described herein as being attributed to one or more components, devices, or modules may, in other examples, be attributed to other components, devices, and/or modules, even if not specifically described herein in such a manner.
Although specific advantages have been identified in connection with descriptions of some examples, various other examples may include some, none, or all of the enumerated advantages. Other advantages, technical or otherwise, may become apparent to one of ordinary skill in the art from the present disclosure. Further, although specific examples have been disclosed herein, aspects of this disclosure may be implemented using any number of techniques, whether currently known or not, and accordingly, the present disclosure is not limited to the examples specifically described and/or illustrated in this disclosure.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored, as one or more instructions or code, on and/or transmitted over a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (e.g., pursuant to a communication protocol). In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can include RAM, ROM, EEPROM, optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection may properly be termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a wired (e.g., coaxial cable, fiber optic cable, twisted pair) or wireless (e.g., infrared, radio, and microwave) connection, then the wired or wireless connection is included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the terms “processor” or “processing circuitry” as used herein may each refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described. In addition, in some examples, the functionality described may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, a mobile or non-mobile computing device, a wearable or non-wearable computing device, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperating hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.