Not Applicable.
This disclosure relates to the field of seismic exploration. More specifically, although not exclusively, the disclosure relates to methods for processing seismic data using inversion to obtain a set of parameters deemed most likely to represent structure and composition of formations in a region of interest in the Earth's subsurface.
A specific challenge in the oil and gas industry is determining the Earth's subsurface properties from data obtained by seismic reflection surveying. In seismic reflection surveying, one or more seismic energy sources are used to impart seismic energy into the Earth, which then propagates as a seismic wavefield. The seismic wavefield interacts with elastic heterogeneities in the subsurface. As a result, some of the seismic energy returns to one or more receivers, which measure properties, e.g., pressure or particle motion, of the seismic wavefield as a function of time relative to actuation of the seismic source and of the locations of the receivers. The seismic wavefield can contain a superposition of waves caused by different types of wavefield interaction with features in the subsurface, such as waves caused by reflections, refractions, and mode conversion. Combinations of these types of waves are used in many different known processing techniques to help estimate the Earth's subsurface properties and their spatial distribution. Examples of these estimated properties include, without limitation, seismic velocity, seismic anisotropy, acoustic impedance, seismic amplitude versus offset/angle (AVO/A) and reflectivity. Interpretation of acquired seismic data may provide a perceived spatial distribution of such properties in the Earth. Such a perceived distribution is referred to as a “model” of the subsurface.
A typical seismic survey provides significantly more acquired data over the surveyed region of interest in the subsurface than there are model parameters. This abundance of data makes the problem of determining a model of the subsurface (e.g., a spatial distribution of physical properties estimated from the acquired seismic data) overdetermined. That is, it is generally impossible to find a set of model parameter values that will completely explain all aspects of the collected data. While it is impossible in these circumstances to directly obtain a true solution, that is, a model that correctly represents such subsurface spatial distribution, an estimate of the true solution can be obtained through a process known as inversion, which uses assessments of how well the model properties fit the observed data. See Vozoff et al., 1975.
The process of inversion uses a forward modelling operator, L, which synthesizes, for seismic data, the wavefield that would have been detected by the seismic receivers (i.e., observed) if the Earth's subsurface had formations and their properties spatially distributed as described by an input model of the subsurface. The modelling operator may be, for seismic surveying, some appropriate form of a wave equation. The resulting synthetic (“modelled”) data are then compared with the observed (measured) data and any differences between the two are assumed to be due to errors in the model, whether in values of the properties, their spatial distribution, or both. The difference between the modelled and observed data is termed the “residual” vector. In order to improve the fit between the modelled and observed data, and thus reduce the errors in the model, a quantitative measure of misfit between the modelled and observed data is required. This quantity is often called the “cost” or “objective” function. To illustrate this disclosure, the objective function is chosen to be the square of the “L2 norm” of the residual vector. Methods based upon the L2 norm are often referred to as “least squares” methods because they seek to minimize the sum of the squared errors. However, it is to be understood that other measures of the misfit also fall within the scope of this disclosure and that use of the L2 norm to illustrate the technique is not intended to be a limitation on the scope of the present disclosure. The goal of inversion processing is to find model properties which minimize the cost or objective function. When the objective function is minimized, the modelled data match the recorded data in a least squares sense. If the model properties are not yet optimal, then it is necessary to change them in a way that reduces the L2 norm of the residual vector.
Known inversion methods obtain such results by applying the “adjoint state” method to the modelling operator, which allows determination of the gradient of the objective function with respect to values of the model parameters. The gradient describes how incremental changes in the model parameter values affect the value of the objective function, and thus permits selection of a direction in which one or more of the model parameters needs to be adjusted or changed to improve the fit of the modelled data to the observed data. The precise set of new model properties that will improve the fit may be calculated by an optimization algorithm such as steepest descent or L-BFGS (the limited memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm). These steps are repeated until the L2 norm is “sufficiently small” (e.g., is below a selected threshold) or some other inversion stopping criterion is satisfied. Formally, the objective function may be given by the expression:
J=∥L(m)−d∥₂² (1)
where J is the objective function, L is the modelling operator which describes how to synthesize data from the model properties, m, and d represents the observed data.
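To make Eq. (1) concrete, the objective function can be sketched numerically. The following is a minimal illustration only, not the disclosed method: it assumes a linear modelling operator L represented as a matrix, with synthetic observed data, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear modelling operator L (a matrix for illustration only;
# in seismic inversion L would be a wave-equation solver), true model, and
# observed data synthesized from the true model.
L = rng.normal(size=(50, 10))   # 50 data samples, 10 model parameters
m_true = rng.normal(size=10)
d = L @ m_true                  # noise-free "observed" data

def objective(m):
    """Eq. (1): J = ||L(m) - d||_2^2, the squared L2 norm of the residual."""
    r = L @ m - d               # residual vector
    return float(r @ r)

print(objective(np.zeros(10)) > 0.0)   # a wrong model has positive misfit
print(objective(m_true) == 0.0)        # the true model minimizes J
```

The inversion task described in the text is then to move from the zero model toward a model whose misfit approaches that of the true model.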
The term “parameter class” may be used to signify a particular type of model parameter, such as velocity or density. Inversion, where more than one parameter class is sought, is commonly referred to as “multi-parameter” inversion. Multi-parameter inversion presents an additional challenge if there is coupling between two or more of the parameter classes, resulting in what is termed “cross-talk” (Operto et al., 2013). Under these circumstances the inversion may attribute some of the energy in the residual vector to one or more of the wrong parameter classes. This occurs because perturbations in parameters of different classes can have a similar effect on the modelled data. For example, consider a hypothetical experiment in which an anisotropy parameter class is already correctly specified and an inversion is then performed in which both velocity and seismic anisotropy are sought. Since it is common for the wavefield arrival time to be influenced by both velocity and seismic anisotropy, time differences between the modelled and observed data can be ascribed to both velocity and seismic anisotropy. The foregoing hypothetical inversion could adjust originally correct (best fitting) seismic anisotropy parameters so that they are no longer correct, while insufficient modification may have been ascribed to velocity. The inversion may not be able to recover from these conditions without further operator/user intervention.
When both model parameters in
A second consideration in multi-parameter inversion is where different parameter classes have different scales. For example, in seismic exploration, compressional (p-wave) velocity will be of the order of 1500-6000 meters per second (m/s), whereas common anisotropy values are of the order of 0.01-0.2. This difference in scale is 4-5 orders of magnitude between model parameter classes, which means that a change in the value of parameters in some classes produces a larger variation in the objective function than a similar-magnitude change in parameters of other classes. The same issue arises in single parameter class inversion if the model properties have a wide range of sensitivities. For example, seismic waves may illuminate some regions of the Earth's subsurface much more than others, which leads to the weakly-illuminated regions having much less influence on the objective function. Poorly scaled parameters and weak “illumination” typically result in slow inversion convergence rates because the direction of descent will be biased towards some parameters and have elongated objective functions like those shown in
There are many optimization algorithms that can be used to solve inversion problems, each with attendant strengths and weaknesses. For example, first order optimization methods like “steepest descent” only make use of the gradient and the model parameters from the previous iteration in a set of iterations. A single iteration is simple and computationally cheap. However, because steepest descent suffers from parameter cross-talk and scale differences, convergence is slow and requires many iterations. The convergence rate of the steepest descent method can be improved by exploiting a more extensive history of previous steps, rather than simply relying on the last step. Examples of this are “momentum” techniques, “adaptive gradient” approaches such as AdaGrad (Duchi et al., 2011) and the closely related Shampoo technique (Gupta et al., 2018), and RMSprop (Hinton, 2018). However, for complex non-linear problems, higher order optimization schemes may perform better than first order optimization.
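As an illustration of the first order approach described above, a steepest descent loop on the linear least-squares objective of Eq. (1) might look as follows. This is a toy sketch with hypothetical sizes, not the disclosed method:

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(30, 5))    # hypothetical linear modelling operator
m_true = rng.normal(size=5)
d = L @ m_true                  # synthetic observed data

def gradient(m):
    # For J = ||Lm - d||^2 the gradient is 2 L^T (Lm - d).
    return 2.0 * L.T @ (L @ m - d)

# Fixed step chosen from the largest curvature of the Hessian 2 L^T L to keep
# the iteration stable; in practice a line search would be used.
step = 1.0 / (2.0 * np.linalg.norm(L.T @ L, 2))
m = np.zeros(5)
for _ in range(500):            # many iterations: steepest descent converges slowly
    m = m - step * gradient(m)

print(np.allclose(m, m_true, atol=1e-3))
```

The large iteration count needed even on this tiny, well-scaled problem hints at why poorly scaled, cross-talking parameters slow convergence so severely.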
Second order optimization approaches, for example, make use of the gradient (which, as described above, is the first derivative of the objective function with respect to the model parameters) and second derivative information. The second derivative may be described by the Hessian matrix (sometimes simply the “Hessian”), which succinctly provides information about the curvature of the objective function. “Newton's method” (Newton, 1711; Raphson, 1690) is an example of a second order optimization scheme using the Hessian. As a result, it converges much faster than steepest descent and is relatively immune to cross-talk and scaling issues. However, implementation of Newton's method on large problems with billions of unknowns is not practical because it requires that the Hessian matrix be formed explicitly in computer memory and that the inverse Hessian, H−1, be well defined. Consequently, approaches that approximate the inverse Hessian matrix at each iteration, known as “quasi-Newton” methods, have been developed, e.g., the BFGS method described above. The BFGS method uses the complete history of all iterations in an inversion to build an approximate inverse Hessian matrix. However, for such large problems, even using the BFGS approach, it is prohibitive to store, e.g., in computer memory, the necessary information from all previous iterations. A modification of this approach, limited memory BFGS or “L-BFGS” optimization, uses a diagonal-plus-low-rank approximation of the inverse Hessian matrix constructed using only a few of the most recent iterations, instead of the complete history. L-BFGS is practical for large-scale problems and converges faster than steepest descent. However, because of the nature of the L-BFGS approximation, it does not adequately deal with cross-talk and parameter scaling.
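The diagonal-plus-low-rank approximation that L-BFGS applies is usually implemented as the standard “two-loop recursion” (Nocedal and Wright, 2006). A minimal sketch follows; the function and variable names are illustrative, not from this disclosure:

```python
import numpy as np

def lbfgs_direction(grad, s_hist, y_hist):
    """Apply the L-BFGS inverse-Hessian estimate to grad (two-loop recursion).

    s_hist / y_hist hold the model changes and gradient changes from the few
    most recent iterations (oldest first), instead of the complete history."""
    q = grad.astype(float).copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_hist, y_hist)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_hist, y_hist, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # Initial estimate of the inverse Hessian: the scaled identity alpha*I,
    # with alpha taken from the curvature along the most recent step.
    s, y = s_hist[-1], y_hist[-1]
    q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(s_hist, y_hist, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return q                      # the search direction is -q

# Usage on a toy quadratic, where y = H s guarantees s.y > 0.
rng = np.random.default_rng(5)
A = rng.normal(size=(8, 8))
H = A @ A.T + 8 * np.eye(8)       # hypothetical positive definite Hessian
s_hist = [rng.normal(size=8) for _ in range(3)]
y_hist = [H @ s for s in s_hist]
g = rng.normal(size=8)
q = lbfgs_direction(g, s_hist, y_hist)
print(g @ q > 0)                  # positive definite estimate => descent direction
```

The scaled-identity initial estimate inside this recursion is precisely the term the present disclosure proposes to replace.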
A good estimate of the inverse Hessian matrix is important not only for multi-parameter inversion but also for single parameter class inversion. The sensitivity of the model parameters varies for several reasons, including the geometry of acquisition and the parameter (e.g., velocity) spatial distribution of the Earth. Unless all parts of any particular region of interest in the Earth are adequately illuminated by seismic waves, there will be weak, or no, information about those parts of the Earth. As a result, convergence will be slow. If the full Hessian could be computed and it were sufficiently well-conditioned to be inverted, then the inverse Hessian would fully compensate for illumination and cross-talk. Unfortunately, the lesser approximations used in quasi-Newton methods have more limited ability to compensate for illumination and resolve cross-talk. As a result, it is common practice to additionally include “preconditioners” to assist in aspects such as illumination compensation and to generally help accelerate convergence in quasi-Newton inversion schemes.
An alternative method to estimate the inverse Hessian efficiently involves the use of “matching filters” (Guitton, 2017). This method makes use of the prior knowledge that the effect of model parameter changes on the gradient (which is the information contained in the Hessian matrix) is approximately localized around the position of the modified model parameter.
There continues to be a need for inversion processing methods which better address scaling and cross-talk.
One aspect of the present disclosure is a method for determining spatial distribution of properties of formations in a subsurface region of interest using geophysical sensor signals detected proximate the region of interest. The method includes inversion of an initial model of the spatial distribution. The inversion comprises at least second order optimization. The second order optimization comprises substitution of a scaled identity matrix in limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization with an alternative scaled matrix βM, with the values comprising the matrix M being derived from a combination of data obtained at previous iteration steps and prior knowledge of the nature of the inversion problem, and using the alternative scaled matrix βM to improve the inversion.
Another aspect of this disclosure is a computer program stored in a non-transitory computer readable medium. The program has logic operable to cause a programmable computer to perform actions for determining spatial distribution of properties of formations in a region of interest in the Earth's subsurface using geophysical sensor signals detected proximate the region of interest. The actions include accepting as input to the computer the geophysical sensor signals and inversion of an initial model of the spatial distribution. The inversion comprises at least second order optimization. The at least second order optimization comprises substitution of a scaled identity matrix in limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization with an alternative scaled matrix βM with values comprising the alternative matrix M being derived from a combination of data obtained at previous iteration steps and prior knowledge of the nature of the inversion problem, and using the alternative scaled matrix βM to improve the inversion. The initial model is finalized when a value of an objective function in the inversion processing is minimized.
Another aspect of this disclosure relates to a method for determining spatial distribution of properties of formations in a region of interest in the subsurface using geophysical sensor signals detected proximate the region. The method according to this aspect includes inversion processing an initial model of the spatial distribution. The inversion processing comprises at least second order optimization. The at least second order optimization comprises calculating an estimate of an inverse Hessian as a convolutional operator (C), and using the estimated inverse Hessian matrix in a modified limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. The modified L-BFGS optimization is used to optimize the inversion processing.
A computer program stored in a non-transitory computer readable medium according to another aspect of this disclosure includes logic operable to cause a programmable computer to perform actions including the following. An initial model of spatial distribution of properties of formations in a region below the Earth's surface is entered into the computer, along with measurements relating to the properties. The initial model is inversion processed. The inversion processing comprises at least second order optimization. The at least second order optimization comprises calculating an estimate of an inverse Hessian as a convolutional operator (C), and using the estimated inverse Hessian in a modified limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. The modified L-BFGS optimization is used to optimize the inversion processing. For any one or more of the foregoing aspects of this disclosure, the following may apply.
Inversion processing in the various aspects of the present disclosure includes calculating expected geophysical sensor signals using an initial model of the spatial distribution. The expected geophysical sensor signals are compared to the detected signals. In the various aspects of the present disclosure, the optimized inversion provides as an output a spatial distribution, i.e., an updated model thereof, for which calculated expected geophysical sensor signals most closely match the detected signals.
In some embodiments, the geophysical sensor signals comprise seismic signals.
In some embodiments, values comprising an alternative scaled matrix M are derived from previous iteration steps using an adaptive gradient (AdaGrad) type scheme.
In some embodiments, off-diagonal adaptive gradient terms are included to improve estimation of the inverse Hessian matrix.
In some embodiments, the alternative scaled matrix M is described as a non-stationary convolutional operator (C) or linear combination of convolutional operators or a product of convolutional operators or a combination of products of convolutional operators and linear combinations of convolutional operators.
In some embodiments, the convolutional operators are derived from previous iteration steps using match filtering (F).
In some embodiments, off-diagonal terms in the estimated inverse Hessian matrix are modified to preserve a positive definite property.
In some embodiments, the match filtering is performed in a transformed domain comprising at least one of the curvelet, Fourier, Radon, or wavelet domains.
In some embodiments, the match filtering is applied in at least one dimension.
In some embodiments, match filtering is applied in overlapping windows.
In some embodiments, the adaptive gradient type scheme is regularized and/or constrained.
In some embodiments, the estimated inverse Hessian matrix (βM) is obtained using a combination of data obtained at previous iteration steps.
Other aspects and possible advantages will be apparent from the description and claims that follow.
Embodiments of a method according to the present disclosure may have particular application in and are described with reference to processing seismic data, however, such processing is not intended to limit the scope of the present disclosure. Moreover, without limiting the generality of uses to which methods according to this disclosure may apply, such methods may be used with any form of geophysical sensor data or geophysical sensor signals, that is, any data or signals obtained by one or more sensors in response to naturally occurring phenomena such as natural gamma radiation or voltage impressed on an electrode in a well (spontaneous potential), or induced in the sensor in response to imparting energy into the earth, such as electromagnetic energy to measure resistivity, acoustic energy to measure acoustic velocity, or nuclear radiation to measure density or neutron porosity.
In some embodiments, only one source vessel may be used. According to the present disclosure, any number of source vessels may be used and the following description is not intended to limit the scope of the present disclosure. The source vessels move along the surface 16A of a body of water 16 such as a lake or the ocean. In the present example, a vessel referred to as a “primary source vessel” 10 may include equipment, shown generally at 14, that comprises components or subsystems (none of which is shown separately) for navigation of the primary source vessel 10, for actuation of seismic energy sources and for retrieving and processing seismic signal recordings. The primary source vessel 10 is shown towing two, spaced apart seismic energy sources 18, 18A.
The equipment 14 on the primary source vessel 10 may be in signal communication with corresponding equipment 13 (including similar components to the equipment on the primary source vessel 10) disposed on a vessel referred to as a “secondary source vessel” 12. The secondary source vessel 12 in the present example also tows spaced apart seismic energy sources 20, 20A near the water surface 16A. In the present example, the equipment 14 on the primary source vessel 10 may, for example, send a control signal to the corresponding equipment 13 on the secondary source vessel 12, such as by radio telemetry, to indicate the time of actuation (firing) of each of the sources 18, 18A towed by the primary source vessel 10. The corresponding equipment 13 may, in response to such signal, actuate the seismic energy sources 20, 20A towed by the secondary source vessel 12.
The seismic energy sources 18, 18A, 20, 20A may be, for example and without limitation, air guns, water guns, marine vibrators, or arrays of such devices. The seismic energy sources are shown as discrete devices in
In
Although the description of acquiring signals explained with reference to
Having explained an example method for acquiring seismic data, the present disclosure will now explain example embodiments of data processing methods. In general, data processing according to the present disclosure comprises inversion processing. Seismic data, which represent recorded seismic signals made by a plurality of spaced apart seismic sensors with respect to time of actuation of one or more seismic sources, e.g., as in
In this disclosure, at least second order optimization is performed. In the present example embodiment, a novel combination of limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization with a general adaptive gradient (AdaGrad) type scheme may be performed, along with an optional extension to include the use of match filtering. The extension to include match filtering can be used with or without the inclusion of the adaptive gradient (AdaGrad) type scheme. A method according to the present disclosure to estimate the inverse Hessian matrix in L-BFGS optimization may improve the inversion in the presence of cross-talk, may apply suitable parameter scaling and may accelerate convergence to a solution (the recalculated earth model that results in minimized objective function). Methods according to the present disclosure are well suited to not only multi-parameter inversion, but also to single parameter class inversion. The disclosed method(s) may eliminate the need for illumination compensation preconditioners in inversion processing.
L-BFGS is a second order quasi-Newton optimization method that has the objective of approximating or estimating the inverse Hessian matrix (Ĥ−1) with a (computer) memory efficient diagonal-plus-low-rank representation. An important step in this approach, which is known in the art, estimates the diagonal part of the approximation from a scaled version of the identity matrix, Ĥ−1=αI, where the scalar α is based on the curvature along the most recent search direction (Nocedal and Wright, 2006) and I is the identity matrix. The foregoing may be expressed as:
α=sTy/yTy (2)
where y and s are the change in gradient and the change in the model, respectively, since the previous inversion iteration. L-BFGS requires that the approximated or estimated inverse Hessian matrix be “positive definite” (that is, xTĤ−1x>0 for an arbitrary non-zero vector, x). This criterion must be met in order to obtain a decrease in the objective function at each iteration. To be positive definite requires that the scalar be positive, that is, α>0 in Eq. (2). This condition follows because the Wolfe conditions (Wolfe, 1969) ensure that sTy>0, and it is always the case that yTy>0. Using alpha (α) derived from Eq. (2) results in the unit step length being likely to satisfy the Wolfe conditions. The foregoing scalar, α, can be thought of as a constant stretching or squeezing of the objective function space.
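The positivity argument above can be checked numerically on a toy quadratic objective, where y = Hs for a positive definite H, so that the curvature condition sTy > 0 holds by construction (an illustrative sketch, not the disclosed method):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
H = A @ A.T + 6 * np.eye(6)   # positive definite Hessian of a toy quadratic
s = rng.normal(size=6)        # change in the model since the last iteration
y = H @ s                     # corresponding change in the gradient

alpha = (s @ y) / (y @ y)     # Eq. (2): scalar for the alpha*I initial estimate
print(s @ y > 0)              # Wolfe curvature condition satisfied
print(alpha > 0)              # hence alpha > 0: positive definiteness preserved
```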
In an inversion where the inverted parameters are of different scales, a scalar multiple of the identity matrix is inappropriate because the single scalar will not compensate for the difference in sensitivities between or among the parameters. A single scalar also fails to take account of the range of possible scales that might exist in a single parameter class due to variations in illumination.
A possible improvement disclosed herein is to choose a plurality of scalars, q=pm scalars, where p is the number of parameter classes and m is the number of elements in each parameter class. These scalars can be obtained by using an AdaGrad type method, which appears to help improve the L-BFGS inverse Hessian estimation. It is proposed initially to modify αI to become βDiagMDiag, in which the new quantity βDiag is provided by the expression:
with the diagonal matrix, MDiag given by the expression:
Here, n is the number of iterations, g is the gradient and ε is a small number to stabilize the division. Note that like α, it is guaranteed that βDiag>0, which is necessary to ensure that the positive definite property required by L-BFGS is preserved. As there are now many scalars as opposed to just one, the functional space can be stretched or squeezed by different amounts in different directions. The inversion is free to warp (but not rotate) the functional space to provide a more optimal decrease in the objective function in each iteration.
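The diagonal construction can be illustrated with a standard AdaGrad-style accumulation. The exact form used here, the reciprocal square root of accumulated squared gradients with a stabilizing ε, is an assumption for illustration and not necessarily the precise form of MDiag above:

```python
import numpy as np

def adagrad_diagonal(grad_history, eps=1e-8):
    """Per-parameter scalars from accumulated squared gradients.

    A standard AdaGrad-style form, shown for illustration: the diagonal of a
    hypothetical M_Diag is 1 / (sqrt(sum_i g_i**2) + eps), so parameters with
    historically large gradients receive small weights and vice versa."""
    accum = np.sum(np.square(grad_history), axis=0)   # sum over n iterations
    return 1.0 / (np.sqrt(accum) + eps)

# Two parameter classes with the scale disparity discussed earlier
# (velocity-scale gradients versus anisotropy-scale gradients).
grads = np.array([[1500.0, 0.01],
                  [3000.0, 0.05]])
w = adagrad_diagonal(grads)
print(np.all(w > 0))   # every scalar positive: positive definiteness preserved
print(w[0] < w[1])     # the large-scale parameter class is down-weighted
```

This is the sense in which the functional space is stretched or squeezed by different amounts in different directions.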
This approach is not limited to improving the diagonal estimate of the inverse Hessian matrix as shown above; off-diagonal terms can also be included to help reduce parameter cross-talk. For example, using:
where S is some sparsity imposing matrix (which could be as simple as the identity matrix, yielding the scheme described above) and “∘” denotes the Hadamard (element-by-element) product. S controls which components of Σi=0n gigiT are used to calculate MS. For example, selection of the diagonal and additional off-diagonal elements can be used to describe illumination compensation and coupling between model parameter classes. This rotates as well as stretches/squeezes the objective function. There is no guarantee that the inclusion of off-diagonal elements will automatically satisfy the positive definite requirement. However, Schur's product theorem guarantees that the Hadamard product of two positive definite matrices is also positive definite. Therefore, so long as both Σi=0n gigiT and S are positive definite, the positive definite requirement is guaranteed to be met.
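Schur's product theorem, relied upon above, can be verified numerically: the Hadamard product of two symmetric positive definite matrices again has strictly positive eigenvalues (a toy check, with both factors made strictly positive definite by construction):

```python
import numpy as np

rng = np.random.default_rng(3)

def random_pd(n):
    """A random symmetric positive definite matrix (by construction)."""
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

G = random_pd(5)   # stands in for the gradient outer-product sum, made PD here
S = random_pd(5)   # a sparsity-imposing matrix, dense in this illustration
M = G * S          # Hadamard (element-by-element) product

eigvals = np.linalg.eigvalsh(M)   # M is symmetric, so eigenvalues are real
print(np.all(eigvals > 0))        # positive definite, as the theorem guarantees
```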
An improvement on the estimate of the inverse Hessian can also be achieved using non-stationary convolutional operators, such as match filters. In this case, an improvement of the estimate of the inverse Hessian in L-BFGS is sought by finding a matching filter operator, F, such that:
Fy≅s (6)
This approach can be combined with the AdaGrad type scheme in the following way:
subject to the positive definite requirement that yTβFMy>0. Note that M can be any implementation mentioned above. Indeed, M could also represent any combination of data obtained at previous iteration steps and prior knowledge of the nature of the inversion problem. Recall that s is the change in the model parameter(s) due to the most recent inversion iteration and y is the change in gradient of the objective function due to the most recent iteration. Both are symbolically ordered as vectors, but in reality they may be thought of as 3- or 4-dimensional volumes of parameter classes that are the same size as the model, m. Although the match filter operator bears similarities to Guitton's match filters, the present match filters are designed on different inputs and applied in a substantially different way. The matching filter F represents a filtering operator over n dimensions which can be constructed using linear or non-linear least squares. Although n may include 1 to 3 spatial dimensions, for example Cartesian coordinates (x, y, z), even higher dimensions are possible. Indeed, the domain in which the matched filters are designed and/or applied is not limited to the conventional space domain of (x, y, z), but can include, without limitation, transform domains such as the Fourier, Radon, curvelet and wavelet transforms. The matching filters are non-stationary and may be computed in an overlapping windowed scheme. It is important to note that the matching filters may enhance the estimate of both the diagonal and off-diagonal terms of the inverse Hessian, which reduces the number of iterations required to obtain satisfactory convergence.
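In one dimension, designing a matching filter F such that Fy≅s (Eq. (6)) amounts to a small linear least-squares problem for the filter coefficients. The following sketch uses hypothetical sizes and a causal, same-length convolution; it illustrates the principle only and is not the disclosed multi-dimensional, windowed design:

```python
import numpy as np

def design_matching_filter(y, s, nf):
    """Least-squares 1-D matching filter f such that (f convolved with y) ~ s.

    Builds the causal, same-length convolution matrix of y and solves for
    the nf filter coefficients with np.linalg.lstsq."""
    n = len(y)
    Y = np.zeros((n, nf))
    for k in range(nf):          # column k applies a delay of k samples
        Y[k:, k] = y[:n - k]
    f, *_ = np.linalg.lstsq(Y, s, rcond=None)
    return f

rng = np.random.default_rng(4)
y = rng.normal(size=200)                 # change in gradient (toy stand-in)
f_true = np.array([0.5, -0.3, 0.1])      # hypothetical short filter
s = np.convolve(y, f_true)[:200]         # change in model consistent with f_true
f = design_matching_filter(y, s, 3)
print(np.allclose(f, f_true))            # the filter is recovered
```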
F need not be restricted to match filtering or limited to a single type of operation. Generally, provided that the positive definite requirement is met, any number of operators, Ck,j, can be used to better approximate the L-BFGS inverse Hessian. These operators may, for example, be some mixture of products and linear combinations, according to the following expression:
βΣj=1j=p(Πk=1k=wpCk,j)My=s (8)
where wp is the total number of operators. These additional operators may be used flexibly to represent such things as useful a priori knowledge about the inverse Hessian to better condition the optimization. Although this has been written above in a form for use with an AdaGrad-type scheme it may also be used without AdaGrad by setting M=I and β=1.
Using the above formulation, initially each parameter class, and preferably every element in the model will receive an equal weight in the inversion. With each subsequent iteration every element in the model will receive updated weights that modify the sensitivity of the objective function to each element of the model, so that the relative sensitivities and coupling of the various parameters are compensated. This compensates for any natural parameter bias and variations in illumination. As a result, it is not necessary to use preconditioners to compensate for illumination deficiencies. The original L-BFGS formulation, which uses a multiple of the identity matrix as part of its inverse Hessian estimation, has been replaced with quantities that can vary adaptively between parameter classes for each element in the model, significantly accelerating convergence.
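Inside an L-BFGS two-loop recursion, the substitution described above amounts to replacing the single multiplication by α with an element-wise application of βM. A sketch for the diagonal case is shown below; the particular choice of β (matching the curvature along the most recent step in the M-weighted inner product, by analogy with the αI case) is an illustrative assumption, not necessarily the form used in this disclosure:

```python
import numpy as np

def scaled_initial_estimate(q, s, y, m_diag):
    """Replace q *= alpha with q = beta * M q for a diagonal M (illustrative).

    beta is chosen, by analogy with the alpha*I case, so that
    y^T (beta M) y = s^T y, which stays positive under the Wolfe conditions."""
    beta = (s @ y) / (y @ (m_diag * y))   # assumed form of the scalar beta
    return beta * m_diag * q

# Toy check on a quadratic, where y = H s guarantees s.y > 0.
rng = np.random.default_rng(6)
A = rng.normal(size=(10, 10))
H = A @ A.T + 10 * np.eye(10)
s = rng.normal(size=10)
y = H @ s
m_diag = rng.uniform(0.5, 2.0, size=10)   # positive per-parameter weights

out = scaled_initial_estimate(y, s, y, m_diag)
print(np.isclose(y @ out, s @ y))         # curvature matched along the last step
print(s @ y > 0)                          # positive definiteness preserved
```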
Because real data contains noise as well as signal, regularization may be included to reduce the influence of noise on the inversion process.
A high-level flowchart illustrating the present example embodiment of a method is shown in
In some embodiments, at 42, the scaled identity matrix αI is replaced by non-stationary match filters, F using Eq. (6), to directly obtain an improved inverse Hessian, as shown at 45.
In an example using an embodiment of a method according to the present disclosure, a marine 3D survey with 2 sources and 12 streamers has been used, e.g., as explained with reference to
A new optimization scheme combining a quasi-Newton L-BFGS optimizer with an adaptive gradient type approach to improve inverse Hessian estimation has been disclosed. It has also been disclosed that application of convolution filters, such as matching filters, or more generally any suitable operator can be accommodated into the structure of this scheme so long as they improve the estimate of the inverse Hessian. The improved estimate of the inverse Hessian helps to mitigate cross-talk between parameter classes in multi-parameter inversion and to scale the sensitivity of the objective function to each model element. This is achieved by modifying the diagonal and off-diagonal elements of the inverse Hessian which rotates and stretches/squeezes the objective function. As a result, illumination compensation occurs more naturally without the need for preconditioners. The convergence rate of this new approach on large scale problems is significantly faster than the standard L-BFGS approach.
Although this new approach has been disclosed in relation to seismic exploration, the new method has considerable general application in many other areas of endeavor that could have otherwise used the standard L-BFGS approach.
All of the above calculations may be performed in any general purpose or purpose specific computer or processor.
The processor(s) 104 may also be connected to a network interface 108 to allow the individual computer system 101A to communicate over a data network 110 with one or more additional individual computer systems and/or computing systems, such as 101B, 101C, and/or 101D. Note that computer systems 101B, 101C and/or 101D may or may not share the same architecture as computer system 101A, and may be located in different physical locations; for example, computer systems 101A and 101B may be at a well drilling location, while in communication with one or more computer systems such as 101C and/or 101D that may be located in one or more data centers on shore, aboard ships, and/or in varying countries on different continents.
A processor may include, without limitation, a microprocessor, microcontroller, processor module or subsystem, programmable integrated circuit, programmable gate array, or another control or computing device.
The storage media 106 may be implemented as one or more computer-readable or machine-readable storage media. Note that while in the example embodiment of
It should be appreciated that computing system 100 is only one example of a computing system, and that any other embodiment of a computing system may have more or fewer components than shown, may combine additional components not shown in the example embodiment of
Further, the acts of the processing methods described above may be implemented by running one or more functional modules in information processing apparatus such as general purpose processors or application specific chips, such as ASICs, FPGAs, PLDs, GPUs, coprocessors or other appropriate devices. These modules, combinations of these modules, and/or their combination with general hardware are all included within the scope of the present disclosure.
In light of the principles and example embodiments described and illustrated herein, it will be recognized that the example embodiments can be modified in arrangement and detail without departing from such principles. The foregoing discussion has focused on specific embodiments, but other configurations are also contemplated. In particular, even though expressions such as in “an embodiment,” or the like are used herein, these phrases are meant to generally reference embodiment possibilities, and are not intended to limit the disclosure to particular embodiment configurations. As used herein, these terms may reference the same or different embodiments that are combinable into other embodiments. As a rule, any embodiment referenced herein is freely combinable with any one or more of the other embodiments referenced herein, and any number of features of different embodiments are combinable with one another, unless indicated otherwise. Although only a few examples have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible within the scope of the described examples. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the following claims.
Continuation of International Application No. PCT/US2022/022169 filed on Mar. 28, 2022. Priority is claimed from U.S. Provisional Application No. 63/167,332 filed on Mar. 29, 2021. Each of the foregoing applications is incorporated herein by reference in its entirety.
Number | Date | Country | |
---|---|---|---|
63167332 | Mar 2021 | US |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/US2022/022169 | Mar 2022 | US |
Child | 18476877 | US |