This disclosure relates generally to the field of neural networks, and more particularly to resilience determination and damage recovery in machine learning models such as neural networks.
Despite the importance of resilience in technology applications, the resilience of artificial neural networks is poorly understood, and autonomous recovery algorithms have yet to be developed. There is a need to endow artificial systems with resilience and rapid-recovery routines to enable their deployment for critical applications.
Disclosed herein include methods for updating weights of a neural network. In some embodiments, a method for updating weights of a neural network is under control of a processor (e.g., a hardware processor or a virtual processor) and comprises: (a) providing (or receiving) a neural network comprising a plurality of weights. The method can comprise: (b) determining one or more weights of the plurality of weights of the neural network are damaged. The method can comprise: (c) determining first updated weights corresponding to one or more weights of the plurality of weights of the neural network that are undamaged using a geodesic path in a weight space comprising the plurality of weights of the neural network. The method can comprise: (d) updating the weights that are undamaged with the first updated weights to generate a first updated neural network.
Disclosed herein include methods for updating weights of a neural network. In some embodiments, a method for updating weights of a neural network is under control of a processor (e.g., a hardware processor or a virtual processor) and comprises: (a) providing (or receiving) a neural network comprising a plurality of weights. One or more weights of the plurality of weights of the neural network can be damaged. The method can comprise: (c) determining first updated weights corresponding to one or more weights of the plurality of weights of the neural network that are undamaged using a geodesic path in a weight space comprising the plurality of weights of the neural network. The method can comprise: (d) updating the weights that are undamaged with the first updated weights to generate a first updated neural network.
Disclosed herein include methods for updating weights of a neural network. In some embodiments, a method for updating weights of a neural network is under control of a processor (e.g., a hardware processor or a virtual processor) and comprises: (a) providing (or receiving) a neural network comprising a plurality of weights. One or more first weights of the plurality of weights of the neural network can be damaged. The method can comprise: (c) determining first updated weights corresponding to one or more weights of the plurality of weights of the neural network that are undamaged using a geodesic path in a weight space comprising the plurality of weights of the neural network. The method can comprise: (d) updating the weights of the neural network that are undamaged with the first updated weights to generate a first updated neural network. Subsequent to (d), second weights of the plurality of weights of the first updated neural network may be damaged. The method can comprise: (c2) determining second updated weights corresponding to one or more weights of the plurality of weights of the first updated neural network that are undamaged subsequent to (d) using a geodesic path in the weight space. The method can comprise: (d2) updating the weights of the first updated neural network that are undamaged with the second updated weights to generate a second updated neural network.
In some embodiments, determining the first updated weights (or any updated weights of the present disclosure) comprises determining the geodesic path using a geodesic equation. In some embodiments, determining the first updated weights (or any updated weights of the present disclosure) comprises determining an approximation of the geodesic path using an approximation of the geodesic equation. The approximation of the geodesic equation can comprise a first order expansion of a loss function, optionally wherein the first order expansion comprises a Taylor expansion. Determining the first updated weights (or any updated weights of the present disclosure) can comprise determining the approximation of the geodesic equation using a metric (or a metric tensor). The metric can comprise a Riemannian metric, a pseudo-Riemannian metric, or a non-Euclidean metric. The combination of the weight space and the metric can comprise a Riemannian manifold or a pseudo-Riemannian manifold. The metric can comprise a positive semi-definite, symmetric matrix or a positive definite, symmetric matrix. The metric tensor can comprise a symmetric matrix, wherein the metric tensor is definite or semi-definite, wherein the metric is bilinear, and/or wherein the metric tensor is positive, or a combination thereof. The weight space can comprise a manifold, wherein the weight space comprises a smooth manifold, and/or wherein the weight space is homeomorphic to a Euclidean space.
In some embodiments, determining the first updated weights (or any updated weights of the present disclosure) comprises: determining a plurality of approximations of the geodesic path using an approximation of the geodesic equation. Determining the first updated weights (or any updated weights of the present disclosure) can comprise: selecting one of the plurality of approximations of the geodesic path as a best approximation of the geodesic path. The best approximation of the geodesic path can have the shortest total length, amongst the plurality of approximations of the geodesic path, to a damage hyperplane.
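By way of a non-limiting illustration, the following Python sketch shows one way a plurality of candidate approximations of a geodesic path could be scored by total length under a metric and the shortest one selected. The helper names (path_length, best_path), the toy identity metric, and the straight-line candidate paths are illustrative assumptions rather than a definitive implementation.

```python
import numpy as np

def path_length(path, metric_fn):
    """Total functional length of a discretized path: sum of sqrt(dw^T g dw) over segments."""
    length = 0.0
    for w0, w1 in zip(path[:-1], path[1:]):
        dw = w1 - w0
        length += np.sqrt(dw @ metric_fn(w0) @ dw)
    return length

def best_path(candidate_paths, metric_fn):
    """Select the candidate approximation with the shortest total length."""
    return min(candidate_paths, key=lambda p: path_length(p, metric_fn))

# Toy example: three straight-line candidates in a 4-weight space, each ending on the
# damage hyperplane where the first weight is zero; an identity metric is assumed.
metric_fn = lambda w: np.eye(w.size)
w_start = np.ones(4)
endpoints = [np.array([0.0, 1, 1, 1]), np.array([0.0, 2, 1, 1]), np.array([0.0, 1, 0, 1])]
candidates = [np.linspace(w_start, w_end, num=10) for w_end in endpoints]
print(best_path(candidates, metric_fn)[-1])  # endpoint of the shortest candidate path
```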
In some embodiments, the method comprises, prior to determining the one or more weights are damaged: receiving a first input. The method can comprise: determining a first output from the first input using the neural network. In some embodiments, determining the first output from the first input using the neural network (or any output from any input using any neural network of the present disclosure) corresponds to a task. The task can comprise a computation processing task, an information processing task, a sensory input processing task, a storage task, a retrieval task, a decision task, an image recognition task, and/or a speech recognition task. In some embodiments, the first input comprises an image. The task can comprise an image recognition task.
In some embodiments, the method comprises, subsequent to updating the weights that are undamaged with the first updated weights: receiving a second input. The method can comprise: determining a second output from the second input using the first updated neural network.
In some embodiments, determining the first updated weights and updating the weights that are undamaged with the first updated weights are performed iteratively for at least two iterations. In some embodiments, the method comprises, subsequent to updating the weights that are undamaged with the first updated weights: (c2) determining second updated weights corresponding to second weights of the plurality of weights of the neural network that are undamaged using the geodesic path in the weight space. The method can comprise: (d2) updating the second weights that are undamaged with the second updated weights to generate a second updated neural network. In some embodiments, the second updated neural network is on a damage hyperplane. In some embodiments, the first updated neural network is on a damage hyperplane. In some embodiments, the method comprises, subsequent to updating the second weights that are undamaged with the second updated weights: receiving a third input. The method can comprise: determining a third output from the third input using the second updated neural network.
In some embodiments, the neural network when provided comprises no weight that is damaged. In some embodiments, the neural network when provided comprises at least one weight that is damaged. In some embodiments, one or more of the one or more weights have values other than zeros when undamaged. In some embodiments, one or more of the one or more weights have values of zeros when damaged. In some embodiments, the method comprises setting the weights that are damaged to values of zeros.
In some embodiments, an accuracy of the neural network comprising no weight that is damaged is at least 90%. In some embodiments, an accuracy of the neural network comprising the weights that are damaged is at most 80%. In some embodiments, an accuracy of the neural network comprising the weights that are damaged is at most 90% of an accuracy of the neural network comprising no weight that is damaged. In some embodiments, an accuracy of the first updated neural network is at least 85%. In some embodiments, an accuracy of the neural network comprising the weights that are damaged is at most 90% of an accuracy of the first updated neural network. In some embodiments, an accuracy of the first updated neural network is at most 99% of an accuracy of the second updated neural network. In some embodiments, the weights of the plurality of weights of the neural network that are damaged comprise at least 5% of the plurality of weights of the neural network.
In some embodiments, the neural network comprises at least 100 weights. In some embodiments, the neural network comprises at least 25 nodes. In some embodiments, the neural network comprises at least 2 layers. In some embodiments, the neural network comprises a convolutional neural network (CNN), a deep neural network (DNN), a multilayer perceptron (MLP), or a combination thereof.
Disclosed herein include systems or devices. In some embodiments, a system or a device comprises non-transitory memory configured to store executable instructions and a neural network of the present disclosure. The system can comprise a processor (e.g., a hardware processor or a virtual processor) programmed by the executable instructions to perform: determining one or more weights of the plurality of weights of the neural network are damaged. The processor can be programmed by the executable instructions to perform: determining first updated weights corresponding to one or more weights of the plurality of weights of the neural network that are undamaged using a geodesic path in a weight space comprising the plurality of weights of the neural network. The processor can be programmed by the executable instructions to perform: updating the weights that are undamaged with the first updated weights to generate a first updated neural network. The non-transitory memory can be configured to store the first updated neural network.
Disclosed herein include systems or devices. In some embodiments, a system or a device comprises non-transitory memory configured to store executable instructions and a neural network of the present disclosure. One or more first weights of the plurality of weights of the neural network can be damaged. The system can comprise a processor (e.g., a hardware processor or a virtual processor) programmed by the executable instructions to perform: determining first updated weights corresponding to one or more weights of the plurality of weights of the neural network that are undamaged using a geodesic path in a weight space comprising the plurality of weights of the neural network. The processor can be programmed by the executable instructions to perform: updating the weights of the neural network that are undamaged with the first updated weights to generate a first updated neural network. Second weights of the plurality of weights of the first updated neural network may be damaged after the first updated weights are determined. The processor can be programmed by the executable instructions to perform: determining second updated weights corresponding to one or more weights of the plurality of weights of the first updated neural network, that are undamaged after the first updated weights are determined, using a geodesic path in the weight space. The processor can be programmed by the executable instructions to perform: updating the weights of the first updated neural network that are undamaged with the second updated weights to generate a second updated neural network.
In some embodiments, determining the first updated weights (or any updated weights of the present disclosure) comprises determining the geodesic path using a geodesic equation. In some embodiments, determining the first updated weights (or any updated weights of the present disclosure) comprises determining an approximation of the geodesic path using an approximation of the geodesic equation. The approximation of the geodesic equation can comprise a first order expansion of a loss function, optionally wherein the first order expansion comprises a Taylor expansion. Determining the first updated weights (or any updated weights of the present disclosure) can comprise determining the approximation of the geodesic equation using a metric (or a metric tensor). The metric can comprise a Riemannian metric, a pseudo-Riemannian metric, or a non-Euclidean metric. The combination of the weight space and the metric can comprise a Riemannian manifold or a pseudo-Riemannian manifold. The metric can comprise a positive semi-definite, symmetric matrix or a positive definite, symmetric matrix. The metric tensor can comprise a symmetric matrix, wherein the metric tensor is definite or semi-definite, wherein the metric is bilinear, and/or wherein the metric tensor is positive, or a combination thereof. The weight space can comprise a manifold, wherein the weight space comprises a smooth manifold, and/or wherein the weight space is homeomorphic to a Euclidean space.
In some embodiments, determining the first updated weights (or any updated weights of the present disclosure) comprises: determining a plurality of approximations of the geodesic path using an approximation of the geodesic equation. Determining the first updated weights (or any updated weights of the present disclosure) can comprise: selecting one of the plurality of approximations of the geodesic path as a best approximation of the geodesic path. The best approximation of the geodesic path can have the shortest total length, amongst the plurality of approximations of the geodesic path, to a damage hyperplane.
In some embodiments, the processor is programmed by the executable instructions to perform, prior to determining the one or more weights are damaged: receiving a first input. The processor can be programmed by the executable instructions to perform: determining a first output from the first input using the neural network. In some embodiments, determining the first output from the first input using the neural network (or any output from any input using any neural network of the present disclosure) corresponds to a task. The task can comprise a computation processing task, an information processing task, a sensory input processing task, a storage task, a retrieval task, a decision task, an image recognition task, and/or a speech recognition task. In some embodiments, the first input comprises an image. The task can comprise an image recognition task.
In some embodiments, the processor is programmed by the executable instructions to perform, subsequent to updating the weights that are undamaged with the first updated weights: receiving a second input. The processor can be programmed by the executable instructions to perform: determining a second output from the second input using the first updated neural network.
In some embodiments, determining the first updated weights and updating the weights that are undamaged with the first updated weights are performed iteratively for at least two iterations. In some embodiments, the processor can be programmed by the executable instructions to perform, subsequent to updating the weights that are undamaged with the first updated weights: (c2) determining second updated weights corresponding to second weights of the plurality of weights of the neural network that are undamaged using the geodesic path in the weight space. The processor can be programmed by the executable instructions to perform: (d2) updating the second weights that are undamaged with the second updated weights to generate a second updated neural network. In some embodiments, the second updated neural network is on a damage hyperplane. In some embodiments, the first updated neural network is on a damage hyperplane. In some embodiments, the processor is programmed by the executable instructions to perform, subsequent to updating the second weights that are undamaged with the second updated weights: receiving a third input. The processor can be programmed by the executable instructions to perform: determining a third output from the third input using the second updated neural network.
In some embodiments, the neural network when provided comprises no weight that is damaged. In some embodiments, the neural network when provided comprises at least one weight that is damaged. In some embodiments, one or more of the one or more weights have values other than zeros when undamaged. In some embodiments, one or more of the one or more weights have values of zeros when damaged. In some embodiments, the processor is programmed by the executable instructions to perform: setting the weights that are damaged to values of zeros.
In some embodiments, an accuracy of the neural network comprising no weight that is damaged is at least 90%. In some embodiments, an accuracy of the neural network comprising the weights that are damaged is at most 80%. In some embodiments, an accuracy of the neural network comprising the weights that are damaged is at most 90% of an accuracy of the neural network comprising no weight that is damaged. In some embodiments, an accuracy of the first updated neural network is at least 85%. In some embodiments, an accuracy of the neural network comprising the weights that are damaged is at most 90% of an accuracy of the first updated neural network. In some embodiments, an accuracy of the first updated neural network is at most 99% of an accuracy of the second updated neural network. In some embodiments, the weights of the plurality of weights of the neural network that are damaged comprise at least 5% of the plurality of weights of the neural network.
In some embodiments, the neural network comprises at least 100 weights. In some embodiments, the neural network comprises at least 25 nodes. In some embodiments, the neural network comprises at least 2 layers. In some embodiments, the neural network comprises a convolutional neural network (CNN), a deep neural network (DNN), a multilayer perceptron (MLP), or a combination thereof.
In some embodiments, the system comprises, or is comprised in, an edge device, an internet of things (IoT) device, a real-time image analysis system, a real-time sensor analysis system, an autonomous driving system, an autonomous vehicle, a robotic control system, a robot, or a combination thereof. In some embodiments, the processor comprises a neuromorphic processor.
Disclosed herein include computer readable media. In some embodiments, a computer readable medium comprises executable instructions that, when executed by a hardware processor of a computing system or a device, cause the hardware processor to perform any method disclosed herein.
Details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Neither this summary nor the following detailed description purports to define or limit the scope of the inventive subject matter.
Throughout the drawings, reference numbers may be re-used to indicate correspondence between referenced elements. The drawings are provided to illustrate example embodiments described herein and are not intended to limit the scope of the disclosure.
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments can be utilized, and other changes can be made, without departing from the spirit or scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the Figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein and made part of the disclosure herein.
All patents, published patent applications, other publications, and sequences from GenBank and other databases referred to herein are incorporated by reference in their entirety with respect to the related technology.
Disclosed herein include methods for updating weights of a neural network. In some embodiments, a method for updating weights of a neural network is under control of a processor (e.g., a hardware processor or a virtual processor) and comprises: (a) providing (or receiving) a neural network comprising a plurality of weights. The method can comprise: (b) determining one or more weights of the plurality of weights of the neural network are damaged. The method can comprise: (c) determining first updated weights corresponding to one or more weights of the plurality of weights of the neural network that are undamaged using a geodesic path in a weight space comprising the plurality of weights of the neural network. The method can comprise: (d) updating the weights that are undamaged with the first updated weights to generate a first updated neural network.
Disclosed herein include methods for updating weights of a neural network. In some embodiments, a method for updating weights of a neural network is under control of a processor (e.g., a hardware processor or a virtual processor) and comprises: (a) providing (or receiving) a neural network comprising a plurality of weights. One or more weights of the plurality of weights of the neural network can be damaged. The method can comprise: (c) determining first updated weights corresponding to one or more weights of the plurality of weights of the neural network that are undamaged using a geodesic path in a weight space comprising the plurality of weights of the neural network. The method can comprise: (d) updating the weights that are undamaged with the first updated weights to generate a first updated neural network.
Disclosed herein include methods for updating weights of a neural network. In some embodiments, a method for updating weights of a neural network is under control of a processor (e.g., a hardware processor or a virtual processor) and comprises: (a) providing (or receiving) a neural network comprising a plurality of weights. One or more first weights of the plurality of weights of the neural network can be damaged. The method can comprise: (c) determining first updated weights corresponding to one or more weights of the plurality of weights of the neural network that are undamaged using a geodesic path in a weight space comprising the plurality of weights of the neural network. The method can comprise: (d) updating the weights of the neural network that are undamaged with the first updated weights to generate a first updated neural network. Subsequent to (d), second weights of the plurality of weights of the first updated neural network may be damaged. The method can comprise: (c2) determining second updated weights corresponding to one or more weights of the plurality of weights of the first updated neural network that are undamaged subsequent to (d) using a geodesic path in the weight space. The method can comprise: (d2) updating the weights of the first updated neural network that are undamaged with the second updated weights to generate a second updated neural network.
Disclosed herein include systems or devices. In some embodiments, a system or a device comprises non-transitory memory configured to store executable instructions and a neural network of the present disclosure. The system can comprise a processor (e.g., a hardware processor or a virtual processor) programmed by the executable instructions to perform: determining one or more weights of the plurality of weights of the neural network are damaged. The processor can be programmed by the executable instructions to perform: determining first updated weights corresponding to one or more weights of the plurality of weights of the neural network that are undamaged using a geodesic path in a weight space comprising the plurality of weights of the neural network. The processor can be programmed by the executable instructions to perform: updating the weights that are undamaged with the first updated weights to generate a first updated neural network. The non-transitory memory can be configured to store the first updated neural network.
Disclosed herein include systems or devices. In some embodiments, a system or a device comprises non-transitory memory configured to store executable instructions and a neural network of the present disclosure. One or more first weights of the plurality of weights of the neural network can be damaged. The system can comprise a processor (e.g., a hardware processor or a virtual processor) programmed by the executable instructions to perform: determining first updated weights corresponding to one or more weights of the plurality of weights of the neural network that are undamaged using a geodesic path in a weight space comprising the plurality of weights of the neural network. The processor can be programmed by the executable instructions to perform: updating the weights of the neural network that are undamaged with the first updated weights to generate a first updated neural network. Second weights of the plurality of weights of the first updated neural network may be damaged after the first updated weights are determined. The processor can be programmed by the executable instructions to perform: determining second updated weights corresponding to one or more weights of the plurality of weights of the first updated neural network, that are undamaged after the first updated weights are determined, using a geodesic path in the weight space. The processor can be programmed by the executable instructions to perform: updating the weights of the first updated neural network that are undamaged with the second updated weights to generate a second updated neural network.
Disclosed herein include systems or devices. In some embodiments, a system or a device comprises non-transitory memory configured to store executable instructions. The system can comprise a processor (e.g., a hardware processor or a virtual processor) programmed by the executable instructions to perform: any method of the disclosure. Disclosed herein include computer readable media. In some embodiments, a computer readable medium comprises executable instructions that, when executed by a hardware processor of a computing system or a device, cause the hardware processor to perform any method disclosed herein.
Biological neural networks have evolved to maintain performance despite significant circuit damage. To survive damage, biological network architectures have both intrinsic resilience to component loss and also activate recovery programs that adjust network weights through plasticity to stabilize performance. Despite the importance of resilience in technology applications, the resilience of artificial neural networks is poorly understood, and autonomous recovery algorithms have yet to be developed. The present disclosure provides a mathematical framework to analyze the resilience of artificial neural networks through the lens of differential geometry. The geometric language disclosed herein provides natural algorithms that identify local vulnerabilities in trained networks as well as recovery algorithms that dynamically adjust networks to compensate for damage. The present disclosure shows striking weight perturbation vulnerabilities in common image analysis architectures, including Multi-Layer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs) trained on MNIST and CIFAR-10 respectively. Methods to uncover high-performance recovery paths that enable the same networks to dynamically re-adjust their parameters to compensate for damage are provided. The present disclosure provides methods that endow artificial systems with resilience and rapid-recovery routines to enable their deployment for critical applications.
Brains are remarkable machines whose computational capabilities have inspired many breakthroughs in machine learning. However, the resilience of the brain, its ability to maintain computational capabilities in harsh conditions and following circuit damage, remains poorly developed in current artificial intelligence paradigms. Biological neural networks are known to implement redundancy and other architectural features that allow circuits to maintain performance following loss of neurons or lesion to sub-circuits. In addition to architectural resilience, biological neural networks execute recovery programs that allow circuits to repair themselves through the activation of network plasticity following damage. For example, recovery algorithms reestablish olfactory and visual behaviors in mammals following sensory specific cortical circuit lesions. Through resilience and recovery mechanisms, biological neural networks can maintain steady performance in the face of dynamic challenges like changing external environments, cell damage, partial circuit loss as well as catastrophic injuries like the loss of large sections of the cortex.
Like brains, artificial neural networks must increasingly execute critical applications that require robustness to both hardware component damage and memory errors that could corrupt network weights. Network robustness to soft errors that can lead to weight corruption and network failure is important in applications including (i) decision-making in the healthcare industry, (ii) image and sensor analysis in self-driving cars and (iii) robotic control systems. Errors in dynamic random-access memory can occur due to malicious attacks (e.g., the RowHammer attack), but a particular focus has been on errors induced by high energy particles that occur at surprising rates. Further, the rising implementation of neural networks on physical hardware (like neuromorphic and edge devices), where networks can be disconnected from the internet and are under control of an end user, necessitates damage-resilient and dynamically recovering artificial neural networks.
The resilience of living neural networks motivates theoretical and practical efforts to understand the resilience of artificial neural networks and to design new algorithms that reverse engineer resilience and recovery into artificial systems. Studies have demonstrated empirically that MLP and CNN architectures can be surprisingly robust to large scale node deletion. However, there is currently little understanding of the empirically observed resilience or of what ultimately causes networks to fail. Mathematical frameworks are important for understanding the resilience of neural networks and for developing recovery methods that can maintain network performance during damage.
A mathematical framework grounded in differential geometry is disclosed herein for studying the resilience and the recovery of artificial neural networks. Damage/response behavior is formalized as dynamic movement on a curved pseudo-Riemannian manifold. Geometric language provides new methods for identifying network vulnerabilities by predicting local perturbations that adversely impact the functional performance of the network. Further, it is demonstrated that geodesics, minimum length paths, on the weight manifold provide high performance recovery paths that the network can traverse to maintain performance while damaged. The algorithms disclosed herein allow networks to maintain high performance during rounds of damage and repair through computationally efficient weight-update algorithms that do not require conventional retraining. In some embodiments, the present disclosure provides methods that help endow artificial systems with resilience and autonomous recovery policies to emulate the properties of biological neural networks.
Analyzing Network Resilience with Differential Geometry
A geometric framework is disclosed herein for understanding how artificial neural networks (or machine learning models in general) respond to damage, using differential geometry to analyze changes in functional performance given changes in network weights. Layered neural networks have intrinsic robustness properties. A geometric approach is provided herein for understanding robustness as arising from underlying geometric properties of the weight manifold that are quantified by the metric tensor. The geometric approach allows for the identification of vulnerabilities in common neural network architectures and defines new strategies for repairing damaged networks.
A feed-forward neural network can be represented as a smooth, $C^\infty$ function $f(x, w)$ that maps an input vector, $x \in \mathbb{R}^k$, to an output vector, $f(x, w) = y \in \mathbb{R}^m$. A $C^\infty$ function is a function that is differentiable for all degrees of differentiation. The function $f(x, w)$ is parameterized by a vector of weights, $w \in \mathbb{R}^n$, that are typically set in training to solve a specific task. $\mathbb{R}^n$ is referred to as the weight space ($W$) of the network, and $F = \mathbb{R}^m$ is referred to as the functional manifold. In addition to $f$, in some embodiments, a loss function is of interest, $L: \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}$, that provides a scalar measure of network performance for a given task.
It may be asked how the performance of a trained neural network, $w_t$, will change when subjected to weight perturbation, shifting $w_{trained} \to w_{damaged}$. Differential geometry can be used to develop a mathematical theory, rooted in a functional notion of distance, to analyze how arbitrary weight perturbations $w_t \to w_d$ impact the functional performance of a network. Specifically, a local distance metric, $g$, is constructed that can be applied at any point in $W$ to measure the functional impact of an arbitrary network perturbation.
To construct a metric mathematically, the input, $x$, into a network is fixed and it is asked how the output of the network, $f(x, w)$, moves on the functional manifold, $F$, given an infinitesimal weight perturbation, $du$, in $W$, where $w_d = w_t + du$. For an infinitesimal perturbation $du$,

$$f(x, w_t + du) \approx f(x, w_t) + J_{w_t}\, du,$$

where $J_{w_t} = \partial f / \partial w$ is the Jacobian matrix of the network output with respect to the weights, evaluated at $w_t$. The change in functional performance given $du$ is measured as the mean squared error

$$d(w_t, w_t + du) = |f(x, w_t + du) - f(x, w_t)|^2 = du^T (J_{w_t}^T J_{w_t})\, du = du^T g_{w_t}\, du,$$

where $g_{w_t} = J_{w_t}^T J_{w_t}$ is the metric tensor evaluated at $w_t$. Explicitly,

$$g_{ij} = \sum_k \frac{\partial f_k}{\partial w_i} \frac{\partial f_k}{\partial w_j},$$

where the partial derivatives $\partial f_k / \partial w_i$ measure the change in the functional output of a network given a change in weight. The Additional Details section below describes the extension of the metric formulation to cases where a set, $X$, of training data is considered and $g$ is viewed as the average of metrics derived from individual training examples. The metric, $g$, provides a local measure of functional distance on the pseudo-Riemannian manifold $(W, g)$. At each point in weight space, the metric defines the length, $\langle du, du \rangle_w$, of a local perturbation by its impact on the functional output of the network.
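As a concrete, non-limiting illustration of this construction, the following Python sketch computes the metric tensor $g = J^T J$ for a toy two-layer tanh network, with a finite-difference Jacobian standing in for exact derivatives; the network, the layer sizes, and the perturbation scale are assumptions made only for illustration. The predicted functional change $du^T g\, du$ is compared with the directly measured change.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 5, 3
n_weights = n_hidden * n_in + n_out * n_hidden

def forward(x, w):
    """Toy two-layer network f(x, w); w is a flat weight vector."""
    W1 = w[: n_hidden * n_in].reshape(n_hidden, n_in)
    W2 = w[n_hidden * n_in :].reshape(n_out, n_hidden)
    return W2 @ np.tanh(W1 @ x)

def jacobian(x, w, eps=1e-5):
    """Finite-difference Jacobian J = df/dw with shape (n_out, n_weights)."""
    f0 = forward(x, w)
    J = np.zeros((f0.size, w.size))
    for i in range(w.size):
        dw = np.zeros_like(w)
        dw[i] = eps
        J[:, i] = (forward(x, w + dw) - f0) / eps
    return J

x = rng.normal(size=n_in)
w_t = rng.normal(size=n_weights)
J = jacobian(x, w_t)
g = J.T @ J                                   # metric tensor g evaluated at w_t

du = 1e-3 * rng.normal(size=n_weights)        # small weight perturbation
d_metric = du @ g @ du                        # predicted functional change du^T g du
d_actual = np.sum((forward(x, w_t + du) - forward(x, w_t)) ** 2)
print(d_metric, d_actual)                     # the two should agree to first order
```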
Globally, the metric can be used to determine the functional performance change across a path-connected set of networks. Mathematically, the metric changes as one moves in $W$ due to the curvature of the ambient space, which reflects changes in the vulnerability of a network to weight perturbation. The functional length of a path $\gamma(t) \in W$ can be written as

$$L(\gamma) = \int_0^1 \sqrt{\langle \dot{\gamma}(t), \dot{\gamma}(t) \rangle_{\gamma(t)}}\; dt,$$

where $\sqrt{\langle \dot{\gamma}(t), \dot{\gamma}(t) \rangle_{\gamma(t)}}\; dt$ is the infinitesimal functional change accrued while traversing the path $\gamma(t) \in W$.
In what follows, the resilience of neural networks is studied by analyzing the structure of the metric tensor along paths in weight space. The metric tensor can be used to develop recovery methods by finding 'geodesic paths', minimum length paths, in the pseudo-Riemannian manifold that allow networks to respond to damage while suffering minimal performance degradation.
In some embodiments, the mathematical framework can be first applied to analyze the response of trained neural networks to small, local weight perturbations. Trained networks are often robust to small, local weight perturbation. Local resilience can be connected to the spectral properties of the metric tensor, g, at a given position, wt, in weight space. As described herein, networks are typically robust to random local weight perturbations but also have catastrophic vulnerabilities to specific low magnitude weight perturbations that dramatically alter network performance.
To understand local damage, a trained network, $w_t$, is considered, and the network is subjected to an infinitesimal weight perturbation in a direction $du = \sum_i c_i\, dw_i$, yielding the perturbed weights $w' = w_t + du$. Here $dw_i$ is used to indicate an infinitesimal displacement vector in the direction $w_i$. Formally, $du$ is viewed as a vector in the tangent space of $W$ at $w_t$, $T_{w_t}W$.

As a positive semi-definite, symmetric matrix, $g$ (evaluated at $w_t$) has an orthonormal eigenbasis $\{v_i\}$ with eigenvalues $\lambda_i \ge 0$. The eigenvalue $\lambda_i$ locally determines how a perturbation along the eigenvector $v_i$ will alter functional performance. Expanding an arbitrary perturbation, $du$, in the basis $\{v_i\}$ as $du = \sum_i c_i v_i$, the functional performance change of the network is

$$d(w_t, w_t + du) = \sum_i \lambda_i c_i^2,$$

where $c_i = \langle du, v_i \rangle$ quantifies the contribution of vector $v_i$ to $du$. Thus, the performance change, $d(w_t, w_t + du)$, incurred by a network following perturbation $du$ is determined by the magnitude of each $\lambda_i$ and the projection of $du$ onto $v_i$. The eigenvalues $\lambda_i$ convert weight changes into changes in functional performance and so have units of performance change per unit weight change.

A network will be resilient to weight perturbations directed along eigenvectors, $v_i$, with small eigenvalues ($\lambda_i < 10^{-3}$). Alternately, networks are vulnerable to perturbations along directions with larger eigenvalues ($\lambda_i > 10^{-3}$). The definition of resilient directions, $\lambda_i < 10^{-3}$, is an operational definition that selects directions where a unit of weight change will produce a performance change of less than $10^{-3}$, or 0.1%.

Mathematically, the resilience of networks to randomly distributed weight perturbations can be understood by calculating the average response of a network to Gaussian weight perturbations, $du \sim P(du)$, where $P(du_i) = \mathcal{N}(0, \sigma/n)$ (with $n = \dim(W)$ and $\mathbb{E}[\lVert du \rVert^2] = \sigma$). The expectation of the induced performance change for such a Gaussian perturbation is

$$\mathbb{E}[d(w_t, w_t + du)] \le \sigma \rho \lambda_1, \qquad (10)$$

where $\rho$ indicates the fraction of vulnerable directions, and $\lambda_1$ is the largest eigenvalue of $g$.
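The spectral analysis above can be illustrated with the following Python sketch. A synthetic low-rank metric $g = J^T J$ is built from a random stand-in Jacobian (an assumption for illustration; in practice $g$ would be computed from a trained network as described above), its eigendecomposition is used to count vulnerable directions, and the response to a single Gaussian perturbation is compared against the bound of Equation (10).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 3                                  # many weights, few outputs -> g is low rank
J = rng.normal(size=(m, n))                   # stand-in Jacobian of a trained network
g = J.T @ J                                   # metric tensor: symmetric, positive semi-definite

eigvals, eigvecs = np.linalg.eigh(g)          # ascending eigenvalues, orthonormal eigenbasis
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
rho = np.mean(eigvals > 1e-3)                 # fraction of 'vulnerable' directions
lambda_1 = eigvals[0]                         # largest eigenvalue of g

sigma = 1.0
du = rng.normal(scale=np.sqrt(sigma / n), size=n)   # Gaussian perturbation with E[||du||^2] = sigma
d_perf = du @ g @ du                                # induced performance change du^T g du
print(d_perf, sigma * rho * lambda_1)               # observed change vs. the sigma*rho*lambda_1 bound
```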
Empirically, trained networks were found to be, perhaps as expected, robust to 'random' local perturbation. Consistent with their eigenspectra (VGG-11: $\rho < 10^{-4}$; MLP-1, MLP-2: $\rho < 10^{-3}$), both MLP and CNN architectures exhibit minimal performance degradation for unit-ball perturbations ($\sigma = 1$); due to the high dimensionality of the space, unit-ball perturbations induce an average weight change of less than $10^{-6}$ for individual weights. Resilience to such small local perturbations might be expected, but the present framework also exposes hidden catastrophic vulnerabilities to perturbations of the same order in both networks. By designing adversarial weight perturbations to lie along the 'vulnerable' eigenvectors of $g$ ($v_i$ with large $\lambda_i$), sharp performance declines can be induced across architectures.
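The following Python sketch contrasts a random unit-norm perturbation with an adversarial perturbation of the same norm aligned with the most vulnerable eigenvector of $g$. The random stand-in Jacobian is again an illustrative assumption rather than the metric of any particular trained architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 3
J = rng.normal(size=(m, n))                   # stand-in Jacobian of a trained network
g = J.T @ J                                   # metric tensor at the trained weights

eigvals, eigvecs = np.linalg.eigh(g)
v_top = eigvecs[:, -1]                        # eigenvector with the largest eigenvalue (most vulnerable)

du_random = rng.normal(size=n)
du_random /= np.linalg.norm(du_random)        # random perturbation, unit norm
du_adversarial = v_top                        # adversarial perturbation, same unit norm

print("random:     ", du_random @ g @ du_random)
print("adversarial:", du_adversarial @ g @ du_adversarial)   # equals the largest eigenvalue of g
```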
In some embodiments, after a neural network is trained, its resiliency (e.g., the existence and/or the number of adversarial perturbations, the effects of adversarial perturbations) can be determined. If the resiliency of the neural network is not satisfactory, another neural network model can be retrained. The process can be repeated until a neural network with satisfactory resiliency is obtained. In some embodiments, a system or a device (e.g., an edge device, an internet of things (IoT) device, a real-time image analysis system, a real-time sensor analysis system, an autonomous driving system, an autonomous vehicle, a robotic control system, a robot, or a combination thereof) can comprise a neural network with satisfactory resiliency. In some embodiments, a neural network with satisfactory resiliency can be used to perform a task. The task can comprise a computation processing task, an information processing task, a sensory input processing task, a storage task, a retrieval task, a decision task, an image recognition task, and/or a speech recognition task.
Trained MLPs and CNNs can be surprisingly robust to much more profound global damage, including large scale node deletion. In this section, a concept of break-down acceleration is developed using the covariant derivative of a network along paths connecting the trained network and the damaged network in W. Break-down acceleration predicts failure points that emerge in weight space through rapid changes in the curvature of the weight space, and ultimately allows the methods developed and described herein to thwart break-down by avoiding acceleration.
Mathematically, global damage can be represented as a path in weight space, $\gamma(t) \in W$ with $t \in [0, 1]$, that connects a trained network, $\gamma(0) = w_t$, to its damaged counterpart, $\gamma(1) = w_d$.
Along a path $\gamma(t) \in W$, the velocity vector,

$$v(t) = \frac{d\gamma(t)}{dt},$$

quantifies the change in the functional performance of a network per unit time. Mathematically, the break-down speed ($s$) of a network along a path in weight space is defined as the norm of the network's velocity vector computed using the metric tensor,

$$s(t) = \sqrt{\langle v(t), v(t) \rangle_{\gamma(t)}} = \sqrt{v(t)^T g_{\gamma(t)}\, v(t)}.$$

Non-linear break-down points emerge along paths in $W$ when the break-down speed undergoes a rapid acceleration, so that $\frac{ds}{dt} \gg 0$. The break-down speed and acceleration can be calculated explicitly for a network following a simple straight, or Euclidean, path from a trained to a damaged configuration. Taking $w_d = 0$, $\gamma(t) = w_t(1 - t)$ and $v(t) = -w_t$, this gives

$$s(t) = \sqrt{\sum_{i,j} w_{t,i}\, g_{ij}(\gamma(t))\, w_{t,j}},$$

where $g_{ij}$ is evaluated along $\gamma(t)$. The change in the metric tensor, $\frac{dg}{dt}$, along a path $\gamma(t)$ thus determines whether performance decays at a constant rate ($\frac{ds}{dt} \approx 0$) or at an accelerating rate ($\frac{ds}{dt} > 0$). For curved paths, break-down acceleration can be analyzed using an object known as the covariant derivative, $\nabla_{\dot{\gamma}(t)} v(t)$ (see the Additional Details section below).
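To make the straight-path calculation concrete, the following Python sketch evaluates the break-down speed $s(t)$ along the Euclidean damage path $\gamma(t) = w_t(1 - t)$ for a toy two-layer network; the network, the layer sizes, and the finite-difference metric are illustrative assumptions rather than the architectures analyzed above.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden, n_out = 4, 5, 3
n_weights = n_hidden * n_in + n_out * n_hidden

def forward(x, w):
    """Toy two-layer network f(x, w)."""
    W1 = w[: n_hidden * n_in].reshape(n_hidden, n_in)
    W2 = w[n_hidden * n_in :].reshape(n_out, n_hidden)
    return W2 @ np.tanh(W1 @ x)

def metric(x, w, eps=1e-5):
    """g = J^T J with a finite-difference Jacobian."""
    f0 = forward(x, w)
    J = np.zeros((f0.size, w.size))
    for i in range(w.size):
        dw = np.zeros_like(w)
        dw[i] = eps
        J[:, i] = (forward(x, w + dw) - f0) / eps
    return J.T @ J

x = rng.normal(size=n_in)
w_t = rng.normal(size=n_weights)
v = -w_t                                      # velocity of the straight path gamma(t) = w_t * (1 - t)

for t in np.linspace(0.0, 0.9, 4):
    w = w_t * (1.0 - t)                       # point along the damage path toward w_d = 0
    s = np.sqrt(v @ metric(x, w) @ v)         # break-down speed s(t) = sqrt(v^T g(gamma(t)) v)
    print(f"t={t:.1f}  break-down speed={s:.4f}")
```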
In practice, calculation of the break-down acceleration identifies damage failure points in real neural networks. For example, both MLP-1 and VGG-11 architectures tolerate considerable node deletion.
Thus, globally, network break-down occurs along a damage path in W due to abrupt changes in the curvature of the underlying functional landscape that result in abrupt change in the metric. Disclosed herein is a method for designing recovery protocols that can adapt a neural network's weights to compensate for damage based on the mathematical connection between break-down and curvature. Recovery mechanisms exist in neuroscience that compensate for damage by altering the weights of undamaged nodes. The concept of break-down acceleration can be applied to develop recovery methods for artificial neural networks that compensate for damage through continuous adjustment of the undamaged weights by minimizing the acceleration along the path.
Mathematically, minimum acceleration paths in weight space are known as geodesic paths. Geodesic paths, by definition, provide both minimum length and minimum acceleration paths in weight space. Specifically, a trained network, $w$, is considered that is subjected to weight damage that zeros a subset of weights, $w_i = 0$ for $i \in n_{damaged}$. The method responds to damage by adjusting the undamaged weights, $w_i$ for $i \notin n_{damaged}$, to maximize network performance by moving the network along a geodesic in $W$. Geodesic paths can be computed directly using the metric $g$ and also represent the minimum distance paths (with distance defined in Equation 6) between two points on $W$. Geodesic paths can typically be calculated using the geodesic equation (see the Additional Details section below), an ordinary differential equation that uses derivatives of the metric tensor to identify minimum acceleration paths in a space given an initial velocity. However, solutions to the geodesic equation are computationally prohibitive for large neural networks as they require evaluation of the Christoffel symbols, which scale as a third order polynomial of the number of parameters in the neural network ($\mathcal{O}(n^3)$).
Therefore, an approximation to the geodesic equation was developed using a first order expansion of the loss function. Given a trained network, the method updates the weights of the network to optimize performance given a direction of damage. To discover a geodesic path $\gamma(t)$, the method begins at a trained network and iteratively solves for the tangent vector, $\theta(w)$, at every point, $w = \gamma(t)$, along the path, starting from $w_t$ and terminating at the damage hyperplane, $W_d$. The damage hyperplane is the set of all networks, $w \in W$, such that $w_i = 0$ for $i \in n_{damaged}$. Specifically, the following is solved:

$$\underset{\theta(w)}{\operatorname{argmin}} \;\; \langle \theta(w), \theta(w) \rangle_w - \beta\, \theta(w)^T \nu_w \quad \text{subject to: } \theta(w)^T \theta(w) \le 0.01. \qquad (12)$$

The tangent vector $\theta(w)$ is obtained by simultaneously optimizing two objective functions: (1) minimizing the increase in functional distance along the path measured by the metric tensor $g_w$ [min: $\langle \theta(w), \theta(w) \rangle_w = \theta(w)^T g_w\, \theta(w)$] and (2) maximizing the dot-product between the tangent vector and $\nu_w$, a vector pointing in the direction of the damage hyperplane [max: $\theta(w)^T \nu_w$], to enable movement towards the damage hyperplane. By finding geodesic paths to the damage hyperplane, the method can find weight adjustments that can be made within a network during damage to maintain performance.

The optimization method can be described as a quadratic program that trades off, through the hyper-parameter $\beta$, motion towards the damage hyperplane and the maximization of the functional performance of the intermediate networks along the path (the optimization method is elaborated in the Additional Details section below). The method discovers multiple paths from the trained network $w_t$ to the damage hyperplane $W_d$.
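One rough, non-authoritative way to realize the quadratic program of Equation (12) in code is sketched below: at each step, the regularized linear system $(g + \epsilon I)\theta = (\beta/2)\nu$ is solved for the tangent vector, $\theta$ is rescaled to respect the norm constraint, and the weights are moved along $\theta$ until the damaged weights reach the damage hyperplane. The toy network, the finite-difference metric, the construction of $\nu_w$ from the damaged indices, the ridge regularizer, and the choice of $\beta$ are all illustrative assumptions; a full implementation would use a proper constrained solver and the training data.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_hidden, n_out = 4, 5, 3
n_weights = n_hidden * n_in + n_out * n_hidden

def forward(x, w):
    W1 = w[: n_hidden * n_in].reshape(n_hidden, n_in)
    W2 = w[n_hidden * n_in :].reshape(n_out, n_hidden)
    return W2 @ np.tanh(W1 @ x)

def metric(x, w, eps=1e-5):
    """g = J^T J with a finite-difference Jacobian."""
    f0 = forward(x, w)
    J = np.zeros((f0.size, w.size))
    for i in range(w.size):
        dw = np.zeros_like(w)
        dw[i] = eps
        J[:, i] = (forward(x, w + dw) - f0) / eps
    return J.T @ J

def tangent_step(x, w, damaged, beta=1.0, max_sq_norm=0.01, ridge=1e-3):
    """Approximate solve of Eq. (12): min theta^T g theta - beta * theta^T nu, ||theta||^2 <= 0.01."""
    g = metric(x, w)
    nu = np.zeros_like(w)
    nu[damaged] = -w[damaged]                         # direction toward the damage hyperplane
    theta = np.linalg.solve(g + ridge * np.eye(w.size), 0.5 * beta * nu)
    sq_norm = theta @ theta
    if sq_norm > max_sq_norm:                         # enforce the norm (trust-region) constraint
        theta *= np.sqrt(max_sq_norm / sq_norm)
    return theta

x = rng.normal(size=n_in)
w = rng.normal(size=n_weights)
damaged = np.arange(5)                                # indices of weights to be driven to zero
for _ in range(500):                                  # walk the approximate geodesic
    w = w + tangent_step(x, w, damaged)
    if np.max(np.abs(w[damaged])) < 1e-3:             # reached the damage hyperplane
        break
print(np.round(w[damaged], 4))                        # damaged weights after the recovery walk
```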
The geodesic method enables damage compensation by continuously updating weights in the network. The geodesic method can be applied to discover recovery paths from a trained network (VGG-11) to a pre-defined damage hyperplane.
While high-performance paths can also be discovered through heuristic fine-tuning, the geodesic method is both rational and computationally efficient. Specifically, heuristic recovery paths can be obtained through an iterative prune-train cycle achieved through structured pruning of a single node at a time, coupled with stochastic gradient descent (SGD) retraining.
Additionally, the same geodesic method enables one to dynamically shift networks between different weight configurations (e.g., from a dense to a sparse configuration, or vice versa) while maintaining performance.
Neural networks incorporated in IoT devices or networks used for critical applications need to maintain very high functional performance at all times during the lifetime of the device. That is, it is desirable for these networks to be robust to local and global damage (perturbation). This section shows that, by endowing networks with the ability to self-recover rapidly (within a single epoch, at times), networks that constantly compensate for vulnerabilities and damage can endure substantially more damage than networks that are not equipped with recovery procedures.
A mathematical framework has been established to analyze the resilience of neural networks through the lens of differential geometry. A functional distance metric on a Riemannian weight manifold is disclosed herein. The metric tensor, the covariant derivative, and the geodesic can be applied to predict the response of networks to local and global damage. Mathematically, the present disclosure forms new connections between machine learning and differential geometry. The new methods described herein can be used for (i) identifying vulnerabilities in neural networks and (ii) compensating for network damage in real-time through computationally efficient weight updates, enabling their rapid recovery. In some embodiments, these methods could be useful in a variety of practical applications as neural networks are increasingly deployed on edge devices with increased susceptibility to damage.
The field of artificial intelligence (AI) has grown by leaps and bounds in the last few years. As a result, AI is increasingly being built into many critical applications across society. Additionally, to cater to the rising need of AI systems for real-time applications, AI systems have been transitioning from cloud implementations to edge devices and neuromorphic hardware. Some of the real-time critical applications that have actively adopted AI systems include (1) decision making in the health-care industry, (2) real-time image and sensor analysis in self-driving cars, (3) incorporation into IoT sensors and devices installed in most households and (4) robotic control systems.
The failure of AI in any of these applications could be catastrophic. For instance, errors committed by AI systems while classifying radiology reports in the health-care industry, or the faulty real-time analysis of streams of images being processed by AI systems in self-driving cars, could lead to human casualties. Hence, it has become extremely important to understand how neural network architectures (performing critical applications) react to perturbations, which could arise from many sources. AI systems implemented on the cloud are vulnerable to DRAM (dynamic random-access memory) errors that can occur at surprising rates, either due to malicious attack or induced by high energy particles. Additionally, the growing implementation of AI networks on physical hardware (for instance, neuromorphic and edge devices) has made the discovery of damage-resilient networks and the rapid recovery of damaged networks a necessity.
The present disclosure lays down a mathematical framework to study the resilience and robustness of neural networks to damage and proposes algorithms to rapidly recover networks experiencing damage. In some embodiments, the methods and frameworks described herein can be extremely important for AI systems implemented across many applications, as damage to systems is inevitable and needs to be protected against. Although the resilience and robustness of AI systems is very important, there have been few principled studies of it. To reduce this gap in knowledge on the resilience of AI, a principled framework is disclosed herein to understand the vulnerabilities of AI networks. Exemplary applications include the design of damage-resilient networks and rapid recovery algorithms implemented on neuromorphic hardware. The methods and frameworks of the present disclosure can be important as neural networks are becoming ubiquitous across many applications, ranging from rovers sent to Mars to radiology applications.
This section provides a more detailed construction of the mathematical framework, the geodesic path optimization method, and information on the neural network architectures used in the numerical experiments performed and described herein. In the first sections, the definition of a Riemannian manifold (W, g) and several technical aspects of the metric are described. An issue is the extension of the construction to multiple input data points and the impact of this extension on the metric. Then the tangent space, the covariant derivative, and the geodesic damage compensation algorithm are formalized and discussed. In the last section, details are provided about the MLP and CNN neural networks used in numerical experiments.
Mathematical tools from differential geometry are applied to study the response of neural networks to weight perturbation. The fundamental construction is that a weight space, W, is considered to be a smooth manifold endowed with a Riemannian metric, g, so that the pair (W, g) is a Riemannian manifold. Following this construction, the analysis of local and global damage follows by using standard tools from differential geometry including the tangent space, the covariant derivative and the geodesic to analyze damage.
An aspect of this construction is that the weight space itself is considered to be the manifold, and a functional metric is pulled back onto W. The construction allows isolating the mathematical complexity concerning the definition of the neural network within the construction of the metric itself. Following the construction of the metric, network damage can be analyzed by applying the non-Euclidean metric tensor to calculate distances within W, where W is homeomorphic to standard Euclidean space. In what follows, the construction of the Riemannian manifold and how the mathematical properties of g as a positive (semi-)definite bilinear form arise are discussed.
A Riemannian manifold consists of a smooth topological manifold endowed with a Riemannian metric. A smooth topological manifold, $M$, is a locally Euclidean space. By locally Euclidean, it is meant that around every point $p \in M$ there is a function, $\phi$, that maps a neighborhood of $M$, $U$ where $p \in U \subset M$, to $\mathbb{R}^n$ ($\phi: U \to \mathbb{R}^n$), so that the collection $\{(U_\alpha, \phi_\alpha)\}$, known as an atlas, covers $M$. In the general case, many different open sets $U_\alpha$ may be needed to cover $M$. The case of weight space is quite convenient in that a single map, the identity map, gives an atlas for $W$. For a smooth manifold, each $\phi_\alpha$ must be a homeomorphism and so must be continuous, locally one-to-one, and have a continuous inverse. The weight space $W$ is homeomorphic (and diffeomorphic) to $\mathbb{R}^n$ by the identity map, and so $W$ is a smooth manifold. The simplicity of the manifold gives the present methods much of their practical power.
Now, a metric is introduced onto W that endows the manifold with a notion of distance that encapsulates the function properties of the underlying neural network. Intuitively, W can be thought of as becoming a curved space due to the influence of the functional properties of the neural network on the local structure of space. The approach has analogies with physical models where the path of a particle through an ambient space can be influenced by a metric which is the manifestation of a physical force like gravity. Neural networks can be viewed as dynamically moving along a smooth manifold whose notion of distance is functional.
Specifically, a neural network is considered to be a smooth, $C^\infty$ function $f(x, w)$ that maps an input vector, $x \in \mathbb{R}^k$, to an output vector, $f(x, w) = y \in \mathbb{R}^m$. The function $f(x, w)$ is parameterized by a vector of weights, $w \in \mathbb{R}^n$, that are typically set in training to solve a specific task. In general, several popular neural network functions like the rectified linear unit (ReLU) are not actually $C^\infty$ (they do not have continuous derivatives of all orders). For example, the ReLU function $h(x) = \max(x, 0)$ has a discontinuity at $h'(0)$. However, the function is commonly approximated by the soft-plus function $h(x) = \log(1 + \exp(x))$, which is $C^\infty$, and so there is not an issue.
The training data itself has an interesting and more subtle impact on the metric. To construct a metric on $W$, first consider the map generated by the network $f$ given a fixed data point $x$:

$$f_x: W \to F, \qquad f_x(w) = f(x, w).$$

This map is called the functional map. A specific example of such a map is that $x$ could be a specific vector of image data from MNIST, and $f$ maps this data to an $m = 10$ dimensional space that scores the image for each of the 10 possible handwritten digits. Globally, it is noted that $f$ in general will not be one-to-one or onto.

Locally, it is asked how the output of $f$ changes for an infinitesimal weight change:

$$f(x, w + dw) \approx f(x, w) + J\, dw = f(x, w) + \sum_i \frac{\partial f}{\partial w_i}\, dw_i,$$

where the $dw_i$ are taken to be standard basis vectors in $W$, and $J = \partial f / \partial w$ is the Jacobian matrix of partial derivatives. In general, $J$ will be an $m \times n$ matrix and, therefore, $\operatorname{rank}(J) \le \min(n, m)$, so that the rank of $J$ is determined by the number of weights and the number of output functions. A key difference between the present framework and classical settings in which differential geometry is applied is that, here, $n \ne m$. In fact, it will be a very special case that achieves an equality of weights and output functions.

To construct the metric, mean squared error may be used to measure the distance between functional outputs generated by the unperturbed and perturbed networks as

$$d(w, w + dw) = |f(x, w) - f(x, w + dw)|^2 = dw^T (J^T J)\, dw = dw^T g\, dw, \qquad (15)$$

where the local notion of distance is used to derive a metric, $g$, that converts local weight perturbations into a distance. $J$ and $g$ are fields that vary across $W$. The metric can be evaluated at a single location in weight space or as the method moves along a path through weight space.
Formally, the metric can be thought of as providing an inner product at every location in weight space. For general manifolds, the mathematical construction is to consider a tangent plane or tangent space at each point p ∈ W, and to imagine a plane that locally approximates a curved manifold at each point. In this case, the metric tensor provides a local inner product and hence a local notion of distance.
Therefore, an inner product on the tangent space at any point p ∈ W can be defined as ⟨u, v⟩_p = u^T g_p v,
where u, v are taken to be vectors in the tangent space, and g_p is used to indicate the metric evaluated at the point p. Formally, tangent vectors are typically constructed as local differential operators, but they can be viewed intuitively as small arrows anchored at p.
A Riemannian metric is an inner product that satisfies a set of conditions. The inner product must be symmetric, bilinear and positive definite. The positive definite condition can be relaxed through construction of a pseudo-metric. The inner product provides the familiar notions of distance that exist in classical Euclidean spaces.
In general, the notion of a metric is separate from its representation as a matrix, but there is a natural map between inner products and matrices that may be exploited. The metric satisfies symmetry and linearity through the definition of the metric as a product of the Jacobian matrix and its transpose. Linearity is a natural consequence of standard matrix operations. In the case of symmetry, ⟨u, v⟩_p = u^T J^T J v = (J u)^T (J v) = v^T J^T J u = ⟨v, u⟩_p. Therefore, the metric is, in general, both symmetric and linear in its arguments.
However, the positive definiteness of the metric is determined by the rank of the Jacobian matrix, J. In the typical case n > m, and the rank of the Jacobian matrix will be limited by m. The metric g, when viewed as a local bilinear form or as an n×n matrix, will not be full rank and will be a pseudo-metric. The metric can be analyzed by considering its representation as a matrix, and thus tools from linear algebra can be applied. In general, a matrix A ∈ ℝ^{n×n} is positive definite if x^T A x > 0 for all x ∈ ℝ^n, x ≠ 0, or equivalently if all eigenvalues of A satisfy λ_i > 0. Alternately, a positive semi-definite matrix A has x^T A x ≥ 0 for all x ∈ ℝ^n and λ_i ≥ 0 for all i.
Since g is the product J^T J, g has λ_i ≥ 0, as can be seen simply by considering the singular value decomposition of J. However, the matrix rank of g at a point on the manifold is similarly bounded by the rank of J, and rank(g) = rank(J). Therefore, g can have k eigenvalues that are identically zero (λ_i = 0), where k = n − rank(J) ≥ n − m, so that, in general, a metric constructed based on a single training example is not positive definite but positive semi-definite. The key results can be applied to both Riemannian manifolds and pseudo-Riemannian manifolds. However, the formal derivation of the geodesic equation requires calculation of an inverse g^{ij} of the metric. The geodesic equation is not explicitly used here, but constructing a positive definite metric can be of interest when applying the framework.
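The spectral properties discussed above can be checked directly; in this sketch a random m×n matrix stands in for the Jacobian J at a single point (an illustrative assumption):

import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 12                      # outputs and weights, with n > m
J = rng.normal(size=(m, n))       # stand-in for the Jacobian at one point
g = J.T @ J

print(np.linalg.eigvalsh(g).min())   # eigenvalues are >= 0 up to round-off
print(np.linalg.matrix_rank(g))      # rank(g) = rank(J) <= m, so at least n - m zero eigenvalues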
The rank of the metric can be increased by extending the construction to multiple data points. A set of data examples, X, can be considered so that x_i ∈ X. For a single example, the neural network function f generates an output f(x_i, w) ∈ ℝ^m, and the functional map for that single example is denoted F_{x_i}. Each F_{x_i} has an associated Jacobian J_{x_i} and metric g_{x_i} evaluated at w.
The construction generalizes the notion of functional distance, so that the functional distance now involves a sum over all x_i ∈ X, d(w, w + dw) = Σ_{x_i ∈ X} Σ_j |f_j(x_i, w) − f_j(x_i, w + dw)|^2, where the sum is performed over the set of input vectors x_i ∈ X and over all components j of the output.
The form of the metric tensor also has a natural generalization to the case of multiple input data points, and simply becomes a sum, g = Σ_{x_i ∈ X} g_{x_i}, where each g_{x_i} = J_{x_i}^T J_{x_i} is the metric computed for the single data point x_i.
The result can be important in applications because the rank of g is influenced both by inherent properties of the neural network at a point in weight space and by the number of training examples. When n > m×p (with p the number of training examples), the Jacobian matrix is not full rank, and so the rank of the metric is data limited. When m×p > n, the Jacobian matrix can still contain degenerate directions due to the geometry of the function f. In some embodiments, it is the curvature of f that is examined, and so the option of saturating the rank of the metric is needed. Numerically, an example is shown in
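A brief numerical sketch of this rank-saturation behavior is given below; the random per-example Jacobians are stand-ins for the J_{x_i} of an actual network (an illustrative assumption):

import numpy as np

rng = np.random.default_rng(2)
m, n, p = 3, 12, 8                 # output dimension, number of weights, number of examples

g_total = np.zeros((n, n))
for _ in range(p):
    J_i = rng.normal(size=(m, n))  # stand-in for the per-example Jacobian J_{x_i}
    g_total += J_i.T @ J_i         # g = sum over examples of g_{x_i}
    # the rank of g grows with the number of examples, up to min(n, m * p)

print(np.linalg.matrix_rank(g_total))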
A central insight in differential geometry is that the structure of a manifold can be analyzed by considering the tangent space at each point on the manifold, as well as the properties of the Riemannian metric when restricted to that tangent space. Intuitively, the tangent space is a local linear approximation of a manifold at a single point. The Riemannian metric yields an inner product that allows for calculation of the length of tangent vectors within the tangent plane or space. By calculating inner products within weight space W, the functional response of a network to a local weight perturbation can be determined.
The tangent space of W at a point w, denoted T_w(W), can be constructed by considering a set of local tangent vectors at that point. Tangent spaces carry the algebraic structure of vector spaces. Tangent vectors can be intuitively viewed as tangent arrows anchored at a specific point on the manifold, but are formally defined as a set of local differential operators. For the weight space W, the set of local operators {∂/∂w_i}, i = 1, …, n, is a basis for the tangent space, and these differential operators provide a basis vector e_i for perturbing each individual weight. The local differential operators can be thought of as local perturbation operators which carry information about how infinitesimal weight changes impact the functional performance of a network. Formally, the Riemannian metric, g, then defines an inner product ⟨e_i, e_j⟩_g for each pair of basis tangent vectors. The inner product ⟨e_i, e_j⟩_g quantifies the functional effect of infinitesimal perturbations applied along weights i and j.
To better describe geometric objects on a manifold, the concept of differentiation on a manifold that is independent of local charts is developed; that is, a derivative operator whose components transform like a tensor is developed. In order to define such a derivative operator, it is desirable to be able to compare vectors and tensors based at different points on the manifold. The machinery that is used is called an “affine connection.”
The affine connection ∇ is a differential operator that allows the following to be defined:
A connection on a differentiable manifold can be defined as follows.
Connection:
Let E→M be a smooth vector bundle over a differentiable manifold M. Denote the space of smooth sections of E by Γ(E). A connection on E is a linear map
∇: Γ(E)→Γ(E⊗T*(M)) (23)
such that, for any section σ ∈ Γ(E) and any smooth function f on M,
∇(σ f) = (∇σ) f + σ ⊗ df. (24)
Affine Connection:
Let M be a smooth manifold and let Γ(TM) be the space of vector fields on M. The affine connection on M is a bilinear map
Γ(TM)×Γ(TM)→Γ(TM), (25)
which maps
(X, Y) ↦ ∇_X Y. (26)
Tangent Bundle:
The tangent bundle of M is defined as the union of all tangent spaces: TM = ∪_{p ∈ M} T_p(M).
A tangent vector ν plays the role of a directional derivative, with νf meaning the derivative of a smooth function f along the direction ν. A smooth vector field X is defined as a cross-section of the tangent bundle.
Riemannian Metric Connection:
A special case is the Riemannian (metric) connection: a given connection ∇ is Riemannian if and only if it is compatible with the metric, that is, ∇g = 0.
With these definitions in place, a concept of differentiation on a manifold that is chart independent can be defined: a derivative operator whose components transform like a tensor.
In an arbitrary basis {e_μ}, the object ∇_{e_μ} e_ν is considered. Since this is a vector field for each e_μ and e_ν, it can be expanded in the basis as ∇_{e_μ} e_ν = Γ^λ_{νμ} e_λ,
where the Γ^λ_{νμ} are the connection coefficients and are not the components of a tensor.
Write X = X^μ e_μ, Y = Y^μ e_μ, which gives ∇_X Y = X^μ (e_μ(Y^λ) + Γ^λ_{νμ} Y^ν) e_λ.
To see how the Γ^λ_{νμ} transform under a change of basis e_{μ′} = Λ^μ_{μ′} e_μ, one can expand ∇_{e_{μ′}} e_{ν′} in both the primed and unprimed bases and compare coefficients. It follows from the above derivation that
Γ^{λ′}_{ν′μ′} Λ^α_{λ′} Λ^{γ′}_α = Λ^{γ′}_α Λ^λ_{μ′} e_λ(Λ^α_{ν′}) + Λ^{γ′}_α Λ^λ_{μ′} Λ^β_{ν′} Γ^α_{βλ}. (35)
Since Λ^α_{λ′} Λ^{γ′}_α = δ^{γ′}_{λ′}, the following is obtained
Γ^{γ′}_{ν′μ′} = Λ^{γ′}_α Λ^λ_{μ′} e_λ(Λ^α_{ν′}) + Λ^{γ′}_α Λ^λ_{μ′} Λ^β_{ν′} Γ^α_{βλ}. (36)
The inhomogeneous first term shows explicitly that the connection coefficients do not transform as the components of a tensor.
Let ∇ and ∇̃ be two connections on M. The difference
D(X, Y) = ∇_X Y − ∇̃_X Y (37)
is always a tensor.
As discussed above, break-down acceleration along a curved path can be analyzed using the covariant derivative of the velocity field along the path, ∇_{γ̇(t)} v(t).
Consider a point P and a neighboring point Q on the damage manifold, where Q is at a parameter distance Δt from P along a curve γ. Let v(t) and v(t + Δt) be members of the vector field at P and Q, respectively. A new vector field v₀ can be defined which equals v(t) and is parallel-transported along γ. The covariant derivative of v(t) at P can then be expressed as the limit, as Δt → 0, of the difference between v(t + Δt) and the parallel-transported field v₀, divided by Δt.
Break-down acceleration can also be calculated conveniently from the definition of the Riemannian metric as an inner product by considering a path γ(t) ∈ W, t ∈ [0, 1], and velocity vectors v(t) = dγ/dt calculated at different points in time. The break-down speed is defined along the path as the inner product of the tangent vector with itself under the metric, s(t) = ⟨v(t), v(t)⟩_{γ(t)}. It is noted that speed is typically defined as the square root of such an inner product, but speed is defined as above due to the squared loss function, which differs from the traditional Euclidean distance that provides the convention for speed.
In the present definition, acceleration is the rate of change of the break-down speed along the path, a(t) = ds/dt.
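The break-down speed and acceleration along a discretized path can be sketched as follows; the finite-difference velocities, the fixed stand-in Jacobian, and the treatment of acceleration as the discrete rate of change of speed are illustrative assumptions:

import numpy as np

def breakdown_speed_and_acceleration(path, J_fn):
    # path: array of shape (T, n) holding weight vectors gamma(t) sampled along the path
    # J_fn: callable returning the Jacobian of the network outputs at a given weight vector
    v = np.diff(path, axis=0)                  # finite-difference velocity vectors
    speeds = []
    for w_t, v_t in zip(path[:-1], v):
        J = J_fn(w_t)
        g = J.T @ J                            # metric g = J^T J at this point
        speeds.append(v_t @ g @ v_t)           # break-down speed <v, v>_g
    speeds = np.array(speeds)
    accel = np.diff(speeds)                    # discrete break-down acceleration
    return speeds, accel

rng = np.random.default_rng(3)
J_fixed = rng.normal(size=(3, 12))             # stand-in Jacobian, held fixed for simplicity
path = np.linspace(rng.normal(size=12), rng.normal(size=12), 20)   # straight path in W
s, a = breakdown_speed_and_acceleration(path, lambda w: J_fixed)
print(s.shape, a.shape)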
On the damage manifold, there is a need to develop a generalization of a straight line in a flat space. In curved spaces, a geodesic is a path that parallel transports its own tangent vectors.
For two distinct points P and Q on the damage manifold M, the geodesic connecting P and Q is defined to be the curve with minimal arclength that passes through both points. More rigorously, it satisfies the geodesic equation below.
Geodesic Equation:
To find geodesic recovery paths on W, the geodesic equation given by Equation 42 can be solved:
d²w^η/dt² + Γ^η_{μν} (dw^μ/dt)(dw^ν/dt) = 0, (42)
where w^η denotes the ηth coordinate (basis direction) of the weight space W, and Γ^η_{μν} specifies the Christoffel symbols on the manifold. The Christoffel symbols capture infinitesimal changes in the metric tensor (g) along a set of directions in the manifold. They are computed by setting the covariant derivative of the metric tensor along a path specified by γ(t) to zero. Specifically, geodesic paths γ(t) can be computed so that γ(0) = w_t and γ(1) ∈ W_d, where W_d is the damage hyper-plane. The damage hyper-plane is the set W_d = {w ∈ W : w_i = 0, ∀ i ∈ n_damage} ⊂ W of all networks that are consistent with a given configuration of weight damage. Thus, paths can be found through weight space that achieve a given configuration of damage while maximizing network performance.
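For reference, the standard coordinate expression for the Christoffel symbols in terms of the metric and its inverse, consistent with the metric-compatibility condition described above, is the textbook formula below (the disclosure's own computation may be organized differently):

Γ^η_{μν} = (1/2) g^{ηλ} ( ∂_μ g_{λν} + ∂_ν g_{λμ} − ∂_λ g_{μν} ),

where g^{ηλ} denotes the inverse of the metric tensor and ∂_μ denotes the partial derivative with respect to the μth weight coordinate.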
As the computation of the Christoffel symbols is intensive in both memory and computation, an optimization algorithm is described herein to evaluate an ‘approximate’ geodesic in the manifold.
Given a trained network, the method updates the weights of the network to optimize performance given a direction of damage. To discover a geodesic path γ(t), the method can begin at a trained network and iteratively solve for the tangent vector θ(w) at every point w = γ(t) along the path, starting from w_t and terminating at the damage hyperplane, W_d. The damage hyperplane is the set of all networks w ∈ W such that w_i = 0 for i ∈ n_damaged. Specifically, the following is solved:
argmin_{θ(w)}  ⟨θ(w), θ(w)⟩_w − β θ(w)^T v_w  subject to: θ(w)^T θ(w) ≤ 0.01. (43)
The tangent vector θ(w) can be obtained by simultaneously optimizing two objective functions: (1) minimizing the increase in functional distance along the path, measured by the metric tensor g_w [min: ⟨θ(w), θ(w)⟩_w = θ(w)^T g_w θ(w)], and (2) maximizing the dot product between the tangent vector and v_w, a vector pointing in the direction of the damage hyperplane [max: θ(w)^T v_w], to enable movement towards the damage hyperplane.
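A compact sketch of one step of this tangent-vector computation is given below; the ridge term used to handle the rank-deficient metric and the rescaling used to enforce the norm constraint are illustrative choices, not necessarily the exact procedure of the disclosure:

import numpy as np

def geodesic_step(w, J_fn, v_w, beta=0.1, max_norm_sq=0.01, mu=1e-3):
    # One step along the path: choose a tangent vector theta that keeps the
    # functional change theta^T g theta small while moving toward the damage
    # hyperplane along v_w (cf. Equation 43).
    J = J_fn(w)
    g = J.T @ J
    # Unconstrained minimizer of theta^T (g + mu I) theta - beta * theta^T v_w
    theta = 0.5 * beta * np.linalg.solve(g + mu * np.eye(w.size), v_w)
    # Rescale if the constraint theta^T theta <= max_norm_sq is violated
    norm_sq = theta @ theta
    if norm_sq > max_norm_sq:
        theta = theta * np.sqrt(max_norm_sq / norm_sq)
    return w + theta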
For small networks (with a small number of parameters), the tangent vector of the curved path in the manifold can be evaluated by re-evaluating the metric tensor along discrete steps on the manifold. However, as the size of the metric tensor scales as the square of the number of parameters in the network, estimating the metric tensor can be memory intensive for a large network (like VGG-11) with 128 million parameters. A method for finding ‘approximate’ geodesic paths for larger networks, which traverse from a well-trained network to networks on a specified damage hyperplane, is stated in terms of the optimization procedure elaborated below:
Taylor Expansion of Loss Function:
To first order, the loss can be expanded around the current weights as L(w + θ(w)) ≈ L(w) + ∇_w L(w)^T θ(w),
where ∇_w L(w) is the gradient of the loss function with respect to the parameters of the network.
The following are optimized: the first-order change in the loss along the tangent vector, and the alignment of the tangent vector with the direction v_w toward the damage hyperplane.
Combining the two functions to be optimized, along with the constraint, using the Lagrange multiplier formulation:
v_w is the direction pointing towards the damage hyper-plane of interest.
Solving the Lagrange equations:
which gives:
Substituting into the second equation to evaluate μ:
For the paths evaluated above, β is 0.1, and the learning rate varies between 0.001 and 0.1. In some embodiments, β can be, or be about, 0.01, 0.02, 0.03, 0.04, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, or more. In some embodiments, the learning rate can be, or be about, 0.001, 0.002, 0.003, 0.004, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, or more.
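For large networks, one plausible reading of this gradient-based approximation is sketched below in PyTorch; the way the two objectives are combined into a single update, and the specific values of beta and the learning rate, are illustrative assumptions rather than the disclosure's exact procedure:

import torch

def approximate_geodesic_step(w, loss_fn, v_w, beta=0.1, lr=0.01):
    # w: flat parameter vector with requires_grad=True
    # loss_fn: callable mapping the flat parameter vector to a scalar loss
    # v_w: vector pointing toward the damage hyperplane of interest
    loss = loss_fn(w)
    (grad,) = torch.autograd.grad(loss, w)
    # First-order Taylor expansion: loss(w + theta) ~= loss(w) + grad^T theta,
    # so stepping against the gradient limits the first-order increase in loss
    # while the beta * v_w term moves the weights toward the damage hyperplane.
    theta = beta * v_w - lr * grad
    return (w + theta).detach().requires_grad_(True)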
In
Throughout the present disclosure, two types of neural networks have been used: (i) Multi-layer perceptrons (MLP) and (ii) Convolutional neural networks (CNN). Both the MLP and CNN networks use ReLU non-linearities while forward-propagating inputs through the network.
Two variants of MLPs were used: (i) an MLP with 1 hidden layer, referred to as MLP-1, and (ii) an MLP with 2 hidden layers, referred to as MLP-2. The MLPs are trained to perform image classification on MNIST.
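The two MLP variants can be written, for example, as follows in PyTorch; the hidden-layer width of 128 is an illustrative assumption, since it is not fixed here:

import torch.nn as nn

class MLP1(nn.Module):
    # MLP with one hidden layer (MLP-1), for 28 x 28 MNIST images and 10 classes
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, hidden), nn.ReLU(),
            nn.Linear(hidden, 10),
        )

    def forward(self, x):
        return self.net(x)

class MLP2(nn.Module):
    # MLP with two hidden layers (MLP-2)
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 10),
        )

    def forward(self, x):
        return self.net(x)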
VGG-11 with batch-norm was used to perform image classification on CIFAR-10. A VGG-11 model with batch-norm, pre-trained on CIFAR-10, was obtained for analysis.
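One way to obtain such a model with torchvision is sketched below; the checkpoint path is hypothetical, and adapting the ImageNet-style classifier head to CIFAR-10's 10 classes is an illustrative choice:

import torch
import torchvision

model = torchvision.models.vgg11_bn(weights=None)               # VGG-11 with batch-norm
model.classifier[-1] = torch.nn.Linear(4096, 10)                # 10 output classes for CIFAR-10
state = torch.load("vgg11_bn_cifar10.pt", map_location="cpu")   # hypothetical CIFAR-10 checkpoint
model.load_state_dict(state)
model.eval()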
Although the methods, approaches, algorithms, frameworks, and mathematical formulations and derivations herein are described with reference to a neural network, it is for illustration only and is not intended to be limiting. The methods, approaches, algorithms, frameworks, and mathematical formulations and derivations herein can be applied to a machine learning model in general.
Disclosed herein include methods for updating weights of a machine learning model. Any of the methods for updating weights of a machine learning model can be performed by or using the computing device 8800 described with reference to
In some embodiments, a method for updating weights of a machine learning model (e.g., a neural network) comprises: providing (or receiving) a machine learning model comprising a plurality of weights. One or more weights of the plurality of weights of the machine learning model can be damaged. Alternatively or additionally, one or more nodes of a plurality of nodes of the machine learning model can be damaged. The method can comprise: determining first updated weights corresponding to one or more weights of the plurality of weights of the machine learning model that are undamaged using a geodesic path in a weight space comprising the plurality of weights of the machine learning model. The method can comprise: updating the weights that are undamaged with the first updated weights to generate a first updated machine learning model.
In some embodiments, a method for updating weights of a machine learning model (e.g., a neural network) comprises: providing (or receiving) a machine learning model comprising a plurality of weights. One or more first weights of the plurality of weights of the machine learning model can be damaged. Alternatively or additionally, one or more first nodes of a plurality of nodes of the machine learning model can be damaged. The method can comprise: determining first updated weights corresponding to one or more weights of the plurality of weights of the machine learning model that are undamaged using a geodesic path in a weight space comprising the plurality of weights of the machine learning model. The method can comprise: updating the weights of the machine learning model that are undamaged with the first updated weights to generate a first updated machine learning model. Second weights of the plurality of weights of the first updated machine learning model may be damaged. Alternatively or additionally, one or more second nodes of the plurality of nodes of the machine learning model can be damaged. The method can comprise: determining second updated weights corresponding to one or more weights of the plurality of weights of the first updated machine learning model that are undamaged using a geodesic path in the weight space. The method can comprise: updating the weights of the first updated machine learning model that are undamaged with the second updated weights to generate a second updated machine learning model.
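A high-level sketch of such a recovery loop is shown below; the helper geodesic_update is a hypothetical placeholder standing in for one step of the (approximate) geodesic path computation described earlier:

import numpy as np

def recover_from_damage(weights, damaged_idx, geodesic_update, n_steps=50):
    # weights: flat vector of model weights
    # damaged_idx: indices of the damaged weights
    # geodesic_update: callable that returns updated weights given the current
    #   weights and the damage configuration (one step toward the damage hyperplane)
    w = np.array(weights, dtype=float)
    w[damaged_idx] = 0.0                      # damaged weights are set to zero
    for _ in range(n_steps):
        w = geodesic_update(w, damaged_idx)   # update the undamaged weights
        w[damaged_idx] = 0.0                  # keep the damaged weights clamped at zero
    return w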
In some embodiments, determining the first updated weights (or any updated weights of the present disclosure) comprises determining the geodesic path using a geodesic equation. In some embodiments, determining the first updated weights (or any updated weights of the present disclosure) comprises determining an approximation of the geodesic path using an approximation of the geodesic equation. The approximation of the geodesic equation can comprise a first order expansion of a loss function, optionally wherein the first order expansion comprises a Taylor expansion. Determining the first updated weights (or any updated weights of the present disclosure) can comprise determining the approximation of the geodesic equation using a metric (or a metric tensor). The metric can comprise a Riemannian metric, a pseudo-Riemannian metric, or a non-Euclidean metric. The combination of the weight space and the metric can comprise a Riemannian manifold or a pseudo-Riemannian manifold. The metric can comprise a positive semi-definite, symmetric matrix or a positive definite, symmetric matrix. The metric tensor can comprise a symmetric matrix, wherein the metric tensor is definite or semi-definite, wherein the metric is bilinear, and/or wherein the metric tensor is positive, or a combination thereof. The weight space can comprise a manifold, wherein the weight space comprises a smooth manifold, and/or wherein the weight space is homeomorphic to a Euclidean space.
In some embodiments, determining the first updated weights (or any updated weights of the present disclosure) comprises: determining a plurality of approximations of the geodesic path using an approximation of the geodesic equation. Determining the first updated weights (or any updated weights of the present disclosure) can comprise: selecting one of the plurality of approximations of the geodesic path as a best approximation of the geodesic path. The best approximation of the geodesic path can have a shortest total length amongst the plurality of approximations of the geodesic path to a damage hyperplane.
In some embodiments, the method comprises, prior to determining the one or more weights are damaged (or determining the one or more nodes are damaged): receiving a first input. The method can comprise: determining a first output from the first input using the machine learning model. In some embodiments, determining the first output from the first input using the machine learning model (or any output from any input using any machine learning model of the present disclosure) corresponds to a task. The task comprises a computation processing task, an information processing task, a sensory input processing task, a storage task, a retrieval task, a decision task, an image recognition task, and/or a speech recognition task. In some embodiments, the first input comprises an image. The task can comprise an image recognition task.
In some embodiments, the method comprises, subsequent to updating the weights that are undamaged with the first updated weights: receiving a second input. The method can comprise: determining a second output from the second input using the first updated machine learning model.
In some embodiments, determining the first updated weights and updating the weights that are undamaged with the first updated weights are performed iteratively for a number of iterations (or epochs), such as 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 25, 30, 35, 40, 45, 50, or more iterations (or epochs). In some embodiments, the method comprises, subsequent to updating the weights that are undamaged with the first updated weights: determining second updated weights corresponding to second weights of the plurality of weights of the machine learning model that are undamaged using the geodesic path in the weight space. The method can comprise: updating the second weights that are undamaged with the second updated weights to generate a second updated machine learning model. In some embodiments, the second updated machine learning model (or any machine learning model of the present disclosure) is on a damage hyperplane. In some embodiments, the first updated machine learning model (or any machine learning model of the present disclosure) is on a damage hyperplane. In some embodiments, the method comprises, subsequent to updating the second weights that are undamaged with the second updated weights: receiving a third input. The method can comprise: determining a third output from the third input using the second updated machine learning model.
In some embodiments, the machine learning model when provided comprises no weight that is damaged. In some embodiments, the machine learning model when provided comprises at least one weight that is damaged. The number of weights damaged can be or be about, for example, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, 5000, 10000, or more or less. The percentage of weights that are damaged can comprise at least 5% (or 0.01%, 0.02%, 0.03%, 0.04%, 0.05%, 1%, 2%, 3%, 4%, 5%, 10%, 15%, 20%, 25%, 30%, 40%, 50%, or more or less) of the plurality of weights of the machine learning model. In some embodiments, one or more of the one or more weights have values other than zeros when undamaged. In some embodiments, one or more of the one or more weights have values of zeros when damaged. In some embodiments, the method comprises setting the weights that are damaged to values of zeros. The number of nodes damaged can be or be about, for example, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, 5000, 10000, or more or less. The percentage of nodes that are damaged can comprise at least 5% (or 0.01%, 0.02%, 0.03%, 0.04%, 0.05%, 1%, 2%, 3%, 4%, 5%, 10%, 15%, 20%, 25%, 30%, 40%, 50%, or more or less) of the plurality of nodes of the machine learning model.
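Simulating damage by zeroing a chosen fraction of the weights can be sketched as follows; the 5% fraction matches one of the example percentages above, and the random selection of damaged indices is an illustrative choice:

import numpy as np

def damage_weights(weights, fraction=0.05, rng=None):
    # Zero a randomly chosen fraction of the weights to simulate damage
    rng = rng or np.random.default_rng()
    w = np.array(weights, dtype=float)
    n_damaged = int(fraction * w.size)
    damaged_idx = rng.choice(w.size, size=n_damaged, replace=False)
    w[damaged_idx] = 0.0
    return w, damaged_idx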
In some embodiments, an accuracy of the machine learning model comprising no weight that is damaged is at least 90% (or at least 70%, 75%, 80%, 85%, 90%, 95% or more or less). In some embodiments, an accuracy of the machine learning model comprising the weights that are damaged is at most 80% (or at most 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80% or more or less). In some embodiments, an accuracy of the machine learning model comprising the weights that are damaged is at most 90% (or at least 70%, 75%, 80%, 85%, 90%, 95% or more or less) of an accuracy of the machine learning model comprising no weight that is damaged. In some embodiments, an accuracy of the first updated machine learning model is at least 85% (or at least 70%, 75%, 80%, 85%, 90%, 95% or more or less). In some embodiments, an accuracy of the machine learning model comprising the weights that are damaged is at most 90% (or at least 70%, 75%, 80%, 85%, 90%, 95% or more or less) of an accuracy of the first updated machine learning model. In some embodiments, an accuracy of the first updated machine learning model is at most 99% (or 85%, 90%, 95%, 96%, 97%, 98%, 99%, 99.9%, or more or less) of an accuracy of the second updated machine learning model. The number of the weights of the plurality of weights of the machine learning model that are damaged can be or be about, for example, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, 5000, 10000 or more or less. The weights of the plurality of weights of the machine learning model that are damaged can comprise at least 5% (or 0.01%, 0.02%, 0.03%, 0.04%, 0.05%, 1%, 2%, 3%, 4%, 5%, 10%, 15%, 20%, 25%, 30%, 40%, 50%, or more or less) of the plurality of weights of the machine learning model.
In some embodiments, the machine learning model (or a layer of the machine learning model) comprises at least 100 weights (or at least 50, 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, 5000, 10000, 20000, 30000, 40000, 100000, or more or less, weights). In some embodiments, the machine learning model (or a layer of the machine learning model) comprises at least 25 nodes (or 20, 25, 30, 40, 50, 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, 10000, or more or less, nodes). In some embodiments, the machine learning model comprises at least 2 layers (or at least 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 100, 200, 300, 400, 500, or more or less). In some embodiments, the machine learning model comprises a convolutional machine learning model (CNN), a deep machine learning model (DNN), a multilayer perceptron (MLP), or a combination thereof.
Resilience determination and/or damage recovery can be performed on machine learning models. A machine learning model can be, for example, a neural network (NN), a convolutional neural network (CNN), a deep neural network (DNN), or a multilayer perceptron. The computing device 8800 described with reference to
A layer of a neural network (NN), such as a deep neural network (DNN) can apply a linear or non-linear transformation to its input to generate its output. A neural network layer can be a normalization layer, a convolutional layer, a softsign layer, a rectified linear layer, a concatenation layer, a pooling layer, a recurrent layer, an inception-like layer, or any combination thereof. The normalization layer can normalize the brightness of its input to generate its output with, for example, L2 normalization. The normalization layer can, for example, normalize the brightness of a plurality of images with respect to one another at once to generate a plurality of normalized images as its output. Non-limiting examples of methods for normalizing brightness include local contrast normalization (LCN) or local response normalization (LRN). Local contrast normalization can normalize the contrast of an image non-linearly by normalizing local regions of the image on a per pixel basis to have a mean of zero and a variance of one (or other values of mean and variance). Local response normalization can normalize an image over local input regions to have a mean of zero and a variance of one (or other values of mean and variance). The normalization layer may speed up the training process.
A convolutional neural network (CNN) can be a NN with one or more convolutional layers, such as 5, 6, 7, 8, 9, 10, or more. The convolutional layer can apply a set of kernels that convolve its input to generate its output. The softsign layer can apply a softsign function to its input. The softsign function (softsign(x)) can be, for example, (x/(1+|x|)). The softsign layer may neglect the impact of per-element outliers. The rectified linear layer can be a rectified linear unit (ReLU) or a parameterized rectified linear unit (PReLU). The ReLU layer can apply a ReLU function to its input to generate its output. The ReLU function ReLU(x) can be, for example, max(0, x). The PReLU layer can apply a PReLU function to its input to generate its output. The PReLU function PReLU(x) can be, for example, x if x≥0 and αx if x<0, where α is a positive number. The concatenation layer can concatenate its input to generate its output. For example, the concatenation layer can concatenate four 10×10 images to generate one 20×20 image. The pooling layer can apply a pooling function which down samples its input to generate its output. For example, the pooling layer can down sample a 20×20 image into a 10×10 image. Non-limiting examples of the pooling function include maximum pooling, average pooling, or minimum pooling.
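A few of the layer operations described above can be illustrated directly; these one-line NumPy versions are simplified sketches of the corresponding layer functions, not library implementations:

import numpy as np

def softsign(x):
    return x / (1.0 + np.abs(x))

def prelu(x, alpha=0.25):
    # alpha is a positive slope applied to negative inputs
    return np.where(x >= 0, x, alpha * x)

def max_pool_2x2(img):
    # Down-sample an H x W array by taking the maximum over 2 x 2 blocks,
    # e.g., a 20 x 20 image becomes a 10 x 10 image
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))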
At a time point t, the recurrent layer can compute a hidden state s(t), and a recurrent connection can provide the hidden state s(t) at time t to the recurrent layer as an input at a subsequent time point t+1. The recurrent layer can compute its output at time t+1 based on the hidden state s(t) at time t. For example, the recurrent layer can apply the softsign function to the hidden state s(t) at time t to compute its output at time t+1. The hidden state of the recurrent layer at time t+1 has as its input the hidden state s(t) of the recurrent layer at time t. The recurrent layer can compute the hidden state s(t+1) by applying, for example, a ReLU function to its input. The inception-like layer can include one or more of the normalization layer, the convolutional layer, the softsign layer, the rectified linear layer such as the ReLU layer and the PReLU layer, the concatenation layer, the pooling layer, or any combination thereof.
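The recurrent update described above can be sketched as follows; the weight matrices W_h and W_x, and the choice of ReLU for the hidden-state update with softsign for the output, are illustrative assumptions consistent with the description:

import numpy as np

def recurrent_step(s_t, x_t, W_h, W_x):
    # Hidden-state update s(t + 1) = ReLU(W_h @ s(t) + W_x @ x(t))
    s_next = np.maximum(0.0, W_h @ s_t + W_x @ x_t)
    # Output at time t + 1 computed from the hidden state with the softsign function
    y_next = s_next / (1.0 + np.abs(s_next))
    return s_next, y_next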
The number of layers in the NN can be different in different implementations. For example, the number of layers in a NN can be 10, 20, 30, 40, or more. For example, the number of layers in the DNN can be 50, 100, 200, or more. The input type of a deep neural network layer can be different in different implementations. For example, a layer can receive the outputs of a number of layers as its input. The input of a layer can include the outputs of five layers. As another example, the input of a layer can include 1% of the layers of the NN. The output of a layer can be the inputs of a number of layers. For example, the output of a layer can be used as the inputs of five layers. As another example, the output of a layer can be used as the inputs of 1% of the layers of the NN.
The input size or the output size of a layer can be quite large. The input size or the output size of a layer can be n×m, where n denotes the width and m denotes the height of the input or the output. For example, n or m can be 11, 21, 31, or more. The channel sizes of the input or the output of a layer can be different in different implementations. For example, the channel size of the input or the output of a layer can be 4, 16, 32, 64, 128, or more. The kernel size of a layer can be different in different implementations. For example, the kernel size can be n×m, where n denotes the width and m denotes the height of the kernel. For example, n or m can be 5, 7, 9, or more. The stride size of a layer can be different in different implementations. For example, the stride size of a deep neural network layer can be 3, 5, 7 or more.
In some embodiments, a NN can refer to a plurality of NNs that together compute an output of the NN. Different NNs of the plurality of NNs can be trained for different tasks. Outputs of NNs of the plurality of NNs can be computed to determine an output of the NN. For example, an output of a NN of the plurality of NNs can include a likelihood score. The output of the NN including the plurality of NNs can be determined based on the likelihood scores of the outputs of different NNs of the plurality of NNs.
Non-limiting examples of machine learning models include scale-invariant feature transform (SIFT), speeded up robust features (SURF), oriented FAST and rotated BRIEF (ORB), binary robust invariant scalable keypoints (BRISK), fast retina keypoint (FREAK), Viola-Jones algorithm, Eigenfaces approach, Lucas-Kanade algorithm, Horn-Schunk algorithm, Mean-shift algorithm, visual simultaneous location and mapping (vSLAM) techniques, a sequential Bayesian estimator (e.g., Kalman filter, extended Kalman filter, etc.), bundle adjustment, adaptive thresholding (and other thresholding techniques), Iterative Closest Point (ICP), Semi Global Matching (SGM), Semi Global Block Matching (SGBM), Feature Point Histograms, various machine learning algorithms (such as e.g., support vector machine, k-nearest neighbors algorithm, Naive Bayes, neural network (including convolutional or deep neural networks), or other supervised/unsupervised models, etc.), and so forth.
Some examples of machine learning models can include supervised or non-supervised machine learning, including regression models (such as, for example, Ordinary Least Squares Regression), instance-based models (such as, for example, Learning Vector Quantization), decision tree models (such as, for example, classification and regression trees), Bayesian models (such as, for example, Naive Bayes), clustering models (such as, for example, k-means clustering), association rule learning models (such as, for example, a-priori models), artificial neural network models (such as, for example, Perceptron), deep learning models (such as, for example, Deep Boltzmann Machine, or deep neural network), dimensionality reduction models (such as, for example, Principal Component Analysis), ensemble models (such as, for example, Stacked Generalization), and/or other machine learning models.
The memory 8870 may contain computer program instructions (grouped as modules or components in some embodiments) that the processing unit 8810 executes in order to implement one or more embodiments. The memory 8870 generally includes RAM, ROM and/or other persistent, auxiliary or non-transitory computer-readable media. The memory 8870 may store an operating system 8872 that provides computer program instructions for use by the processing unit 8810 in the general administration and operation of the computing device 8800. The memory 8870 may further include computer program instructions and other information for implementing aspects of the present disclosure.
For example, in one embodiment, the memory 8870 includes a resilience determination module 8874 for determining resiliency of a machine learning model (e.g., a neural network). The memory 8870 may additionally or alternatively include a damage recovery module 8876 for determining and updating damaged weights. In addition, memory 8870 may include or communicate with the data store 8890 and/or one or more other data stores that store a machine learning model (e.g., a neural network) with or without damaged weights and/or a machine learning model with updated weights.
In at least some of the previously described embodiments, one or more elements used in an embodiment can interchangeably be used in another embodiment unless such a replacement is not technically feasible. It will be appreciated by those skilled in the art that various other omissions, additions and modifications may be made to the methods and structures described above without departing from the scope of the claimed subject matter. All such modifications and changes are intended to fall within the scope of the subject matter, as defined by the appended claims.
One skilled in the art will appreciate that, for this and other processes and methods disclosed herein, the functions performed in the processes and methods can be implemented in differing order. Furthermore, the outlined steps and operations are only provided as examples, and some of the steps and operations can be optional, combined into fewer steps and operations, or expanded into additional steps and operations without detracting from the essence of the disclosed embodiments.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity. As used in this specification and the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A and working in conjunction with a second processor configured to carry out recitations B and C. Any reference to “or” herein is intended to encompass “and/or” unless otherwise stated.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g.,“ a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g.,“ a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
In addition, where features or aspects of the disclosure are described in terms of Markush groups, those skilled in the art will recognize that the disclosure is also thereby described in terms of any individual member or subgroup of members of the Markush group.
As will be understood by one skilled in the art, for any and all purposes, such as in terms of providing a written description, all ranges disclosed herein also encompass any and all possible sub-ranges and combinations of sub-ranges thereof. Any listed range can be easily recognized as sufficiently describing and enabling the same range being broken down into at least equal halves, thirds, quarters, fifths, tenths, etc. As a non-limiting example, each range discussed herein can be readily broken down into a lower third, middle third and upper third, etc. As will also be understood by one skilled in the art all language such as “up to,” “at least,” “greater than,” “less than,” and the like include the number recited and refer to ranges which can be subsequently broken down into sub-ranges as discussed above. Finally, as will be understood by one skilled in the art, a range includes each individual member. Thus, for example, a group having 1-3 articles refers to groups having 1, 2, or 3 articles. Similarly, a group having 1-5 articles refers to groups having 1, 2, 3, 4, or 5 articles, and so forth.
It will be appreciated that various embodiments of the present disclosure have been described herein for purposes of illustration, and that various modifications may be made without departing from the scope and spirit of the present disclosure. Accordingly, the various embodiments disclosed herein are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein.
All of the processes described herein may be embodied in, and fully automated via, software code modules executed by a computing system that includes one or more computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all the methods may be embodied in specialized computer hardware.
Many other variations than those described herein will be apparent from this disclosure. For example, depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (for example, not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, for example through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.
The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a processing unit or processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, some or all of the signal processing algorithms described herein may be implemented in analog circuitry or mixed analog and digital circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
Any process descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown, or discussed, including substantially concurrently or in reverse order, depending on the functionality involved as would be understood by those skilled in the art.
It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
This application claims the benefit of priority to U.S. patent application No. 63/039,749, filed on Jun. 16, 2020, the content of which is incorporated herein by reference in its entirety.