A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
This application claims priority to U.S. patent application Ser. No. 18/079,588, filed Dec. 12, 2022, which claims priority to U.S. patent application Ser. No. 16/791,945, filed Feb. 14, 2020 (now U.S. Pat. No. 11,544,572), which claims priority as a non-provisional of U.S. Provisional Patent Application No. 62/806,341, titled “Embedding Constrained and Unconstrained Optimization Programs as Neural Network Layers” and filed on Feb. 15, 2019, the disclosure of which is incorporated herein by reference in its entirety.
Aspects of the disclosure relate generally to machine learning. More specifically, aspects of the disclosure may allow for the embedding of convex optimization programs in neural networks as a network layer, and may allow the neural network to learn one or more parameters associated with the optimization program.
Neural networks and their constitutive layers often specify input-output relationships such that training procedures can readily identify parameter values well-suited to given datasets and tasks. Often these relationships are chosen to be analytic and differentiable to ensure gradient-based training methods are effective. Solving optimization programs in their original forms may be difficult for neural networks, particularly regarding those optimization programs with input constraints establishing permitted/feasible and/or unpermitted/infeasible values, because the constraints may cause the derivative and/or subdifferential of the optimization program function to be ill-defined over the range of input that the neural network may encounter.
Aspects described herein may address these and other problems, and generally improve the quality, efficiency, and speed of machine learning systems. Further, aspects herein provide a practical application of the transformations used to place optimization programs in suitable form for neural network processing.
The following presents a simplified summary of various aspects described herein. This summary is not an extensive overview, and is not intended to identify key or critical elements or to delineate the scope of the claims. The following summary merely presents some concepts in a simplified form as an introductory prelude to the more detailed description provided below.
Aspects of the disclosure relate to a procedure for mapping optimality conditions associated with a broad class of optimization problems, including linear optimization programs, quadratic optimization programs, and, more generally, convex optimization programs, onto neural network structures using Cayley transforms. This may guarantee that these networks find solutions for convex problems.
Aspects discussed herein may relate to methods and techniques for embedding constrained and unconstrained optimization programs as layers in a neural network architecture. Systems are provided that implement a method of solving a particular optimization problem by a neural network architecture. Prior systems required use of external software to pre-solve optimization programs so that previously determined parameters could be used as fixed input in the neural network architecture. Aspects described herein may transform the structure of common optimization problems/programs into forms suitable for use in a neural network. This transformation may be invertible, allowing the system to learn the solution to the optimization program using gradient descent techniques via backpropagation of errors through the neural network architecture. Thus these optimization layers may be solved via operation of the neural network itself. This may provide benefits such as improved prediction accuracy, faster model training, and/or simplified model training, among others. Features described herein may find particular application with respect to convex optimization programs, and may find particular application in recurrent neural network architectures and/or feed-forward neural network architectures.
More particularly, some aspects described herein may provide a computer-implemented method for embedding a convex optimization program as an optimization layer in a neural network architecture comprising a plurality of layers. According to some aspects, a computing system implementing the method may determine a set of parameters associated with the convex optimization program. The set of parameters may comprise a vector a=(a1, a2) of primal decision variables associated with the convex optimization program and a vector b=(b1, b2) of dual decision variables associated with the convex optimization program. Vectors a and b may be related according to:
where A is a coefficient matrix corresponding to one or more constraints of the convex optimization program, and AT denotes the transpose of matrix A. The system may determine a set of intermediary functionals ƒi=(ƒ1, ƒ2). The value of ƒi may be defined as equal to a cost term associated with a corresponding ai for a range of permitted values of ai and equal to infinity for a range of unpermitted values of ai. The one or more constraints of the convex optimization program may define the range of permitted values and the range of unpermitted values. The system may generate a set of network variables associated with the optimization layer based on applying a scattering coordinate transform to vectors a and b. The set of network variables may comprise a vector c=(c1, c2) corresponding to input to the optimization layer and a vector d=(d1, d2) corresponding to an intermediate value of the optimization layer.
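The body of equation (1) is not reproduced in this text. For orientation only, one plausible form of such a linear primal-dual coupling, consistent with the stated roles of A and its transpose, is sketched below; the block assignment and sign convention are assumptions rather than the equation of record.

```latex
% Hedged sketch of a linear primal-dual coupling (equation (1) is not
% reproduced here); the block assignment and signs are assumptions.
a_2 = A\,a_1, \qquad b_1 = -A^{\mathsf T}\,b_2
```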
The system may generate a linear component H of the optimization layer by applying a first transformation to coefficient matrix A, where the first transformation is of the form:
where I is the identity matrix. The linear component H may correspond to a linear mapping, linear operator, and/or weight matrix, and may be used in the optimization layer to determine an intermediate value for d corresponding to a given value for c.
The system may generate a non-linear component σ(·) of the optimization layer by applying a second transformation to the intermediary functionals ƒi, where the second transformation is of the form:
where ∂ƒi corresponds to the subdifferential of ƒi. The non-linear component σ(·) may correspond to a non-linear mapping, non-linear operator, and/or non-linear transformation, and may be used in the optimization layer to determine a next iteration value of c based on application to a current iteration value for d.
The system may receive, by the optimization layer and from a prior layer of the neural network architecture, input values corresponding to vector c. The system may iteratively compute, by the optimization layer, values for vectors c and d to determine fixed point values c* and d*. Each computation of a value for vectors c and d may be of the form:
where n denotes the n-th iteration of the optimization layer, and cⁿ and dⁿ denote the n-th values of vectors c and d. The system may determine fixed point values a* and b* based on applying the inverse of the scattering coordinate transform to fixed point values c* and d*.
The system may provide, by the optimization layer, output based on fixed point values a* and b*. An error between a predicted output of the neural network architecture and an expected output for training data used during a training process may be determined. The system may backpropagate the determined error through the plurality of layers as part of a machine learning process. The system may determine an updated set of parameters associated with the convex optimization program based on applying gradient descent to the linear component H and the non-linear component σ(·). A trained model may be generated as a result of repeated iterations of a machine learning process using the neural network architecture having the optimization layer described above. The trained model may be used to generate one or more predictions based on a trained set of parameters associated with the convex optimization program.
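For illustration, the following Python sketch walks through the forward pass just described: it builds a linear component H from a constraint matrix A, iterates the layer to an approximate fixed point, and inverts a coordinate transform to recover primal/dual values. The specific scattering transform (c, d) = ((a+b)/√2, (a−b)/√2), the Cayley-transform construction of H, the soft-threshold stand-in for σ(·), and the helper names (cayley_H, optimization_layer_forward) are illustrative assumptions, not the exact equations of the disclosure.

```python
import numpy as np

def cayley_H(A):
    """Assumed linear component: Cayley transform of a skew-symmetric
    dual extension of A (orthogonal for any real A)."""
    m, n = A.shape
    S = np.block([[np.zeros((n, n)), A.T],
                  [-A, np.zeros((m, m))]])
    I = np.eye(n + m)
    return np.linalg.solve(I + S, I - S)

def soft_threshold(x, t=1.0):
    """Illustrative stand-in for the non-linear component sigma(.)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def optimization_layer_forward(A, c0, rho=0.5, tol=1e-8, max_iter=5000):
    """Damped fixed-point iteration c <- (1 - rho)*c + rho*sigma(H c)."""
    H = cayley_H(A)
    c = c0.copy()
    for _ in range(max_iter):
        d = H @ c                                   # linear step
        c_next = (1.0 - rho) * c + rho * soft_threshold(d)
        if np.linalg.norm(c_next - c) < tol:        # fixed-point test
            c = c_next
            break
        c = c_next
    d = H @ c
    a = (c + d) / np.sqrt(2.0)                      # invert the assumed
    b = (c - d) / np.sqrt(2.0)                      # scattering transform
    return a, b

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
c0 = rng.standard_normal(8)                         # n + m = 5 + 3
a_star, b_star = optimization_layer_forward(A, c0)
```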
Techniques described herein may flexibly be applied to any suitable neural network architecture. For example, techniques described herein may find application in neural network architectures such as convolutional neural networks, recurrent neural networks, feed forward neural networks, and the like, and combinations thereof. Similarly, techniques described herein may be applied to any suitable machine learning application, such as speech recognition, image recognition, and others.
Corresponding apparatus, systems, and computer-readable media are also within the scope of the disclosure.
These features, along with many others, are discussed in greater detail below.
The present disclosure is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:
In the following description of the various embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration various embodiments in which aspects of the disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope of the present disclosure. Aspects of the disclosure are capable of other embodiments and of being practiced or being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. Rather, the phrases and terms used herein are to be given their broadest interpretation and meaning. The use of “including” and “comprising” and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items and equivalents thereof.
By way of introduction, aspects described herein may provide a method for designing neural networks that solve linear and quadratic programs during inference using standard deep learning framework components. These networks are fully differentiable, allowing them to learn parameters for constraints and objective functions directly from backpropagation, thereby enabling their use within larger end-to-end networks. Aspects of this disclosure are discussed generally with respect to convex optimization programs. Illustrative examples of standard-form linear and quadratic optimization programs are discussed, as well as programs appearing in signal recovery and denoising contexts.
Before discussing these concepts in greater detail, however, several examples of a computing device that may be used in implementing and/or otherwise providing various aspects of the disclosure will first be discussed with respect to
Computing device 101 may, in some embodiments, operate in a standalone environment. In others, computing device 101 may operate in a networked environment. As shown in
As seen in
Devices 105, 107, 109 may have similar or different architecture as described with respect to computing device 101. Those of skill in the art will appreciate that the functionality of computing device 101 (or device 105, 107, 109) as described herein may be spread across multiple data processing devices, for example, to distribute processing load across multiple computers, to segregate transactions based on geographic location, user access level, quality of service (QOS), etc. For example, devices 101, 105, 107, 109, and others may operate in concert to provide parallel computing features in support of the operation of control logic 125 and/or software 127.
One or more aspects discussed herein may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The modules may be written in a source code programming language that is subsequently compiled for execution, or may be written in a scripting language such as (but not limited to) HTML or XML. The computer executable instructions may be stored on a computer readable medium such as a hard disk, optical disk, removable storage media, solid state memory, RAM, etc. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects discussed herein, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein. Various aspects discussed herein may be embodied as a method, a computing device, a data processing system, or a computer program product.
Having discussed several examples of computing devices which may be used to implement some aspects as discussed further below, discussion will now turn to a method for embedding convex optimization programs in neural network layers.
An artificial neural network may have an input layer 210, one or more hidden layers 220, and an output layer 230. Illustrated network architecture 200 is depicted with three hidden layers. The number of hidden layers employed in neural network 200 may vary based on the particular application and/or problem domain. For example, a network model used for image recognition may have a different number of hidden layers than a network used for speech recognition. Similarly, the number of input and/or output nodes may vary based on the application. Many types of neural networks are used in practice, such as convolutional neural networks, recurrent neural networks, feed forward neural networks, combinations thereof, and others. Aspects described herein may be used with any type of neural network, and for any suitable application.
During the model training process, the weights of each connection and/or node may be adjusted in a learning process as the model adapts to generate more accurate predictions on a training set. The weights assigned to each connection and/or node may be referred to as the model parameters. The model may be initialized with a random or white noise set of initial model parameters. The model parameters may then be iteratively adjusted using, for example, gradient descent algorithms that seek to minimize errors in the model.
A problem domain to be solved by the neural network may include an associated optimization program. The optimization program may be constrained and/or unconstrained, and may assign weights and/or penalties to certain input values and/or combinations of input values. The weights (e.g., coefficient matrices) and/or constraints (e.g., maximum values or relations between different inputs) may be referred to as parameters of the optimization program. Solving for these parameters may be necessary to generate suitable predictions in certain problem domains. Convex optimization programs are a class of optimization programs that are commonly encountered in machine learning applications, and aspects herein may find particular application with respect to convex optimization programs. This disclosure discusses the example of input-output relationships characterized by standard-form linear and quadratic optimization programs. Aspects of this disclosure may find application more generally in the context of convex optimization programs.
Linear optimization programs may be those written according to equation (LP):
While quadratic optimization programs may be those written according to equation (QP):
where x is a vector of primal decision variables, q is a cost vector, r is an inequality vector, and A and Q are respectively linear and quadratic coefficient matrices.
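The bodies of equations (LP) and (QP) are not reproduced in this text. For reference, commonly used standard forms consistent with the variable names above are sketched below; whether the constraint involving A and r is an equality or an inequality, and the direction of any inequality, are assumptions.

```latex
% Hedged reconstruction of typical standard forms; the constraint
% structure (equality vs. inequality and its direction) is an assumption.
\text{(LP)}\qquad \min_{x}\ q^{\mathsf T}x
  \quad\text{subject to}\quad Ax \le r,\ \ x \ge 0
\\[1ex]
\text{(QP)}\qquad \min_{x}\ \tfrac{1}{2}\,x^{\mathsf T}Q\,x + q^{\mathsf T}x
  \quad\text{subject to}\quad Ax \le r,\ \ x \ge 0
```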
Solving optimization programs in their original forms may be difficult for neural networks, particularly regarding those optimization programs with input constraints establishing permitted/feasible and/or unpermitted/infeasible values, because the constraints may cause the derivative and/or subdifferential of the optimization program function to be ill-defined over the range of input that the neural network may encounter. For example,
Aspects described herein introduce an optimization layer, such as optimization layer 225 illustrated in
Aspects described herein may present a method for solving a particular optimization program by a neural network. Broadly speaking, the system may transform the problem into a neural network structure that can be embedded and trained inside of a larger end-to-end network. An instance of the optimization problem may be solved by executing the network until a fixed-point is identified or another appropriate threshold for what is considered a fixed-point is met. The system may transform the fixed-point back to the original coordinate system, thereby obtaining a trained value relevant to the optimization program.
In particular, the system may transform the problem into a neural network structure that can be embedded and trained inside of a larger end-to-end network by transforming the problem parameters in an invertible and differentiable manner. The problem may be transformed into a direct and/or residual neural network form. A direct form structure may comprise a single matrix multiplication followed by a non-linearity, as illustrated by network structure 500 in
The system may solve an instance of the optimization problem by executing the optimization layer of the network until a fixed-point is identified or an appropriate threshold is met. This may be done in the direct and/or residual forms. The variables operated on during the solving process may be specific combinations of primal and dual decision variables rather than the primal or dual variables themselves. The optimization problem parameters may be fixed, pre-defined, learned from data, or some combination of fixed and learned. The application of the transformations described herein to neural network processing may enable the embedding of an optimization layer allowing the neural network to solve an associated optimization program. The optimization problem may be solved directly in the inference and/or forward pass of the network without requiring the use of external optimization tools.
This procedure may be performed on every forward pass of the network as part of larger network training. The optimization problem parameters may be updated in the same manner that other parameters of the larger network are learned. These features may also allow any neural network inference framework, such as TensorRT from NVIDIA CORP, to be used as a constrained and unconstrained optimization solver. The optimization layer may be configured to run in a recurrent loop to arrive at the fixed-point. In a given iteration of the end-to-end neural network, multiple iterations of the optimization layer may run as a result of the recurrent (and/or similar feed-forward) structure. The fixed-point results serve as the network variables for the optimization layer for that iteration of the end-to-end neural network, and are updated in the same manner as other parameters, e.g., through gradient descent.
To begin, let a=(a1, a2) and b=(b1, b2) respectively denote vectors of primal and dual decision variables associated with an optimization program. The linear relationships imposed on the decision variables are enforced according to equation (1):
where A is a coefficient matrix associated with the constraints of the optimization program, and where AT denotes the transpose of matrix A.
Equivalently, the linear feasibility constraints imposed on the decision variables a and b above correspond to the behavioral statement shown in equation (2):
where I is the identity matrix. Moving forward, the vector subspace in equation (2) is denoted as ℬ.
Next, the remainder of the optimality conditions associated with cost terms and inequality constraints are enforced using a set 𝒞 describing admissible configurations of the decision vector (a, b). 𝒞 may be decomposed according to 𝒞=𝒞1×𝒞2, where each 𝒞i is a set relation restricting the variables (ai, bi) for i=1, 2. To generate 𝒞i, an intermediary functional ƒi may be defined equal to the cost term associated with ai over its feasible domain and equal to infinity for infeasible values. For example, the primal vector a1 in equation (LP) is restricted to be elementwise non-negative with cost term qTa1, therefore the functional ƒ1 is shown in equation (3):
The set 𝒞i is then the set of values (ai, bi) where bi∈∂ƒi(ai), with ∂ƒi denoting the subdifferential of ƒi. A complete description of solutions to the optimality conditions is then the set of elements (a, b) in ℬ∩𝒞.
For non-smooth convex cost functions over general convex sets, the (ai, bi) relationship encapsulated by 𝒞i is not necessarily functional, thus inserting state into the structure in
Toward this end, network variables c=(c1, c2) and d=(d1, d2) may be produced from the decision variables a and b according to the scattering coordinate transform shown in equation (4):
Next, this transform may be reflected onto the optimality conditions by providing analytic expressions for the network parameter matrices H and activation functions σ(·) which encapsulate the transformed optimality conditions M(ℬ∩𝒞) as well as address issues of their well-posedness. The neural networks can then be unrolled or iterated until fixed-points (c*, d*) are identified, which in turn can be used to identify solutions (a*, b*) in the original coordinate system by inverting equation (4).
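As one concrete, assumed realization of a scattering-style change of variables, the normalized sum/difference transform below is orthogonal and trivially invertible; the exact matrix M of equation (4) is not reproduced in this text, so this sketch is illustrative only.

```python
import numpy as np

def scattering_transform(a, b):
    """Assumed scattering-style change of variables (orthogonal, so M^-1 = M^T)."""
    c = (a + b) / np.sqrt(2.0)
    d = (a - b) / np.sqrt(2.0)
    return c, d

def inverse_scattering_transform(c, d):
    """Recover the primal/dual decision variables from the network variables."""
    a = (c + d) / np.sqrt(2.0)
    b = (c - d) / np.sqrt(2.0)
    return a, b

rng = np.random.default_rng(0)
a, b = rng.standard_normal(4), rng.standard_normal(4)
c, d = scattering_transform(a, b)
a2, b2 = inverse_scattering_transform(c, d)
assert np.allclose(a, a2) and np.allclose(b, b2)    # round trip is exact
```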
To assist with the strategy above, note that the transformed behavior Mℬ is the vector space described by equation (5):
Consistent with the illustration in
This form of H corresponds to the Cayley transform of a skew-symmetric matrix characterizing the dual extension of the matrix A, thus it is both orthogonal and well-defined for arbitrary A. Efficient methods for computing H follow from standard algebraic reduction mechanisms such as the matrix inversion lemma.
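A minimal numerical sketch of this construction is shown below, assuming the dual extension of A is the skew-symmetric block matrix S = [[0, AT], [−A, 0]]; the Cayley transform (I + S)⁻¹(I − S) of such an S is orthogonal for any real A, which the code verifies.

```python
import numpy as np

def dual_extension(A):
    """Assumed skew-symmetric dual extension of the constraint matrix A."""
    m, n = A.shape
    return np.block([[np.zeros((n, n)), A.T],
                     [-A, np.zeros((m, m))]])

def cayley(S):
    """Cayley transform (I + S)^-1 (I - S); orthogonal when S is skew-symmetric."""
    I = np.eye(S.shape[0])
    return np.linalg.solve(I + S, I - S)

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
H = cayley(dual_extension(A))
assert np.allclose(H.T @ H, np.eye(H.shape[0]))     # H is orthogonal
```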
Concerning the transformed relation M𝒞, convexity of the functionals ƒi paired with the coordinate transform in equation (4) provides that ci can always be written as a function of di for i=1, 2. To see this, let σi denote the mapping from di to ci and note that the subgradient relation in 𝒞i can be transformed and written according to equation (7):
where σi is closely related to the Cayley transform of ∂ƒi. Drawing upon this observation, a well-known result in convex analysis states that since ∂ƒi is the subdifferential of a convex function it is monotone; therefore I+∂ƒi is strongly monotone since it is the sum of a monotone and a strongly monotone function. Invertibility of the term I+∂ƒi, and thus the validity of equation (7) as a well-defined operator, then follows from application of the Browder-Minty theorem on monotone operators. Moreover, it follows that σi is passive or non-expansive, i.e., it is Lipschitz continuous with constant not greater than unity.
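As an illustration of this passivity/non-expansiveness property (and not of the exact mapping in equation (7), which is not reproduced here), the sketch below uses ƒ(x) = |x| applied coordinatewise: its resolvent (I + ∂ƒ)⁻¹ is the soft-threshold operator, and the associated reflected, Cayley-style map 2(I + ∂ƒ)⁻¹ − I is checked numerically to be Lipschitz with constant at most one.

```python
import numpy as np

def soft_threshold(d, t=1.0):
    """Resolvent (I + t*df)^-1 for f(x) = |x| applied coordinatewise."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

def reflected_resolvent(d):
    """Cayley-style reflection 2*(I + df)^-1 - I; non-expansive for convex f."""
    return 2.0 * soft_threshold(d) - d

rng = np.random.default_rng(2)
worst_ratio = 0.0
for _ in range(1000):
    u, v = rng.standard_normal(8), rng.standard_normal(8)
    num = np.linalg.norm(reflected_resolvent(u) - reflected_resolvent(v))
    den = np.linalg.norm(u - v)
    worst_ratio = max(worst_ratio, num / den)
assert worst_ratio <= 1.0 + 1e-12                   # empirically non-expansive
```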
This disclosure will next show that iterating the residual structure in
tends to a fixed-point c*=T(c*) for ρ∈(0,1) and where
For general convex problems, T is non-expansive since it is the composition of an orthogonal matrix H and the non-expansive operator σ in equation (7). Therefore, a single iteration of equation (8) yields equation (8.1):
where the equality is due to the application of Stewart's theorem and the inequality is due to both the non-expansivity and fixed-point properties of T. Iterating the inequality yields equation (9):
Loosening equation (9) further and taking a limit provides the bound shown in equation (10):
Since equation (10) is bounded above, it follows that T(cⁿ)−cⁿ→0 and therefore cⁿ→c*, which concludes the argument of convergence for general convex optimization problems.
For the important special case of strictly convex cost functions over convex sets, the nonlinearity in equation (7) reduces to a contractive mapping. Consequently, T is also contractive and the process of iterating either the direct or residual network structures results in linear convergence to a solution or fixed-point. A proof of this fact follows from direct application of the Banach fixed-point theorem.
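The following small sketch demonstrates this behavior for an illustrative contractive map T (a 0.9-scaled rotation followed by a soft-threshold, chosen only for demonstration): the damped iteration c ← (1 − ρ)c + ρT(c) converges linearly to the fixed point.

```python
import numpy as np

def soft_threshold(x, t=0.1):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def T(c):
    """Illustrative contractive map (Lipschitz constant at most 0.9)."""
    return soft_threshold(0.9 * (R @ c))

rho, c = 0.5, np.array([5.0, -3.0])
errors = []
for _ in range(50):
    c_next = (1 - rho) * c + rho * T(c)
    errors.append(np.linalg.norm(c_next - c))
    c = c_next
# The step sizes shrink by a roughly constant factor, i.e. linear convergence.
print(errors[0], errors[10], errors[20], errors[-1])
```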
Proximal iterations for minimizing non-differentiable, convex functions ƒ: ℝᴺ→ℝ∪{∞} take the general form shown in equation (11):
where ρ is a tuning parameter and the scaled proximal operator proxρƒ: ℝᴺ→ℝᴺ is defined according to equation (12):
To connect the scattering algorithms outlined in this section with their proximal counterparts, it is shown that the proximal operator in equation (12) corresponds to a different transformation of i associated with ƒ. In particular, the scaled proximal operator is related to the subgradient ∂ƒ according to equation (13):
To prove this relationship holds, it must be shown that d=(I+ρ∂ƒ)(proxρƒ(d)). To do this, let v*=proxρƒ(d) and consider the function p(v)=ƒ(v)+(1/(2ρ))∥v−d∥². The condition ∂p(v*)=0 then yields the constraint ρ∂ƒ(v*)+v*=d, which is precisely the relationship in equation (13). Therefore, proximal and scattering methods are related through different coordinate transformation matrices M.
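This relationship can be checked numerically. The sketch below does so for the simple smooth choice ƒ(v) = ½∥v∥², for which prox_ρƒ(d) = d/(1 + ρ) and ∂ƒ(v) = v, so (I + ρ∂ƒ)(prox_ρƒ(d)) recovers d exactly.

```python
import numpy as np

rho = 0.3
rng = np.random.default_rng(3)
d = rng.standard_normal(6)

# For f(v) = 0.5*||v||^2 the scaled proximal operator has the closed form
# below, and the (sub)gradient is simply grad f(v) = v.
prox = d / (1.0 + rho)
reconstructed = prox + rho * prox                   # (I + rho*grad f)(prox)

assert np.allclose(reconstructed, d)                # d = (I + rho*df)(prox_{rho f}(d))
```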
In the following sections, this disclosure explains the derivation of the requisite nonlinearities to build networks that themselves solve specific forms of linear and quadratic programming problems. The forward pass of a residual network is portrayed in
The optimality conditions for linear programming problems written in standard form directly map to
The nonlinearities for linear optimization program (LP) modules are generated using equation (7) and implemented coordinatewise using the expressions shown in equations (15) and (16):
which are both easily verified to be non-expansive and can be formed using compositions of standard ReLU activations.
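The exact expressions of equations (15) and (16) are not reproduced in this text; the sketch below only illustrates the general claim that non-expansive, piecewise-linear coordinatewise maps can be composed from standard ReLU activations, using the soft-threshold identity soft(x, t) = relu(x − t) − relu(−x − t) as an example.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def soft_threshold_from_relus(x, t=1.0):
    """Soft-threshold expressed as a combination of two standard ReLU units."""
    return relu(x - t) - relu(-x - t)

x = np.linspace(-3.0, 3.0, 13)
expected = np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)
assert np.allclose(soft_threshold_from_relus(x), expected)
```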
The optimality conditions for quadratic programming problems written in standard form directly map to
The nonlinearities for quadratic optimization program (QP) modules are generated using equation (7) and implemented using the expressions shown in equations (19) and (20):
It is straightforward to show that σ1 is contractive if Q is positive definite, non-expansive if Q is positive semidefinite, and expansive if Q is indefinite, and that σ2 is coordinatewise non-expansive.
In a variety of applications, nonlinear features naturally arise in programs that reduce to (LP) or (QP), and recasting techniques have been developed in response. These same techniques may be used in tandem with the networks described in this disclosure. Additionally and/or alternatively, specialized networks can be assembled by designing non-linearities that directly represent the nonlinear features using the procedures described herein.
For example, generating sparse solutions to underdetermined linear systems of equations satisfying certain spectral properties is a linear program often cast as the Basis Pursuit (BP) problem shown in equation (21):
Moreover, equation (21) has been extended via regularization to handle cases where the measurement vector r contains noise and Ax is only required to be reasonably close to r. This recovery problem is a quadratic program often cast as the Basis Pursuit Denoising problem (BPDN) according to equation (22):
where λ balances the absolute size of the solution with the desired agreement of Ax and r. Rather than recasting into standard form by introducing auxiliary variables and additional constraints, aspects may next define the network modules directly from the objective functions. The optimality conditions directly map to
The activations in equations (23) and (24) are non-expansive and the activation in equation (25) is contractive with Lipschitz constant equal to zero.
Similar to the recovery of sparse signals, the denoising of certain signal models from noisy measurements is often cast using quadratic programs. As a concrete example of this, the total variation denoising (TVDN) problem attempts to denoise or smooth observed signals y using an approximation x produced according to equation (26):
where the form of the parameter matrix D encodes the targeted signal model and λ balances the approximation and model penalties. When D takes the form of a first-order difference operator, i.e., with rows ei−ei+1, the penalty ∥Dx∥1 encourages the signal to tend toward a piece-wise constant construction.
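As a concrete sketch, the first-order difference operator described above may be formed as an (N−1)×N matrix whose i-th row is ei − ei+1; applied to a piecewise constant signal, ∥Dx∥1 sums the magnitudes of the jumps, which is exactly what the penalty discourages.

```python
import numpy as np

def first_order_difference(N):
    """(N-1) x N matrix whose i-th row is e_i - e_{i+1}."""
    D = np.zeros((N - 1, N))
    idx = np.arange(N - 1)
    D[idx, idx] = 1.0
    D[idx, idx + 1] = -1.0
    return D

x = np.array([2.0, 2.0, 2.0, -1.0, -1.0, 4.0, 4.0])   # piecewise constant signal
D = first_order_difference(len(x))
print(np.abs(D @ x).sum())                             # ||Dx||_1 = 3 + 5 = 8
```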
The optimality conditions associated with equation (26) directly map to
which are both easily verified to be non-expansive. Observe that equation (28) is the negative of equation (23) consistent with the formulation in equation (7) and the fact that x maps to a1 in equation (22) and v maps to a2 in equation (26).
As one example of an application of some of the processes and procedures discussed herein, the system may be configured to learn constraints in linear and quadratic programs from data by learning the measurement matrix A for the BP problem of equation (21) and the BPDN problem of equation (22). A dataset may be constructed by randomly drawing an M×N matrix A* and producing network input-target samples (r, x*) by randomly building K-sparse vectors x* and computing the companion measurements r=A*x*+z, where K is drawn uniformly over the interval [Kmin, Kmax] and z is a noise vector with entries sampled from 𝒩(0, σ²). The neural networks take measurements r as inputs and produce outputs x̂ which solve BP or BPDN for the current parameter matrix A during the forward pass. The training objective is to minimize ∥x̂−x*∥1, i.e., to resolve the difference between the network output and the sparse vector x* which produces r using A*.
The networks may be trained using stochastic gradient descent with a learning rate of 0.01, a batch size of 64 and no momentum or weight decay terms. The training and validation splits in this example implementation comprise 256,000 and 10,000 samples, respectively.
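A sketch of the data generation and training configuration described above is given below. The dimensions M and N, the sparsity range, and the noise level are illustrative placeholders, and the BP/BPDN solver network itself is omitted (any layer exposing A as a learnable parameter could be substituted).

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 20, 50                        # measurement and signal dimensions (placeholders)
K_min, K_max = 1, 5                  # sparsity range (placeholder)
sigma = 0.01                         # noise standard deviation (placeholder)

A_star = rng.standard_normal((M, N))

def make_sample():
    """One (r, x*) input-target pair: a K-sparse x* and its noisy measurements r."""
    K = rng.integers(K_min, K_max + 1)
    x_star = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)
    x_star[support] = rng.standard_normal(K)
    r = A_star @ x_star + sigma * rng.standard_normal(M)
    return r, x_star

train = [make_sample() for _ in range(256_000)]
valid = [make_sample() for _ in range(10_000)]

# Training configuration from the text: plain SGD, learning rate 0.01,
# batch size 64, no momentum or weight decay; loss is ||x_hat - x*||_1.
learning_rate, batch_size = 0.01, 64
```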
Consistent with the fact that training neural networks is a non-convex optimization problem, in which gradient-based methods generally find local minima, it has been observed over several trials that disparate matrices A produce similar validation errors, whereas the validation error at the global minimum A* is slightly lower. This observation corroborates the fact that training linear and quadratic program parameters is likewise a non-convex optimization problem. Warm starting from A* corrupted by noise and fine-tuning repeatedly yielded A* as a solution.
As another example of an application of some of the processes and procedures discussed herein, the system may be configured to learn parameters in a signal processing algorithm from data by learning the denoising matrix D in equation (26). A dataset may be constructed by generating input-target samples (y, x*) of piecewise constant signals x* and their noise-corrupted companions y. The neural network takes y as input and produces output x̂ which solves TVDN for the current denoising matrix D during the forward pass. The training objective is to minimize ∥x̂−x*∥1, i.e., to resolve the difference between the piecewise constant signal x* and the network output.
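A sketch of this data generation step is shown below; the signal length, number of segments, and noise level are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 128, 0.1                  # signal length and noise level (placeholders)

def make_sample(num_segments=5):
    """One (y, x*) pair: a piecewise constant signal and its noisy observation."""
    cuts = np.sort(rng.choice(np.arange(1, N), size=num_segments - 1, replace=False))
    levels = rng.standard_normal(num_segments)
    x_star = np.empty(N)
    start = 0
    for level, stop in zip(levels, list(cuts) + [N]):
        x_star[start:stop] = level
        start = stop
    y = x_star + sigma * rng.standard_normal(N)
    return y, x_star

y, x_star = make_sample()
```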
The examples in this section serve primarily to underscore the feasibility and numerical stability of unrolling constrained optimization algorithms within the deep learning paradigm, and secondarily to present an interesting class of algorithms in their own right. The ability to learn convex programs essentially as layers within a larger network, similar to learning affine or convolution layers, may enable many new neural network architectures and applications that can take advantage of such architectures.
In accordance with the above detailed description, aspects described herein may provide a computer-implemented method for embedding a convex optimization program as an optimization layer in a neural network architecture. Exemplary steps of such a method 700 are shown in
At step 705, a computing device may determine a set of parameters associated with the convex optimization program. The set of parameters may include a vector of primal decision variables and a set of dual decision variables. A coefficient matrix may also be determined, which may correspond to one or more constraints of the convex optimization program.
At step 710, the computing device may determine a set of intermediary functionals whose values are defined as equal to a cost term associated with corresponding values of the primal decision variables. The one or more constraints of the convex optimization program may define a range of permitted values and a range of unpermitted values. Each intermediary functional may be equal to infinity for unpermitted values of the primal decision variables and equal to other, non-infinite values (the associated cost term) for permitted values of the primal decision variables.
At step 715, the computing device may generate a set of network variables associated with the optimization layer based on applying a scattering coordinate transform to the vector of primal decision variables and the vector of dual decision variables. The network variables may comprise a vector corresponding to inputs to the optimization layer, and a vector corresponding to intermediate values of the optimization layer.
At step 720, the computing device may generate a linear component of the optimization layer by applying a transformation to the coefficient matrix.
At step 725, the computing device may generate a non-linear component of the optimization layer by applying a transformation to the intermediary functionals.
At step 730, the computing device may operate a neural network including the optimization layer, comprising the generated linear component and the non-linear component. The computing device may receive, by the optimization layer and from a prior layer of the neural network, input values corresponding to the inputs to the optimization layer. The computing device may iteratively compute, by the optimization layer, values for the network variables to determine fixed point values for the network variables.
At step 735, the computing device may determine fixed point values for the primal decision variables and the dual decision variables based on applying the inverse of the scattering coordinate transform to the fixed point values for the network variables.
At step 740, the computing device may provide, by the optimization layer, first output based on the determined fixed point values for the primal decision variables and the dual decision variables to a next layer of the neural network.
At step 745, the computing device may determine an error based on second output of the neural network, wherein the second output is based on the first output of the optimization layer. The second output may be an output of a last layer of the neural network.
At step 750, the computing device may backpropagate the determined error through the plurality of layers of the neural network. The backpropagating may comprise determining an updated set of parameters associated with the convex optimization program based on applying gradient descent to the linear component and the non-linear component.
At step 755, the computing device may generate one or more predictions by the neural network based on a trained set of parameters associated with the convex optimization program.
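A compact PyTorch-style sketch of steps 705 through 755 is given below, with the constraint matrix A as the learnable parameter of the optimization layer. The scattering transform, the Cayley-form linear component, and the soft-threshold stand-in for the non-linearity are the same illustrative assumptions used earlier in this description, a fixed number of unrolled iterations stands in for the fixed-point test, and the class and layer names are hypothetical.

```python
import torch

class OptimizationLayer(torch.nn.Module):
    """Illustrative optimization layer whose constraint matrix A is learnable."""
    def __init__(self, m, n, iterations=50, rho=0.5):
        super().__init__()
        self.A = torch.nn.Parameter(0.1 * torch.randn(m, n))
        self.iterations, self.rho = iterations, rho

    def forward(self, c):
        m, n = self.A.shape
        # Assumed skew-symmetric dual extension of A and its Cayley transform H.
        top = torch.cat([torch.zeros(n, n), self.A.T], dim=1)
        bottom = torch.cat([-self.A, torch.zeros(m, m)], dim=1)
        S = torch.cat([top, bottom], dim=0)
        I = torch.eye(n + m)
        H = torch.linalg.solve(I + S, I - S)          # orthogonal linear component
        for _ in range(self.iterations):              # unrolled fixed-point iterations
            d = c @ H.T
            c = (1 - self.rho) * c + self.rho * torch.sign(d) * torch.relu(d.abs() - 1.0)
        d = c @ H.T
        a = (c + d) / 2 ** 0.5                        # invert the assumed scattering transform
        return a[:, :n]                               # primal block feeds the next layer

m, n = 3, 5
layer = OptimizationLayer(m, n)
head = torch.nn.Linear(n + m, n + m)                  # prior layer producing the inputs c
tail = torch.nn.Linear(n, 1)                          # subsequent layer producing predictions
params = list(head.parameters()) + list(layer.parameters()) + list(tail.parameters())
opt = torch.optim.SGD(params, lr=0.01)

inputs, targets = torch.randn(64, n + m), torch.randn(64, 1)
for _ in range(10):                                   # a few illustrative training steps
    predictions = tail(layer(head(inputs)))
    loss = (predictions - targets).abs().mean()       # l1-style training objective
    opt.zero_grad()
    loss.backward()                                   # backpropagate through the unrolled layer
    opt.step()
```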
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Number | Date | Country
---|---|---
62/806,341 | Feb. 15, 2019 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 18/079,588 | Dec. 12, 2022 | US
Child | 18/642,271 | | US
Parent | 16/791,945 | Feb. 14, 2020 | US
Child | 18/079,588 | | US