Manually designing deep neural networks in a trial-and-error, ad hoc fashion is a tedious process requiring both architectural engineering skills and domain expertise. Experts rely on past experience and technical knowledge to design such networks. Designing novel neural network architectures involves searching over a huge space of hyperparameters concerning the number of layers in the network, the number of filters in each layer, different initializations, normalization techniques, etc. Manually creating different configurations of the network architecture spanning different settings of each of these parameters makes creating novel architectures difficult and inefficient.
Neural architecture search (NAS) has been successful in replacing the manual design of neural networks with an automatic process that generates optimal network architectures. This automation leads to the discovery of efficient network structures, found by optimizing metrics such as accuracy and FLOPs. The optimization is challenging because the search space is immense and intractable, and it relies on initial modules that are manually designed. On the other hand, random graph neural networks have recently achieved strong results by relaxing the constraint of manual architecture design. However, that approach suffers from a low degree of flexibility in exploring effective architectures because it uses only internal connections within the random graphs.
To address this issue, disclosed herein is a novel method for neural architecture search that uses a random graph network backbone to facilitate the creation of an efficient network structure. The method utilizes reinforcement learning to model the relationship between intra-connections (i.e., links between nodes within a random graph) and extra-connections (i.e., links between nodes across graphs) in order to discover a tiny yet effective random neural architecture.
By way of example, a specific exemplary embodiment of the disclosed system and method will now be described with reference to the accompanying drawings, which depict, among other things, a randomly wired network graph without extra-connections.
In graph neural networks, nodes and edges can be considered as inputs/outputs and convolution layers respectively. Graph neural networks with random structures are random graph neural networks. A random graph can be defined by three tractable hyperparameters: (1) a number of nodes (N); (2) an average degree over the graph (k); and (3) a probability that two arbitrary nodes are connected (p).
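As a concrete illustration only (not part of the disclosure itself), a random graph with these three hyperparameters can be sampled in a few lines. The use of networkx and the specific values of N, k, and p below are assumptions chosen for the example; the WS construction used here is the one adopted later in this description.

```python
# Minimal sketch: sampling a random graph from the hyperparameters (N, k, p).
# networkx and the concrete values below are illustrative assumptions.
import networkx as nx

N, k, p = 32, 4, 0.75   # number of nodes, average degree, connection probability

# watts_strogatz_graph places N nodes on a ring, joins each node to its k
# nearest neighbours, and rewires edges at random with probability p, which
# governs how likely two arbitrary nodes end up connected off the ring.
graph = nx.watts_strogatz_graph(n=N, k=k, p=p, seed=0)

print(graph.number_of_nodes(), graph.number_of_edges())
```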
In the novel randomly wired neural architecture disclosed herein, a combination of intra- and extra-connections across chained graphs is exploited.
Properly balancing the ratio of intra-connections 102 to extra-connections 104 in the randomly wired network graph results in a significant improvement to the random network. The novel method disclosed herein uses a reinforcement learning algorithm based on a deep deterministic policy gradient (DDPG), with a novel setting, to exploit the network efficiency afforded by a mix of intra- and extra-connections.
The disclosed method consists of two stages. First, a reinforcement learning agent based on the deep deterministic policy gradient (DDPG) is used to search for random networks with a proper set of intra-connections 102 and extra-connections 104. Second, the explored random network is trained from scratch on a visual image classification task to obtain an optimally efficient neural network.
To minimize the manual design of neural architectures, randomly wired networks make use of the structures of classical random graphs. The Watts-Strogatz (WS) structure is recognized as yielding better performance than others. For that reason, the randomly wired networks disclosed herein are based on the WS graph construction. The WS random graph model places nodes on a ring, and every node connects to k/2 neighbors on each side. Each random graph takes the output 106 of the preceding blocks as input and averages its internal outputs to feed the succeeding blocks, as shown in the accompanying drawings.
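For illustration only, one common way to turn a sampled WS graph into a feed-forward block is to orient every edge from the lower- to the higher-indexed node, let source nodes read the preceding block's output, and average the sink nodes to form the block output. The sketch below (using networkx, an assumption) follows that convention; it is not asserted to be the exact construction of the disclosure.

```python
import networkx as nx

def ws_block_dag(N: int, k: int, p: float, seed: int = 0):
    """Sketch: convert a sampled WS graph into a DAG for one block.

    Each undirected edge (i, j) is oriented from the lower- to the
    higher-indexed node so that data flows forward through the block.
    """
    g = nx.watts_strogatz_graph(n=N, k=k, p=p, seed=seed)
    dag = nx.DiGraph()
    dag.add_nodes_from(range(N))
    dag.add_edges_from((min(i, j), max(i, j)) for i, j in g.edges())

    # Nodes with no predecessors read the preceding block's output 106;
    # nodes with no successors are averaged to form this block's output.
    entry_nodes = [v for v in dag.nodes if dag.in_degree(v) == 0]
    exit_nodes = [v for v in dag.nodes if dag.out_degree(v) == 0]
    return dag, entry_nodes, exit_nodes
```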
Deep Deterministic Policy Gradient (DDPG)—
In DDPG, the actor μ(s|θ^μ) is typically updated by ascending the deterministic policy gradient of the expected return J with respect to the actor parameters θ^μ:

∇_{θ^μ} J ≈ E_s[ ∇_a Q(s, a | θ^Q) |_{a=μ(s|θ^μ)} · ∇_{θ^μ} μ(s | θ^μ) ],

where Q(s, a | θ^Q) is the critic.
The algorithm implementing the DDPG is shown in the accompanying drawings.
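For orientation only, a compressed, generic DDPG update step in PyTorch is sketched below. The layer sizes, learning rates, activation choices, and the sigmoid used to keep actions in a bounded range are illustrative assumptions and are not taken from the disclosure.

```python
import copy
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 7, 5      # e.g. s = (d, N, C, k, p, k', p'), a = (N, k, p, k', p')
GAMMA, TAU = 0.99, 0.005          # discount factor and soft-update rate (assumed values)

# Actor mu(s) and critic Q(s, a); the sigmoid keeps actions in a normalized range.
actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, ACTION_DIM), nn.Sigmoid())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
target_actor, target_critic = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s_next, done):
    """One DDPG update from a batch of transitions (s, a, r, s_next, done)."""
    # Critic: regress Q(s, a) toward the bootstrapped target value.
    with torch.no_grad():
        q_next = target_critic(torch.cat([s_next, target_actor(s_next)], dim=-1))
        target = r + GAMMA * (1.0 - done) * q_next
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=-1)), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the deterministic policy gradient, i.e. maximize Q(s, mu(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft (Polyak) update of the target networks.
    for net, target_net in ((actor, target_actor), (critic, target_critic)):
        for param, target_param in zip(net.parameters(), target_net.parameters()):
            target_param.data.mul_(1.0 - TAU).add_(TAU * param.data)
```

In the disclosed setting, the state and action would hold the block parameters described below, and the reward would be the FLOPs-and-error reward computed after constructing and briefly training each candidate random network.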
In the WS model, two main parameters are considered for intra-connections: k and p. A node is connected to k/2 neighbors, and two arbitrary nodes are connected with probability p. The final random neural network graph is a chain of random graph blocks, wherein each block is defined as a state 206 in DDPG. However, to employ extra-connections, two additional parameters are introduced: k′ and p′. For a given state or action, k′ and p′ are the parameters of a random graph whose nodes span the current block and the subsequent blocks. For example, if a network has four blocks, then at state 2, k′2 and p′2 are the random graph parameters for the nodes in state 2, state 3 and state 4 (in other words, the nodes in the 2nd, 3rd and 4th blocks). Under this set-up, the randomly wired network construction allows the presence of extra-connections across the graph blocks in a forward manner, as illustrated by the sketch below.
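One possible reading of this construction is sketched below: intra-connections come from one WS graph per block, while extra-connections for block i come from a second WS graph sampled over the nodes of block i and every later block, keeping only the forward cross-block edges. The use of networkx, the decision to discard within-block edges of the second graph, and the parameter names are assumptions made for illustration.

```python
import networkx as nx

def chained_random_network(block_params, seed=0):
    """Sketch: a chain of blocks with intra-connections (k, p within a block)
    and forward extra-connections (k', p' over a block and all later blocks).

    block_params is a list of dicts with keys N, k, p, k_extra, p_extra,
    one dict per block (i.e., per DDPG state).
    """
    net, block_nodes, offset = nx.DiGraph(), [], 0

    # Intra-connections 102: one WS graph per block, edges oriented forward.
    for i, bp in enumerate(block_params):
        g = nx.watts_strogatz_graph(n=bp["N"], k=bp["k"], p=bp["p"], seed=seed + i)
        nodes = list(range(offset, offset + bp["N"]))
        net.add_nodes_from(nodes)
        net.add_edges_from((offset + min(u, v), offset + max(u, v)) for u, v in g.edges())
        block_nodes.append(nodes)
        offset += bp["N"]

    # Extra-connections 104: for block i, sample a WS graph over the nodes of
    # block i and every subsequent block, keeping only forward cross-block edges.
    for i, bp in enumerate(block_params):
        pool = [n for nodes in block_nodes[i:] for n in nodes]
        if len(pool) <= bp["k_extra"]:
            continue  # not enough remaining nodes to sample a WS graph
        g = nx.watts_strogatz_graph(n=len(pool), k=bp["k_extra"], p=bp["p_extra"],
                                    seed=seed + 1000 + i)
        for u, v in g.edges():
            a, b = pool[min(u, v)], pool[max(u, v)]
            if a in block_nodes[i] and b not in block_nodes[i]:
                net.add_edge(a, b)  # forward link from block i into a later block
    return net, block_nodes
```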
DDPG deploys the actor network 202 and critic network 204 to predict the parameters. The actor function 202 proposes an action given a state. The critic function 204 predicts whether the action is good (positive value) or bad (negative value) given the state and the proposed action.
In detail, the action space of intra-extra connections is a=(N, k, p, k′, p′) and the state space is s=(d, N, C, k, p, k′, p′), where d denotes the kernel size of the depth-wise convolution (e.g., 3), N is the number of nodes in the block, and C is the number of output channels.
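Purely as a notational aid, the action and state tuples can be mirrored by small containers; the field names and example values below are hypothetical.

```python
from collections import namedtuple

# Hypothetical containers mirroring the tuples above; "d" is the depth-wise
# convolution kernel size, C the number of output channels of the block.
Action = namedtuple("Action", ["N", "k", "p", "k_extra", "p_extra"])
State = namedtuple("State", ["d", "N", "C", "k", "p", "k_extra", "p_extra"])

a = Action(N=32, k=4, p=0.75, k_extra=4, p_extra=0.2)
s = State(d=3, N=32, C=64, k=4, p=0.75, k_extra=4, p_extra=0.2)
```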
For each state, DDPG predicts the number of nodes and the intra- and extra-connections. When the final state is reached, the whole random neural network is constructed and trained for several epochs to compute rewards. The reward function 208 is given as R = −FLOPs·error, which takes into account the error rate of the classification task and the computational FLOPs of the constructed random network. Therefore, an optimal network can be found by maximizing the reward.
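A literal reading of that reward is shown below; treating the error as a validation error rate and measuring FLOPs in GFLOPs are assumptions made for the example.

```python
def reward(error_rate: float, flops: float) -> float:
    """R = -FLOPs * error: lower error and fewer FLOPs both increase the reward.
    Measuring FLOPs in GFLOPs (or another fixed scale) is an assumption."""
    return -(flops * error_rate)

# Example: a candidate network costing 0.6 GFLOPs with a 5% error rate
# receives a reward of -0.03; maximizing the reward drives both terms down.
r = reward(error_rate=0.05, flops=0.6)
```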
The use of extra-connections in randomly wired networks can yield substantial benefits. Disclosed herein is a novel architecture-search algorithm for exploring a proper combination of intra- and extra-links. The searched networks are efficient: they achieve highly accurate results while reducing computational cost and memory storage on challenging visual classification tasks. Furthermore, the training procedure is affordable because the search process can be run on ordinary GPUs.
As would be realized by one of skill in the art, the disclosed method described herein can be implemented by a system comprising a processor and memory, storing software that, when executed by the processor, performs the functions comprising the method.
As would further be realized by one of skill in the art, many variations on the implementations discussed herein, which fall within the scope of the invention, are possible. Moreover, it is to be understood that the features of the various embodiments described herein are not mutually exclusive and can exist in various combinations and permutations, even if such combinations or permutations are not expressly set forth herein, without departing from the spirit and scope of the invention. Accordingly, the method and apparatus disclosed herein are not to be taken as limitations on the invention but as an illustration thereof. The scope of the invention is defined by the claims which follow.
This application claims the benefit of U.S. Provisional Patent Application No. 63/150,630, filed Feb. 18, 2021, the contents of which are incorporated herein in their entirety.