Polynomial convolutional neural network with early fan-out

Information

  • Patent Grant
  • Patent Number
    11,875,557
  • Date Filed
    Monday, April 29, 2019
  • Date Issued
    Tuesday, January 16, 2024
  • CPC
  • Field of Search
    • CPC
    • G06V10/82
    • G06V10/454
    • G06F18/21
    • G06F18/21355
    • G06F18/2414
    • G06F18/253
    • G06N3/045
    • G06N3/048
    • G06N3/08
    • G06N3/084
    • G06N20/00
    • G06N20/10
    • G06N20/20
  • International Classifications
    • G06V10/82
    • G06N3/084
    • G06N3/08
    • G06V10/44
    • G06F18/21
    • G06F18/25
    • G06F18/2135
    • G06N3/048
    • G06F18/2413
    • G06N3/045
    • Term Extension
      690
Abstract
The invention proposes a method of training a convolutional neural network in which, at each convolution layer, the weights of only one seed convolutional filter are updated during each training iteration. All other convolutional filters are polynomial transformations of the seed filter, or, alternatively, all response maps are polynomial transformations of the response map generated by the seed filter.
Description
BACKGROUND OF THE INVENTION

Applications of deep convolutional neural networks (CNNs) have been overwhelmingly successful in all aspects of perception tasks, ranging from computer vision to speech recognition and understanding, and from biomedical data analysis to quantum physics. Many successful CNN architectures have evolved in the last few years; however, training these networks end-to-end with fully learnable convolutional filters, as is standard practice, is very computationally expensive and is prone to over-fitting due to the large number of parameters. This is because, with a standard CNN, the weights for each convolutional filter in each layer of the network need to be learned for each instance of training data.


As such, it would be desirable to reduce the computational cost of the training process by reducing the number of convolutional filters for which weights must be learned at each training step, without sacrificing the high performance delivered by a standard CNN.


SUMMARY OF THE INVENTION

To address this problem, disclosed herein is the polynomial convolutional neural network (PolyCNN), an approach that reduces the computational complexity of CNNs while performing as well as a standard CNN.


In one embodiment of the invention, at each convolution layer, only one convolutional filter, referred to herein as the seed filter, is needed for learning the weights, and all the other convolutional filters are polynomial transformations of the seed filter. This is referred to as the “early fan-out” embodiment.


In a second embodiment of the invention, the seed filter is used to generate a single response map, which is then transformed into the multiple response maps desired as input to the next layer via polynomial transformations of the response map generated by the seed filter. This is referred to as the “late fan-out” embodiment.


Both the early and late fan-out embodiments allow the PolyCNN to learn only one convolutional filter at each layer, which dramatically reduces the model complexity. Parameter savings of at least 10×, 26×, 50×, and so on can be realized during the learning stage, depending on the spatial dimensions of the convolutional filters (3×3, 5×5, and 7×7 filters, respectively).


While the PolyCNN is efficient during both training and testing, its performance does not suffer, owing to the non-linear polynomial expansion, which translates to richer representational power within the convolution layers. The PolyCNN provides on-par performance with a standard CNN on several well-known visual datasets, such as MNIST, CIFAR-10, SVHN, and ImageNet.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic comparison of a single layer of a CNN in view (A), a single layer of a PolyCNN with early fan-out in view (B), and a single layer of a PolyCNN with late fan-out in view (C).



FIG. 2 shows graphs comparing the performance of a standard CNN and the PolyCNN of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

The polynomial convolutional neural network (PolyCNN) is a weight-learning efficient variant of the traditional convolutional neural network. The core idea of the PolyCNN is that at each convolution layer, only one convolutional filter, referred to as the seed filter, needs to be learned, and the other filters can be augmented by taking point-wise polynomials of the seed filter. The weights of these augmented filters need not be updated during network training. When convolved with the input data, the learnable seed filter and k non-learnable augmented filters result in (k+1) response maps. This embodiment is referred to as early fan-out.


Similarly, one can instead fan out the seed response map generated from the convolution of the input with the seed convolutional filter to create (k+1) response maps by taking point-wise polynomials of the seed response map. This embodiment is referred to as late fan-out.


Early Fan-Out Embodiment (Filter Weights)


At any given layer, given the seed weights w for that layer, many new filter weights are generated via a non-linear transformation f(w) of the seed weights. The convolutional outputs are computed as follows:

y=f(w)*x  (1)
y[i]=Σkx[i−k]f(w[k])  (2)


where xj is the j-th channel of the input image and wij is the j-th channel of the i-th filter.
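
By way of illustration only (this sketch is not from the patent), the early fan-out forward pass can be read as follows in PyTorch, with the point-wise powers 1, 2, and 3 chosen arbitrarily as the polynomial transformations:

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the early fan-out forward pass (illustrative assumptions:
# 2-D convolution, no bias terms, small integer exponents as the polynomials).
def early_fanout_forward(x, seed_weight, exponents=(1, 2, 3)):
    """x: (N, p, H, W) input; seed_weight: (1, p, h, w) learnable seed filter.
    Returns len(exponents) response maps per example."""
    # Point-wise polynomial transformations of the seed filter; only the
    # exponent-1 copy is the learnable seed itself, the rest are derived.
    filters = torch.cat([seed_weight ** e for e in exponents], dim=0)
    return F.conv2d(x, filters, padding=seed_weight.shape[-1] // 2)

x = torch.randn(4, 3, 32, 32)                       # toy input batch
seed = torch.randn(1, 3, 3, 3, requires_grad=True)  # one seed filter
response_maps = early_fanout_forward(x, seed)       # (4, 3, 32, 32)
```

Note that gradients still flow back to the seed filter through the powers, which is what the backward-pass equations below formalize.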


During the forward pass, weights are generated from the seed convolutional kernel and are then convolved with the inputs. During the backward pass, the gradients ∂l/∂w and ∂l/∂x need to be computed as follows:

∂l/∂w[i] = Σj (∂l/∂y[j])·(∂y[j]/∂w[i])  (3)

= Σj (∂l/∂y[j])·(∂y[j]/∂f(w[i]))·(∂f(w[i])/∂w[i])  (4)

= Σj (∂l/∂y[j])·x[j−i]·f′(w[i])  (5)

∂l/∂w = (∂l/∂y ⋆ x) ⊙ f′(w)  (6)

where ⋆ denotes cross-correlation and ⊙ denotes element-wise multiplication.
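
To make the chain rule in equations (3) through (6) concrete, the following 1-D sketch (illustrative only; it relies on PyTorch autograd rather than a hand-written backward pass, and f(w) = w³ is an arbitrary example of a polynomial transformation) checks that the gradient reaches the seed weights through the transformation:

```python
import torch
import torch.nn.functional as F

# 1-D illustration of equations (3)-(6): the loss gradient reaches the seed
# weights w through the point-wise polynomial f(w) = w**3 (hypothetical choice).
torch.manual_seed(0)
x = torch.randn(10)                        # input signal
w = torch.randn(5, requires_grad=True)     # seed weights

fw = w ** 3                                # f(w), the transformed filter
y = F.conv1d(x.view(1, 1, -1), fw.view(1, 1, -1)).view(-1)   # response map
loss = y.sum()
loss.backward()

# Autograd composes dl/dy with dy/df(w) and df(w)/dw = 3 * w**2,
# mirroring equations (4) and (5).
print(w.grad)
```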







For example, if the weights are transformed by the function z[i] = fj(w[i]) = w[i]^j and are then normalized to zero mean and unit norm,

ŵ[i] = (z[i] − (1/n)·Σi z[i]) / (Σi (z[i] − (1/n)·Σi z[i])²)^(1/2)

∂ŵ[i]/∂w[i] = (∂ŵ[i]/∂z[i])·(∂z[i]/∂w[i])  (7)

∂ŵ[i]/∂z[i] = (1 − 1/n) / (Σi (z[i] − (1/n)·Σi z[i])²)^(1/2) − (1 − 1/n)·(z[i] − (1/n)·Σj z[j]) / (Σi (z[i] − (1/n)·Σi z[i])²)^(3/2)  (8)

∂z[i]/∂w[i] = f′(w[i]) = j·w[i]^(j−1)  (9)
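
A small sketch of this normalization step follows (again illustrative; the exponent j = 2 is an arbitrary choice, and autograd stands in for the hand-derived gradients of equations (7) through (9)):

```python
import torch

# Zero-mean, unit-norm normalization of the transformed weights z = w**j,
# as in equation (7); autograd then supplies d(w_hat)/dz and dz/dw.
def normalize(z):
    centered = z - z.mean()
    return centered / centered.pow(2).sum().sqrt()

w = torch.randn(9, requires_grad=True)   # seed weights (flattened)
z = w ** 2                               # z[i] = fj(w[i]) = w[i]**j, with j = 2
w_hat = normalize(z)                     # normalized augmented weights
w_hat.sum().backward()                   # chains equations (8) and (9)
print(w.grad)
```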







The gradient can now be computed with respect to input x as follows:












∂l/∂x[i] = Σj (∂l/∂y[j])·(∂y[j]/∂x[i]) = Σj (∂l/∂y[j])·f(w[j−i])  (10)

∂l/∂x = ∂l/∂y * f(w)  (11)







The resulting response maps from these weights are then combined using 1×1 convolutions into one or more feature maps, which can then be stacked and used as the input for the next layer, where the process repeats.
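
Putting the pieces together, one possible reading of a complete early fan-out layer is sketched below (illustrative only: the module name, exponents, and sizes are hypothetical, and this should not be taken as the patented implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolyConvEarlyFanOut(nn.Module):
    """Sketch of one early fan-out PolyCNN layer: a single learnable seed
    filter, m-1 fixed polynomial copies of it, ReLU, and a learnable 1x1
    convolution that mixes the m feature maps (an interpretation only)."""

    def __init__(self, in_ch, out_ch, kernel=3, m=8):
        super().__init__()
        self.seed = nn.Parameter(torch.randn(1, in_ch, kernel, kernel) * 0.1)
        self.exponents = tuple(range(2, m + 1))            # fixed, non-learnable (hypothetical)
        self.combine = nn.Conv2d(m, out_ch, kernel_size=1) # m*q learnable weights
        self.pad = kernel // 2

    def forward(self, x):
        # The m-1 augmented filters are point-wise powers of the seed and
        # carry no parameters of their own.
        filters = torch.cat([self.seed] + [self.seed ** e for e in self.exponents],
                            dim=0)                         # (m, in_ch, k, k)
        maps = F.conv2d(x, filters, padding=self.pad)      # m response maps
        return self.combine(F.relu(maps))                  # 1x1 mix into out_ch maps

layer = PolyConvEarlyFanOut(in_ch=3, out_ch=16, kernel=3, m=8)
out = layer(torch.randn(2, 3, 32, 32))                     # (2, 16, 32, 32)
```

In this reading, the only learnable parameters are the seed filter (p·h·w weights) and the 1×1 combination (m·q weights), which matches the parameter count used later in equation (13).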


Late Fan-Out Embodiment (Response Maps)


At any given layer, the new response maps can be computed from the seed response map via non-linear transformations of the seed response map. The forward pass for this layer involves the application of the non-linear function z[i] = fj(x[i]). The backward propagation can be computed as:












∂l/∂x[i] = (∂l/∂z[i])·(∂z[i]/∂x[i]) = (∂l/∂z[i])·(∂fj(x[i])/∂x[i])  (12)







For example, if z[i] = fj(x[i]) = x[i]^j, then ∂l/∂x[i] = (∂l/∂z[i])·(j·x[i]^(j−1)).






To prevent the gradients from vanishing or exploding, it is important to normalize the response maps. Batch normalization may be used.
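
For comparison, a corresponding sketch of a late fan-out layer follows (same caveats: hypothetical names, exponents, and sizes; batch normalization is applied to the expanded maps, as suggested above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolyConvLateFanOut(nn.Module):
    """Sketch of one late fan-out PolyCNN layer: the single seed response map
    is expanded into m maps by point-wise powers, batch-normalized so the
    higher powers neither vanish nor explode, then mixed by a 1x1 convolution
    (an interpretation only)."""

    def __init__(self, in_ch, out_ch, kernel=3, m=8):
        super().__init__()
        self.seed = nn.Parameter(torch.randn(1, in_ch, kernel, kernel) * 0.1)
        self.exponents = tuple(range(1, m + 1))   # hypothetical polynomial orders
        self.bn = nn.BatchNorm2d(m)
        self.combine = nn.Conv2d(m, out_ch, kernel_size=1)
        self.pad = kernel // 2

    def forward(self, x):
        seed_map = F.conv2d(x, self.seed, padding=self.pad)        # (N, 1, H, W)
        # Late fan-out: z_j = fj(seed response) = (seed response)**j.
        maps = torch.cat([seed_map ** e for e in self.exponents], dim=1)
        return self.combine(F.relu(self.bn(maps)))

layer = PolyConvLateFanOut(in_ch=3, out_ch=16, kernel=3, m=8)
out = layer(torch.randn(2, 3, 32, 32))                             # (2, 16, 32, 32)
```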


Design of the Basic PolyCNN Module


The core idea of the PolyCNN (assuming that the convolutional filters do not have bias terms) is to restrict the network to learning only one convolutional filter at each layer and, through polynomial transformations, to augment either the convolutional filters or the response maps. The augmented filters do not need to be updated or learned during network back-propagation. As shown in view (B) of FIG. 1, the early fan-out module of the PolyCNN starts with just one learnable convolutional filter, which is referred to as the seed filter. If m filters in total are desired for one layer, the remaining m−1 filters are non-learnable and are polynomial transformations of the seed filter. The input image xl is filtered by these convolutional filters to produce m response maps, which are then passed through a non-linear activation function, such as a rectified linear unit (ReLU), to become m feature maps. Optionally, these m feature maps can be further linearly combined using m learnable weights, which is essentially another convolution operation with filters of size 1×1.


A comparison of a standard CNN, an early fan-out embodiment of the PolyCNN, and a late fan-out embodiment of the PolyCNN is shown in FIG. 1, views (A)-(C) respectively. Compared to the standard CNN module under the same structure (with 1×1 convolutions), the number of learnable parameters is significantly smaller in either embodiment of the PolyCNN.


Assume that the numbers of input and output channels are p and q. Therefore, the size of each 3D filter in both the CNN and the PolyCNN is p·h·w, where h and w are the spatial dimensions of the filter, and there are m such filters. The 1×1 convolutions act on the m filters and create the q-channel output.


For standard CNN, the number of learnable weights is p·h·w·m+m·q. For PolyCNN, the number of learnable weights is p·h·w·1+m·q. For simplicity, assume p=q, which is usually the case for a multi-layer CNN architecture. q is the number of intermediate channels because it is both the number of channels from the previous layer and the number of channels for the next layer. Then we have the parameter saving ratio:









τ = (# of param. in CNN) / (# of param. in PolyCNN) = (p·h·w·m + m·q) / (p·h·w·1 + m·q) = (h·w·m + m) / (h·w + m)  (13)








and when the spatial filter size h = w = 3 and the number of convolutional filters desired for each layer m ≫ 3², the parameter saving ratio τ = 10m/(m + 9) ≈ 10.





Similarly, for spatial filter size h = w = 5 and m ≫ 5², the parameter saving ratio τ = 26m/(m + 25) ≈ 26,






and for spatial filter size h = w = 7 and m ≫ 7², the parameter saving ratio τ = 50m/(m + 49) ≈ 50.





If the 1×1 convolutions are not included for both the standard CNN and the PolyCNN, thus making m = q = p, it can be verified that the parameter saving ratio τ becomes m. Numerically, the PolyCNN saves around 10×, 26×, and 50× parameters during learning for 3×3, 5×5, and 7×7 convolutional filters, respectively. The aforementioned calculation also applies to the late fan-out embodiment of the PolyCNN.
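
As a quick numerical check of equation (13), the savings ratio can be evaluated directly (the layer size m = 512 and the assumption p = q are hypothetical choices):

```python
# Worked check of the parameter saving ratio in equation (13), with p = q.
def saving_ratio(h, w, m):
    return (h * w * m + m) / (h * w + m)

m = 512                                   # filters per layer, m >> h*w
for k in (3, 5, 7):
    print(f"{k}x{k} filters: tau = {saving_ratio(k, k, m):.1f}")
# Prints roughly 9.8, 24.8 and 45.6, approaching the limits 10, 26 and 50.
```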


Training of the PolyCNN


Training of the PolyCNN is straightforward: backpropagation is the same for the learnable weights and for the augmented weights that are not updated. Gradients are propagated through the polynomially augmented filters just as they would be through learnable filters. This is similar to propagating gradients through layers without learnable parameters (e.g., ReLU, max pooling, etc.). However, the gradients with respect to the augmented filters are neither computed nor used for updates during the training process.


The 3D non-learnable filter banks of size p×h×w×(m−1) (assuming a total of m filters in each layer) in the PolyCNN can be generated by taking polynomial transformations of the seed filter, raising it to exponents that can either be integers or fractional values randomly sampled from a distribution.
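
One way this generation step could look is sketched below (illustrative only: the uniform exponent range and the signed-power trick for keeping fractional powers of negative weights real-valued are assumptions, not taken from the patent):

```python
import torch

# Sketch: build the p x h x w x (m-1) non-learnable filter bank by raising the
# seed filter to randomly sampled fractional exponents.
def augment_filter_bank(seed, m):
    exps = torch.empty(m - 1).uniform_(1.0, 3.0)     # random exponents (assumed range)
    # |seed|**e * sign(seed) keeps fractional powers of negative weights real.
    return torch.stack([seed.abs() ** e * seed.sign() for e in exps])

seed = torch.randn(3, 5, 5)                 # p x h x w seed filter
bank = augment_filter_bank(seed, m=16)      # (15, 3, 5, 5); never updated in training
```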


The advantages of the proposed PolyCNN over a standard CNN are now discussed from several aspects.


Computational: The parametrization of the PolyCNN layer reduces the number of learnable parameters by a factor of 10× to 50× during training and inference. The lower memory requirements enable learning of much deeper neural networks, thereby allowing better representations to be learned through deeper architectures. In addition, the PolyCNN enables learning of deep CNNs on resource-constrained embedded systems.


Statistical: The PolyCNN, being a simpler model with fewer learnable parameters compared to a CNN, can effectively regularize the learning process and prevent over-fitting. High-capacity models such as deep CNNs with regular convolution layers typically consist of a very large number of parameters. Methods such as Dropout, DropConnect, and Maxout have been introduced to regularize the fully connected layers of a network during training to avoid over-fitting. As opposed to regularizing the fully connected layers of a network, the PolyCNN directly regularizes the convolution layers.


Sample Complexity: The lower model complexity of the PolyCNN makes it an attractive option for learning with low sample complexity. To demonstrate the statistical efficiency of the PolyCNN, a benchmark was performed on a subset of the CIFAR-10 dataset. The training subset randomly picks 25% of the images (5000×0.25=1250) per class while keeping the testing set intact. The best-performing architecture on CIFAR-10 was chosen for both the CNN and the PolyCNN. The results, shown in view (A) of FIG. 2, demonstrate that the PolyCNN trains faster and is less prone to over-fitting on the training data.


To provide an extended evaluation, additional face recognition experiments were performed on the FRGC v2.0 dataset under a limited sample complexity setting. The number of images in each class ranges from 6 to 132 (51.6 on average). While there are 466 classes in total, the number of randomly selected classes was increased (10, 50, and 100) with a 60-40 train/test split. Across the numbers of classes, the network parameters remain the same except for the classification fully connected layer at the end. Results are shown in views (B)-(D) of FIG. 2: (1) the PolyCNN converges faster than the CNN, especially on small datasets; and (2) the PolyCNN outperforms the CNN on this task. Lower model complexity helps the PolyCNN prevent over-fitting, especially on small to medium-sized datasets.


The invention proposes the use of the PolyCNN as an alternative to standard convolutional neural networks. The PolyCNN module enjoys significant savings in the number of parameters to be learned at training, at least 10× to 50×. The PolyCNN also has much lower model complexity compared to a traditional CNN with standard convolution layers. The proposed PolyCNN demonstrated performance on par with state-of-the-art architectures on four image recognition datasets.

Claims
  • 1. A method for training a neural network comprising, for each convolution layer in the neural network: receiving a gradient function; adjusting a seed convolutional filter based on the gradient function; generating a plurality of augmented convolutional filters, wherein each weight of each augmented convolutional filter is generated by applying a polynomial to one or more weights of the seed convolutional filter; receiving an input; generating a plurality of response maps based on convolutions of the input with the seed convolutional filter and each of the plurality of augmented convolutional filters; and generating a feature map based on the plurality of response maps.
  • 2. The method of claim 1 wherein m response maps are generated based on m−1 augmented convolutional filters and the seed convolutional filter.
  • 3. The method of claim 2 wherein generating a feature map further comprises: applying a non-linear function to each of the plurality of response maps to generate a plurality of feature maps; and applying a vector of learnable coefficients to the plurality of feature maps to generate a single feature map.
  • 4. The method of claim 3 wherein: m feature maps are generated from the m response maps; and the vector contains m learnable coefficients.
  • 5. The method of claim 4 further comprising: adjusting the m learnable coefficients based on the gradient function.
  • 6. The method of claim 4 wherein the single feature map is generated at layer l of the neural network, further comprising: using the single feature map as the input for layer l+1 of the neural network.
  • 7. The method of claim 1 wherein generating a plurality of augmented convolutional filters comprises raising each element of the seed convolutional filter to a different exponent.
  • 8. The method of claim 7 wherein the different exponents are integer exponents or fractional exponents randomly sampled from a distribution.
  • 9. The method of claim 1 wherein generating a plurality of augmented convolutional filters comprises applying a polynomial function to each element of the seed convolutional filter.
RELATED APPLICATIONS

This application is a national phase filing under 35 U.S.C. § 371 claiming the benefit of and priority to International Patent Application No. PCT/US2019/029619, filed on Apr. 29, 2019, which claims the benefit of U.S. Provisional Patent Application No. 62/762,292, filed Apr. 27, 2018. The entire contents of these applications are incorporated herein by reference.

GOVERNMENT RIGHTS

This invention was made with government support under contract 20131JCXK005 awarded by the Department of Justice and contract N6833516C0177 awarded by NavAir. The government has certain rights in the invention.

PCT Information
Filing Document Filing Date Country Kind
PCT/US2019/029619 4/29/2019 WO
Publishing Document Publishing Date Country Kind
WO2019/210295 10/31/2019 WO A
US Referenced Citations (13)
Number Name Date Kind
20160019434 Caldwell Jan 2016 A1
20160196480 Heifets et al. Jul 2016 A1
20160321784 Annapureddy Nov 2016 A1
20160350645 Brothers Dec 2016 A1
20160350648 Gilad-Bachrach et al. Dec 2016 A1
20170140270 Mnih May 2017 A1
20170200078 Bichler Jul 2017 A1
20170262737 Rabinovich et al. Sep 2017 A1
20180129893 Son May 2018 A1
20180165571 Tanabe Jun 2018 A1
20190050716 Barkan Feb 2019 A1
20190065817 Mesmakhosroshahi Feb 2019 A1
20190197693 Zagaynov Jun 2019 A1
Non-Patent Literature Citations (5)
Entry
Juefei-Xu, “Design of Weight-Learning Efficient Convolutional Modules in Deep Convolutional Neural Networks and its Application to Large-Scale Visual Recognition Tasks”, Data Analysis Project (DAP). In: Machine Learning Department, Carnegie Mellon University, May 3, 2017. Retrieved on Jun. 23, 2019 from URL: <https://www.ml.cmu.edu/research/dap-papers/S17/dap-xu-felix-juefei.pdf>, 15 pages.
Philipp et al., “Nonparametric Neural Networks”, Published as a Conference Paper at ICLR 2017. Retrieved from URL: <https://www.cs.cmu.edu/-jgc/publication/Nonparametric%20Neural%20Networks.pdf>. Retrieved on Jun. 23, 2019, 31 pages.
International Search Report and Written Opinion of International Patent Application No. PCT/US2019/029619 dated Jul. 10, 2019, 7 pages.
International Search Report and Written Opinion for International Patent Application No. PCT/US2019/029635, dated Jul. 10, 2019, 6 pages.
Huang et al., “Gradient Feature Extraction for Classification-based Face Detection” Pattern Recognition, vol. 36, 2003, pp. 2501-2511.
Related Publications (1)
Number Date Country
20210089844 A1 Mar 2021 US
Provisional Applications (1)
Number Date Country
62762292 Apr 2018 US