APPARATUS AND METHOD FOR MACHINE LEARNING BASED ON MONOTONICALLY INCREASING QUANTIZATION RESOLUTION

Information

  • Patent Application
  • Publication Number
    20210365838
  • Date Filed
    May 20, 2021
  • Date Published
    November 25, 2021
  • CPC
    • G06N20/00
  • International Classifications
    • G06N20/00
Abstract
Disclosed herein are an apparatus and method for machine learning based on monotonically increasing quantization resolution. The method, in which a quantization coefficient is defined as a monotonically increasing function of time, includes initially setting the monotonically increasing function of time, performing machine learning based on a quantized learning equation using the quantization coefficient defined by the monotonically increasing function of time, determining whether the quantization coefficient satisfies a predetermined condition after increasing the time, newly setting the monotonically increasing function of time when the quantization coefficient satisfies the predetermined condition, and updating the quantization coefficient using the newly set monotonically increasing function of time. Here, performing the machine learning, determining whether the quantization coefficient satisfies the predetermined condition, newly setting the monotonically increasing function of time, and updating the quantization coefficient may be repeatedly performed.
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of Korean Patent Application No. 10-2020-0061677, filed May 22, 2020, and No. 10-2021-0057783, filed May 4, 2021, which are hereby incorporated by reference in their entireties into this application.


BACKGROUND OF THE INVENTION
1. Technical Field

The present invention relates to machine learning and signal processing.


2. Description of the Related Art

Quantization is a technology that has long been researched in the signal-processing field. With regard to machine learning, it has been studied both for implementing large-scale machine-learning networks and for compressing machine-learning results to make them more lightweight.


In particular, research on adopting quantization in learning itself and using it to implement embedded systems or dedicated neural-network hardware is currently underway. Quantized learning yields satisfactory results in some fields, such as image recognition, but quantization is generally known not to exhibit good optimization performance due to the presence of quantization errors.


SUMMARY OF THE INVENTION

An object of an embodiment is to minimize quantization errors and implement an optimization algorithm having good performance in lightweight hardware in machine-learning and nonlinear-signal-processing fields in which quantization is used.


A machine-learning method based on monotonically increasing quantization resolution, in which a quantization coefficient is defined as a monotonically increasing function of time, according to an embodiment may include initially setting the monotonically increasing function of time, performing machine learning based on a quantized learning equation using the quantization coefficient defined by the monotonically increasing function of time, determining whether the quantization coefficient satisfies a predetermined condition after increasing the time, newly setting the monotonically increasing function of time when the quantization coefficient satisfies the predetermined condition, and updating the quantization coefficient based on the newly set monotonically increasing function of time. Here, performing the machine learning, determining whether the quantization coefficient satisfies the predetermined condition, newly setting the monotonically increasing function of time, and updating the quantization coefficient may be repeatedly performed.


Here, the quantization coefficient may be defined as a function varying over time as shown in Equation (32) below:











$$\sigma(t)=\frac{\gamma}{24}\cdot Q_p^{-2}(t),\qquad \gamma\in\mathbb{R}\tag{32}$$







Here, $Q_p$ may be defined as shown in Equation (33) below:






$$Q_p=\eta\cdot b^{\,n},\qquad \eta\in\mathbb{Z}^{+},\ \eta<b\tag{33}$$

where the base $b$ satisfies $b\in\mathbb{Z}^{+}$, $b\ge 2$.


Here, the quantized learning equation may be a learning equation for acquiring quantized weight vectors for all times, as defined in Equation (34) below:













$$\begin{aligned}
w_{t+1}^{Q}&=w_{t}^{Q}-\left(\frac{\alpha_t}{Q_p^{2}}\cdot Q_p\,\nabla f(w_t)+\vec{\epsilon}_t\,Q_p^{-1}\right)\\
&=w_{t}^{Q}-\frac{\alpha_t}{Q_p}\cdot\frac{1}{Q_p}\left[\,Q_p\,\nabla f(w_t)\,\right],\qquad \alpha_t\in\mathbb{Q}(0,\,Q_p)\\
&=w_{t}^{Q}-\frac{\alpha_t}{Q_p}\,\nabla^{Q}f(w_t)
\end{aligned}\tag{34}$$







Here, the quantized learning equation may be a learning equation based on a binary number system, as defined in Equation (35) below:






$$w_{t+1}^{Q}=w_{t}^{Q}-2^{-(n-k)}\,\nabla^{Q}f(w_t),\qquad n,k\in\mathbb{Z}^{+},\ n>k\tag{35}$$


Here, the quantized learning equation may be a probability differential learning equation defined in Equation (36) below:






$$dW_s=-\lambda_t\,\nabla f(W_s)\,ds+\sqrt{2\sigma(s)}\cdot d\vec{B}_s\tag{36}$$


Here, the quantization coefficient may be defined using h(t), which is a monotonically increasing function of time, as shown in Equation (37) below:






$$Q_p=\eta\cdot b^{\,h(t)},\quad\text{such that } h(t)\uparrow\infty\ \text{as}\ t\to\infty\tag{37}$$


Here, initially setting the monotonically increasing function of time may be configured to set the monotonically increasing function so as to satisfy Equation (38) below:












$$\frac{C}{\ln 2}\le\sigma(t)\Big|_{t=0}=\frac{\gamma}{24}\cdot\left(\eta\cdot b^{\bar{h}(0)}\right)^{-1}\le\frac{C_1}{\ln 2}=T(t)\;\Rightarrow\;\log_b\!\left(\frac{\gamma\ln 2}{24\,\eta}\,C_1^{-1}\right)\le\bar{h}(0)\le\log_b\!\left(\frac{\gamma\ln 2}{24\,\eta}\,C^{-1}\right)\tag{38}$$







Here, when determining whether the quantization coefficient satisfies the predetermined condition is performed, the predetermined condition may be Equation (39) below:










$$\sigma(t)\ge\frac{C}{\log(t+2)}\tag{39}$$







Here, when newly setting the monotonically increasing function of time is performed, the monotonically increasing function of time may be defined as Equation (40) below:











$$\bar{h}(t_1)=\left\lfloor\log_b\!\left(\frac{\gamma\ln 2}{24\,\eta}\,C^{-1}\right)+0.5\right\rfloor\tag{40}$$







A machine-learning apparatus based on monotonically increasing quantization resolution according to an embodiment may include memory in which at least one program is recorded and a processor for executing the program. A quantization coefficient may be defined as a monotonically increasing function of time, and the program may perform initially setting the monotonically increasing function of time, performing machine learning based on a quantized learning equation using the quantization coefficient defined by the monotonically increasing function of time, determining whether the quantization coefficient satisfies a predetermined condition after increasing the time, newly setting the monotonically increasing function of time when the quantization coefficient satisfies the predetermined condition, and updating the quantization coefficient based on the newly set monotonically increasing function of time. Here, performing the machine learning, determining whether the quantization coefficient satisfies the predetermined condition, newly setting the monotonically increasing function of time, and updating the quantization coefficient may be repeatedly performed.





BRIEF DESCRIPTION OF THE DRAWINGS

The above and other objects, features, and advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:



FIG. 1 and FIG. 2 are views for explaining a method for machine learning having monotonically increasing quantization resolution;



FIG. 3 is a flowchart for explaining a machine-learning method based on monotonically increasing quantization resolution according to an embodiment;



FIG. 4 is a hardware concept diagram according to an embodiment; and



FIG. 5 is a view illustrating a computer system configuration according to an embodiment.





DESCRIPTION OF THE PREFERRED EMBODIMENTS

The advantages and features of the present invention and methods of achieving the same will be apparent from the exemplary embodiments to be described below in more detail with reference to the accompanying drawings. However, it should be noted that the present invention is not limited to the following exemplary embodiments, and may be implemented in various forms. Accordingly, the exemplary embodiments are provided only to disclose the present invention and to let those skilled in the art know the category of the present invention, and the present invention is to be defined based only on the claims. The same reference numerals or the same reference designators denote the same elements throughout the specification.


It will be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements are not intended to be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element discussed below could be referred to as a second element without departing from the technical spirit of the present invention.


The terms used herein are for the purpose of describing particular embodiments only, and are not intended to limit the present invention. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


Unless differently defined, all terms used herein, including technical or scientific terms, have the same meanings as terms generally understood by those skilled in the art to which the present invention pertains. Terms identical to those defined in generally used dictionaries should be interpreted as having meanings identical to contextual meanings of the related art, and are not to be interpreted as having ideal or excessively formal meanings unless they are definitively defined in the present specification.


As is generally known, when quantization resolution is sufficiently high and well defined, quantization errors can be treated as white noise. Accordingly, if quantization errors can be modeled as white noise or an independent and identically distributed (i.i.d.) process, the variance of the quantization errors may be made to monotonically decrease over time by setting the quantization resolution to monotonically increase over time.


When quantization resolution is given as a monotonically increasing function of time, quantization errors become a monotonically decreasing function of time, so a global optimization algorithm for a non-convex objective function can be implemented, and this is the same dynamics as a stochastic global optimization algorithm. Also, because of the use of quantization, a machine-learning algorithm that enables global optimization may be implemented even in systems having low computing power, such as embedded systems.


Accordingly, in an embodiment, global optimization is achieved in such a way that, when quantization to integers or fixed-point numbers, applied to an optimization algorithm, is performed, quantization resolution monotonically increases over time.


Hereinafter, a machine-learning apparatus and method having monotonically increasing quantization resolution according to an embodiment will be described in detail with reference to FIGS. 1 to 5.


In the machine-learning apparatus and method having monotonically increasing quantization resolution according to an embodiment, first, Definitions 1 to 3 below are required.


Definition 1

The objective function to be optimized may be defined as follows.


For a weight vector wt∈Rn and a data vector xk∈Rn in an epoch unit t, the objective function ƒ: Rn→R is as shown in Equation (1) below:











$$f(w_t)\triangleq\frac{1}{N}\sum_{k=1}^{N}\bar{f}(w_t,x_k)=\frac{1}{N}\sum_{l=1}^{L}\sum_{k=1}^{B_l}\bar{f}(w_t,x_k)\tag{1}$$







In Equation (1), $\bar{f}:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}$ denotes the loss function for the weight vector and the data vector, $N$ denotes the number of all data vectors, $L$ denotes the number of mini-batches, and $B_l$ denotes the number of pieces of data included in the $l$-th mini-batch.


Definition 2

For an arbitrary $x\in\mathbb{R}$, truncation of the fractional part is defined as shown in Equation (2) below:

$$x^{Q}\equiv\lfloor x\rfloor,\qquad x=x^{Q}+\epsilon,\quad \epsilon\in\mathbb{R}[0,1)\tag{2}$$

In Equation (2), $x^{Q}\in\mathbb{Z}$ is the whole part of the real number $x$.


Definition 3

The greatest integer function, or Gauss bracket $[\cdot]$, is defined as shown in Equation (3) below:

$$[x]\equiv\lfloor x+0.5\rfloor=x+0.5-\bar{\epsilon}\triangleq x+\epsilon\tag{3}$$

where $\epsilon\in\mathbb{R}(-0.5,\,0.5]$ is a round-off error.
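As a minimal numerical sketch of Definitions 2 and 3 (the function names are illustrative, not part of the specification), the truncation and Gauss-bracket operations may be expressed as follows:

```python
import math

def truncate(x: float) -> int:
    """Definition 2: whole part x^Q = floor(x); the fractional part eps lies in [0, 1)."""
    return math.floor(x)

def gauss_bracket(x: float) -> int:
    """Definition 3: greatest integer function [x] = floor(x + 0.5);
    the round-off error eps = [x] - x lies in (-0.5, 0.5]."""
    return math.floor(x + 0.5)

x = 3.7
print(truncate(x), x - truncate(x))            # 3 0.7   (eps in [0, 1))
print(gauss_bracket(x), gauss_bracket(x) - x)  # 4 0.3   (eps in (-0.5, 0.5])
```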


In an embodiment, the objective function satisfies the following assumption for convergence and feature analysis. In particular, the assumption is always satisfied when an activation function having maximum and minimum limits, based on Boltzmann statistics or Fermion statistics, is used in machine learning.


Assumption 1


For an arbitrary vector $x\in\mathbb{R}^{n}$ with $x\in B_o(x^{*},\rho)$, positive numbers $0<m<M<\infty$ satisfying the following equation are present for the objective function $f:\mathbb{R}^{n}\to\mathbb{R}$ with $f(x)\in C^{2}$:

$$m\,\|v\|^{2}\le\left\langle v,\ \frac{\partial^{2}f}{\partial x^{2}}(x)\,v\right\rangle\le M\,\|v\|^{2},\qquad\forall\,v\in\mathbb{R}^{n}\tag{4}$$







In Equation (4), $B_o(x^{*},\rho)$ is an open set that satisfies the following for a positive number $\rho\in\mathbb{R},\ \rho>0$:

$$B_o(x^{*},\rho)=\{\,x\mid\|x-x^{*}\|<\rho\,\}\tag{5}$$


Based on the definitions and assumptions described above, a machine-learning apparatus and method having monotonically increasing quantization resolution according to an embodiment will be described in detail.


In most existing studies on machine learning, quantization is defined in the form of multiplying a sign function of a variable $x$ by a quantization function, based on appropriate conditions for a quantization coefficient $Q_p$ ($Q_p\in\mathbb{Q},\ Q_p>0$), as shown in Equation (6) below:










$$x^{Q}=\begin{cases}0 & C(x,Q_p)<\delta_1\\[2pt]\operatorname{sign}(x) & \delta_1\le C(x,Q_p)<\delta_2\\[2pt]g(x,Q_p)\cdot\operatorname{sign}(x) & \text{otherwise}\end{cases}\tag{6}$$







In existing studies, researchers have proposed definitions and applications of various forms of quantization coefficients in order to improve the performance of their quantization techniques. Most such techniques are oriented toward increasing the accuracy of the quantization operation by decreasing quantization errors. That is, the quantization step value varies depending on the position of $x$, as shown in Equation (6), so that the quantization resolution changes in the spatial domain; this methodology generally exhibits good performance.


If defining quantization errors differently in the spatial domain can yield a satisfactory result, as the existing studies show, then defining quantization errors differently in the time domain may also yield a satisfactory result; the present invention is based on this idea.


To this end, a form of quantization more basic than Equation (6), though derivable from it, must be defined. Accordingly, in an embodiment, a basic form of quantization may be defined using the above-described Definition 2 and Definition 3, as shown in Equation (7) below:










$$x^{Q}\triangleq\frac{1}{Q_p}\left\lfloor Q_p\cdot\left(x+0.5\cdot Q_p^{-1}\right)\right\rfloor=\frac{1}{Q_p}\left[\,Q_p\cdot x\,\right]\tag{7}$$
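A minimal sketch of the uniform quantizer in Equation (7), assuming scalar or NumPy-array inputs (the function name and the use of NumPy are assumptions for illustration):

```python
import numpy as np

def quantize(x, Qp):
    """Eq. (7): x^Q = (1/Qp) * floor(Qp * (x + 0.5/Qp)) = (1/Qp) * [Qp * x].

    The output lies on the uniform grid {k / Qp : k integer}, so the
    round-off error x^Q - x is at most 0.5 / Qp in magnitude.
    """
    return np.floor(Qp * np.asarray(x) + 0.5) / Qp

x = np.array([0.337, -1.251, 2.5])
print(quantize(x, Qp=2**4))   # coarse grid, step 1/16
print(quantize(x, Qp=2**8))   # finer grid, smaller quantization error
```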







Based on Equation (7), an equation for the quantization error may be defined as shown in Equation (8) below:










$$x^{Q}=\frac{1}{Q_p}\left\lfloor Q_p\cdot\left(x+0.5\cdot Q_p^{-1}\right)\right\rfloor=\frac{1}{Q_p}\left(Q_p\cdot x+\epsilon\right)=x+\epsilon\,Q_p^{-1}\tag{8}$$







According to an embodiment, when the fixed quantization step $Q_p$ in Equation (8) is given as a function increasing with time, a quantization error that monotonically decreases over time is readily obtained.


Also, it has been proved that if quantization errors are asymptotically pairwise independent and have uniform distribution in a quantization error range, the quantization errors are white noise.


It is intuitively obvious that, in order for quantization errors to have a uniform distribution, the quantization must be uniform quantization. Accordingly, an embodiment assumes only uniform quantization having identical resolution at the same $t$, without changing the quantization resolution spatially.


Also, because a binary number system is generally used in engineering, the quantization parameter Qp is defined as shown in Equation (9) below in order to support the binary number system.






$$Q_p=\eta\cdot b^{\,n},\qquad \eta\in\mathbb{Z}^{+},\ \eta<b\tag{9}$$

where the base $b$ satisfies $b\in\mathbb{Z}^{+}$, $b\ge 2$.


Based on the above-described assumption, if the quantization of $x$ is uniform quantization according to the quantization parameter defined by Equations (7) and (9), the quantization error $\epsilon_{Q_p(t)}=x^{Q}-x$ is regarded as white noise in the present invention.


In order to apply this to general machine learning, it is assumed that the white noise described by Equation (10) is defined for an $n$-dimensional weight vector $w_t\in\mathbb{R}^{n}$.





$$\vec{\epsilon}_{Q_p}=x^{Q}-x=\{\epsilon_0,\epsilon_1,\ldots,\epsilon_{n-1}\}\in\mathbb{R}^{n}\tag{10}$$


Based on the above-described Definition 1, a general gradient-based learning equation may be as shown in Equation (11) below:






$$w_{t+1}=w_t-\lambda_t\,\nabla f(w_t)\tag{11}$$


In Equation (11), $\lambda_t\in\mathbb{R}(0,1)$ is a learning rate that satisfies $\lambda_t=\arg\min_{\lambda\in\mathbb{R}(0,1)}f\!\left(w_t-\lambda\,\nabla f(w_t)\right)$, and $w_t$ is a weight vector satisfying $w_t\in\mathbb{R}^{n}$.


Here, when the weight vectors wt and wt+1 are assumed to be quantized, the learning equation in Equation (11) may be updated as shown in Equation (12) below:






$$w_{t+1}^{Q}=\left(w_t^{Q}-\lambda_t\,\nabla f(w_t)\right)^{Q}=w_t^{Q}-\left(\lambda_t\,\nabla f(w_t)\right)^{Q}\tag{12}$$


When $g(x,t)\equiv\lambda_t\,\nabla f(x)$ is substituted into Equation (12) and quantized based on Equation (7), Equation (13) may be derived.











$$g(x)^{Q}=\frac{1}{Q_p}\left\lfloor Q_p\!\left(g(x)+0.5\,Q_p^{-1}\right)\right\rfloor=\frac{1}{Q_p}\cdot Q_p\,g(x)+\vec{\epsilon}_t\,Q_p^{-1}\tag{13}$$







In Equation (13), $\vec{\epsilon}_t\in\mathbb{R}^{n}$ is a vector-valued quantization error; each of its components has the error defined in Definition 3, and the probability distributions of the components are independent.


If $\lambda_t=\alpha_t\,Q_p^{-1}$ holds for some rational number $\alpha_t\in\mathbb{Q}(0,Q_p)$, then $g(x)$ factorizes as $g(x)=\alpha_t\,Q_p^{-1}\,h(x)$, which may be represented as shown in Equation (14) below:











$$g(x)^{Q}=\frac{\alpha_t}{Q_p^{2}}\cdot Q_p\,h(x)+\vec{\epsilon}_t\,Q_p^{-1}\tag{14}$$







When Equation (14) is substituted into Equation (12) after h(x) in Equation (14) is changed to ∇ƒ(wt), the following quantized learning equation shown in Equation (15) may be acquired:













$$\begin{aligned}
w_{t+1}^{Q}&=w_{t}^{Q}-\left(\frac{\alpha_t}{Q_p^{2}}\cdot Q_p\,\nabla f(w_t)+\vec{\epsilon}_t\,Q_p^{-1}\right)\\
&=w_{t}^{Q}-\frac{\alpha_t}{Q_p}\cdot\frac{1}{Q_p}\left[\,Q_p\,\nabla f(w_t)\,\right],\qquad \alpha_t\in\mathbb{Q}(0,\,Q_p)\\
&=w_{t}^{Q}-\frac{\alpha_t}{Q_p}\,\nabla^{Q}f(w_t)
\end{aligned}\tag{15}$$







Consequently, Equation (15), a learning equation that yields quantized weight vectors for all steps $t$, is obtained through mathematical induction in an embodiment.
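As a floating-point simulation of the update in Equation (15) (a sketch only; `quantized_step`, the toy objective, and the constants are illustrative assumptions, not part of the specification):

```python
import numpy as np

def quantize(x, Qp):
    # Eq. (7): uniform quantization onto the grid {k / Qp : k integer}.
    return np.floor(Qp * np.asarray(x) + 0.5) / Qp

def quantized_step(wQ, grad_f, alpha_t, Qp):
    """One step of Eq. (15): w_{t+1}^Q = w_t^Q - (alpha_t / Qp) * grad^Q f(w_t),
    where grad^Q f(w_t) = (1/Qp) * [Qp * grad f(w_t)] is the quantized gradient."""
    gradQ = quantize(grad_f(wQ), Qp)
    return wQ - (alpha_t / Qp) * gradQ

# Toy usage: f(w) = ||w||^2, so grad f(w) = 2w; alpha_t / Qp plays the role of lambda_t.
w = quantize([1.7, -0.9], Qp=2**6)
for _ in range(50):
    w = quantized_step(w, lambda v: 2.0 * v, alpha_t=8.0, Qp=2**6)
print(w)   # approaches the minimum at the origin, up to quantization error
```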


In consideration of general hardware based on binary numbers, $b$ and $\eta$ are set to $b=2$ and $\eta=1$ in Equation (9), so $\alpha_t=2^{k},\ k<n$. Accordingly, $Q_p=2^{n}$ is satisfied, and the quantized learning equation is simplified as shown in Equation (16) below:






$$w_{t+1}^{Q}=w_{t}^{Q}-2^{-(n-k)}\,\nabla^{Q}f(w_t),\qquad n,k\in\mathbb{Z}^{+},\ n>k\tag{16}$$


Equation (16) shows that the learning equation in machine learning can be simplified to a right-shift operation performed on the quantized gradient $\nabla^{Q}f(w_t)$.
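A sketch of Equation (16) in pure integer arithmetic, under the assumption that weights and quantized gradients are stored as fixed-point integers scaled by $Q_p=2^{n}$ (the scaling convention and names are illustrative):

```python
def shift_step(w_fixed: int, gradQ_fixed: int, n: int, k: int) -> int:
    """Eq. (16): w_{t+1}^Q = w_t^Q - 2^{-(n-k)} * grad^Q f(w_t), with n > k.

    With all values stored as integers scaled by Qp = 2^n, multiplying by
    2^{-(n-k)} is an arithmetic right shift by (n - k) bits; no floating-point
    unit is needed.
    """
    assert n > k >= 0
    return w_fixed - (gradQ_fixed >> (n - k))

# Example: n = 8 (Qp = 256), k = 3; weight 1.5 and gradient 2.0 in fixed point.
print(shift_step(384, 512, n=8, k=3))   # 384 - (512 >> 5) = 368, i.e. 1.4375
```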


As appears in Equation (16), the most extreme form of quantization is obtained with $k=n-1$, in which case the quantized gradient reduces to a single sign bit per component. Here, when $\|\delta_2-\delta_1\|=Q_p$ and $\delta_1=Q_p/2$, Equation (6) may be regarded as a quantization system that is uniformly quantized to $Q_p$.


An embodiment is a quantization method configured to change Qp over time, rather than spatial quantization.


Assuming that each component of {right arrow over (ϵ)}t∈Rn in Equation (14) is defined like the round-off error of Definition 3 and that quantization errors are uniformly distributed, the variance of the quantization errors may be as shown in Equation (17) below:













$$\forall\,\epsilon_t\in\mathbb{R}:\ \mathbb{E}\!\left[\epsilon_t^{2}\,Q_p^{-2}\right]=\frac{1}{12\cdot Q_p^{2}},\qquad
\forall\,\vec{\epsilon}_t\in\mathbb{R}^{n}:\ \mathbb{E}\!\left[Q_p^{-2}\,\big\|\vec{\epsilon}_t\big\|^{2}\right]=\mathbb{E}\!\left[Q_p^{-2}\cdot\operatorname{tr}\!\left(\vec{\epsilon}_t\,\vec{\epsilon}_t^{\,T}\right)\right]=\frac{1}{12\cdot Q_p^{2}}\cdot n\tag{17}$$
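The variance in Equation (17) can be checked numerically; a small Monte Carlo sketch assuming, as in the text, uniformly distributed round-off errors:

```python
import numpy as np

rng = np.random.default_rng(0)
Qp, n, trials = 2**8, 4, 200_000

# Round-off errors eps ~ Uniform(-0.5, 0.5], scaled by Qp^{-1} as in Eq. (14).
eps = rng.uniform(-0.5, 0.5, size=(trials, n))
scaled_sq_norm = np.sum((eps / Qp) ** 2, axis=1)

print(scaled_sq_norm.mean())   # empirical E[ Qp^{-2} * ||eps||^2 ]
print(n / (12 * Qp**2))        # Eq. (17): n / (12 * Qp^2)
```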







When the variance of the quantization errors at an arbitrary time ($t>0$) is as shown in Equation (17), if $\epsilon_t\,Q_p^{-1}\,ds=q\cdot dB_t$ is given for a standard one-dimensional Wiener process $dB_t\in\mathbb{R}$, Equation (18) may be derived.











$$\mathbb{E}\!\left[\epsilon_t^{2}\,Q_p^{-2}\right]ds=\mathbb{E}\!\left[q^{2}\,dB_t^{2}\right]=q^{2}\,ds\;\Rightarrow\;\frac{1}{12}\,Q_p^{-2}=q^{2}\;\Rightarrow\;q=\sqrt{\frac{1}{12}}\cdot Q_p^{-1}\tag{18}$$







In the same manner, when $d\vec{B}_t=\vec{\epsilon}\,ds\in\mathbb{R}^{n}$ is given as a vector-form Wiener process and $\vec{\epsilon}_t\,Q_p^{-1}\,ds=q\cdot d\vec{B}_t$ is assumed, $q=\sqrt{n/12}\cdot Q_p^{-1}$ is acquired.


Here, if the variance of the quantization errors in Equation (18) is a function of time, because only the quantization coefficient Qp is a parameter varying over time, Qp is taken as a function of time, and Equation (19) is defined.











$$\sigma(t)=\frac{\gamma}{24}\cdot Q_p^{-2}(t),\qquad \gamma\in\mathbb{R}\tag{19}$$







Therefore, when the learning equation is given as shown in Equation (11), if the quantized weight vector $w_t^{Q}\in\mathbb{R}^{n}$ is regarded as a stochastic process $\{W_t\}_{t\ge 0}$, Equation (15), which is the learning equation, may be written in the form of the probability differential equation shown in Equation (20) below:














$$\begin{aligned}
dW_{s}&=-\lambda_t\,\nabla f(W_s)\,ds+\vec{\epsilon}_{s}\,Q_p^{-1}(s)\,ds\\
&=-\lambda_t\,\nabla f(W_s)\,ds+\sqrt{\frac{n}{12}}\,Q_p^{-1}(s)\,d\vec{B}_s
\end{aligned}\tag{20}$$





When γ=n in Equation (20), a simplified equation may be derived, as shown in Equation (21) below:






$$dW_s=-\lambda_t\,\nabla f(W_s)\,ds+\sqrt{2\sigma(s)}\cdot d\vec{B}_s\tag{21}$$
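A discrete-time (Euler-Maruyama) sketch of Equation (21); the step size, schedule value, and names are assumptions for illustration:

```python
import numpy as np

def sde_step(W, grad_f, lam, sigma_s, ds, rng):
    """One Euler-Maruyama step of Eq. (21):
    dW = -lam * grad f(W) ds + sqrt(2 * sigma(s)) dB_s."""
    dB = rng.normal(0.0, np.sqrt(ds), size=W.shape)   # Wiener increment
    return W - lam * grad_f(W) * ds + np.sqrt(2.0 * sigma_s) * dB
```

As $\sigma(s)$ decreases toward zero, the diffusion term vanishes and the process settles into minima of $f$, which is the annealing behavior discussed below.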


With regard to Equation (21), the transition probability of the weight vector is known to converge weakly to the Gibbs probability shown in Equation (22) under appropriate conditions.














$$\pi_{\sigma(t)}(W_t)=\frac{1}{Z_{\sigma(t)}}\exp\!\left(-\frac{f(W_t)}{\sigma(t)}\right),\quad\text{where } Z_{\sigma(t)}=\int_{\mathbb{R}^{n}}\exp\!\left(-\frac{f(W_s)}{\sigma(s)}\right)ds\tag{22}$$







Here, it is known that, when σ(t)→0, the transition probability of the weight vector converges to the global minima of ƒ(Wt).


This means that the limit of Equation (19) is as shown in Equation (23) below:












$$\lim_{t\to\infty}\sigma(t)=\frac{\gamma}{24}\cdot\lim_{t\to\infty}Q_p^{-2}(t)=0\tag{23}$$







That is, whenever $t$ monotonically increases, the magnitude of the quantization coefficient monotonically increases in response (i.e., $Q_p(t)\uparrow\infty$), which means that the quantization resolution increases over time. Thus, according to the present invention, the quantization resolution is set to be low at the outset (that is, the $Q_p$ value is small), the quantization coefficient $Q_p$ is then increased according to a suitable time schedule, and when the quantization resolution becomes high, the global minima may be found.


Here, a quantization coefficient determination method through which the global minima can be found will be additionally described below.


When Equation (21) and Equation (23) are satisfied, if σ(t) satisfying the condition of Equation (24) is given, global minima may be found by simulated annealing.












$$\inf_{t}\sigma(t)=\frac{C}{\log(t+2)},\qquad C\in\mathbb{R},\ C\gg 0\tag{24}$$







However, because $\sigma(t)$ is determined by the integer-valued quantization coefficient $Q_p(t)$, it is difficult to directly substitute a continuous function such as that in Equation (24).


Other conditions are $T(t)\ge C/\log(2+t)$, $T(t)\downarrow 0$, and that $T(t)$ is continuously differentiable while satisfying Equation (25).














$$\frac{d}{dt}\,e^{-\frac{2\Delta}{T(t)}}=\frac{dT(t)}{dt}\cdot\frac{1}{T^{2}(t)}\,e^{-\frac{2\Delta}{T(t)}}\ge 0,\qquad \Delta=\sup_{x,y\in\mathbb{R}^{n}}\left(f(x)-f(y)\right)\tag{25}$$







Accordingly, when $T(t)$ is set as the upper limit of $\sigma(t)$ and $C/\log(t+2)$ is set as the lower limit of $\sigma(t)$, $\sigma(t)$ may be selected such that the characteristics of the upper-limit schedule $T(t)$ are satisfied.



FIG. 1 and FIG. 2 illustrate the graphs of T(t) and σ(t) as a function of time t.


Referring to FIG. 1, T(t) and σ(t) may be defined by the relationship shown in Equation (26) below:










$$\frac{C}{\log(t+2)}\le\sigma(t)\le T(t)\tag{26}$$







In Equation (26), when a positive number $a\in\mathbb{R}$ satisfying $a<1$ is present, if $T(t)$ is defined as $T(t)=C_1/\log(a\cdot t+2)$ for $C_1>C$, then $T(t)\ge C/\log(t+2)$ is always satisfied. Accordingly, when $\sigma(t)$ is set to satisfy Equations (9) and (19), which are the conditions for quantization, while satisfying Equation (26), $\sigma(t)$ satisfies Equation (25) although it is not continuously differentiable, whereby the global minima can be found.
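The schedule relationship just described can be sanity-checked numerically; in this sketch the constants C, C1, and a are illustrative values chosen only to satisfy $C_1>C$ and $a<1$:

```python
import numpy as np

C, C1, a = 1.0, 1.5, 0.5           # C1 > C, 0 < a < 1 (illustrative)

def lower(t):                      # lower limit C / log(t + 2), Eq. (24)/(26)
    return C / np.log(t + 2)

def T(t):                          # upper-limit schedule T(t) = C1 / log(a*t + 2)
    return C1 / np.log(a * t + 2)

# T(t) >= C / log(t + 2) holds for all t >= 0 when C1 > C and a < 1,
# because log(a*t + 2) <= log(t + 2).
t = np.arange(0, 1000)
assert np.all(T(t) >= lower(t))
```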


The quantization coefficient Qp(t) may be defined as shown in Equation (27) below using h(t)∈Z+, which is a monotonically increasing function of time.






$$Q_p(t)=\eta\cdot b^{\,h(t)},\quad\text{such that } h(t)\uparrow\infty\ \text{as}\ t\to\infty\tag{27}$$


A machine-learning method based on monotonically increasing quantization resolution through which global minima can be found based on Equation (19), Equation (26), and Equation (27) will be described below.



FIG. 3 is a flowchart for explaining a machine-learning method based on monotonically increasing quantization resolution according to an embodiment.


Here, it is assumed that a quantization coefficient is given as shown in Equation (27) and that σ(t) satisfies Equation (19).


First, a monotonically increasing function of time is initially set at step S110. That is, as shown in FIG. 1, when $t=0$, $\bar{h}(0)$ satisfying the following is set.












$$\frac{C}{\ln 2}\le\sigma(t)\Big|_{t=0}=\frac{\gamma}{24}\cdot\left(\eta\cdot b^{\bar{h}(0)}\right)^{-1}\le\frac{C_1}{\ln 2}=T(t)\;\Rightarrow\;\log_b\!\left(\frac{\gamma\ln 2}{24\,\eta}\,C_1^{-1}\right)\le\bar{h}(0)\le\log_b\!\left(\frac{\gamma\ln 2}{24\,\eta}\,C^{-1}\right)\tag{28}$$







If the number of bits suitable for an initial value is not found using Equation (28), a suitable $\bar{h}(0)$ is set, as shown in FIG. 2.
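Step S110 may be sketched as follows: Equation (28) brackets the admissible initial exponent $\bar{h}(0)$, and when no integer falls inside the bracket, $\bar{h}(0)$ is chosen by hand, as the text describes (all names and constants are illustrative assumptions):

```python
import math

def initial_h(gamma, eta, b, C, C1):
    """Eq. (28): find an integer h(0) with
    log_b(gamma*ln2/(24*eta) / C1) <= h(0) <= log_b(gamma*ln2/(24*eta) / C)."""
    base = gamma * math.log(2) / (24 * eta)
    lo, hi = math.log(base / C1, b), math.log(base / C, b)
    h0 = math.ceil(lo)
    return h0 if h0 <= hi else None   # None: pick h(0) manually (cf. FIG. 2)

print(initial_h(gamma=1024, eta=1, b=2, C=0.05, C1=0.5))   # e.g. 6
```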


Then, machine learning is performed at step S120 based on a quantized learning equation using the quantization coefficient defined by the monotonically increasing function of time t.


Then, time is increased from t to t+1 at step S130, and whether the quantization coefficient satisfies a predetermined condition σ(t)≥T(t) is determined at step S140.


When it is determined at step S140 that the quantization coefficient does not satisfy the predetermined condition $\sigma(t)\ge T(t)$, that is, when $\sigma(t)<T(t)$ is satisfied under the condition of $t>0$, the quantization coefficient is not updated, and $\sigma(t)$ is set to

$$\sigma(t)=\frac{\gamma}{24}\left(\eta\cdot b^{\bar{h}(0)}\right)^{-1}.$$






Then, machine learning is performed at step S120 based on the quantized learning equation using the quantization coefficient defined by the monotonically increasing function of time t.


Conversely, when it is determined at step S140 that the quantization coefficient satisfies the predetermined condition σ(t)≥T(t), the monotonically increasing function of time is newly set at step S150.


That is, if the first $t$ satisfying $\sigma(t)\ge T(t)$ is $t_1$, then $\bar{h}(t_1)\in\mathbb{Z}^{+}$ satisfying

$$\sigma(t)\ge\frac{C}{\log(t+2)}$$

may be defined as shown in Equation (29) below:











$$\bar{h}(t+1)=\left\lfloor\log_b\!\left(\frac{\gamma\ln 2}{24\,\eta}\,C^{-1}\right)+0.5\right\rfloor\tag{29}$$







Then, the quantization coefficient is updated by the newly set monotonically increasing function of time at step S160.


Then, machine learning is performed at step S120 based on the quantized learning equation using the quantization coefficient defined by the monotonically increasing function of the time t.


Steps S120 to S160 may be repeated until a learning stop condition is satisfied at step S170.
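Putting steps S110 to S170 together, a high-level sketch of the flowchart of FIG. 3 follows. Everything here is illustrative: `grad_f` is any gradient oracle, the constants are placeholders, the stop condition is a fixed step budget, and the final `max` guard (which keeps $\bar{h}$ strictly increasing when Equation (29) returns a value that is not larger) is an assumption, not part of the specification.

```python
import math
import numpy as np

def sigma_of(h, gamma, eta, b):
    # Variance schedule used at step S140: sigma = (gamma / 24) * (eta * b**h)**-1
    return gamma / 24.0 / (eta * b ** h)

def train(w, grad_f, steps, gamma, eta, b, C, C1, a, h0, alpha=1.0):
    h = h0                                            # S110: initial h(0), cf. Eq. (28)
    for t in range(steps):
        Qp = eta * b ** h                             # Eq. (27)
        gradQ = np.floor(Qp * grad_f(w) + 0.5) / Qp   # Eq. (7) applied to the gradient
        w = w - (alpha / Qp) * gradQ                  # S120: quantized update, Eq. (15)
        T = C1 / math.log(a * (t + 1) + 2)            # upper-limit schedule T(t)
        if sigma_of(h, gamma, eta, b) >= T:           # S130/S140: test sigma(t) >= T(t)
            # S150/S160: new h via Eq. (29); Qp is updated on the next pass.
            h_new = math.floor(math.log(gamma * math.log(2) / (24 * eta) / C, b) + 0.5)
            h = max(h_new, h + 1)                     # assumption: keep h monotonically increasing
    return w                                          # S170: stop after a fixed budget
```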


Referring to FIG. 3, the time coefficient t may actually correspond to a single piece of data. However, when there is a large amount of data, scheduling may be performed by adjusting the time coefficient depending on the number of pieces of data.


For example, assuming that the number of all pieces of data is N, that there are L mini-batches, and that the respective mini-batches are assigned the same number of pieces of data, the time coefficient is updated by 1 each time N/L pieces of data are processed.


Here, when the time coefficient updated for each mini-batch is t′, the time coefficient may be defined as shown in Equation (30) below:










$$t'=\frac{N}{L}\cdot t\tag{30}$$
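A small sketch of the scheduling described around Equation (30): with N pieces of data split into L equally sized mini-batches, the time coefficient is advanced by 1 each time N/L pieces of data are processed (all values illustrative):

```python
N, L = 60_000, 100      # dataset size and number of mini-batches (illustrative)
per_batch = N // L      # N / L pieces of data per mini-batch

t_prime = 0             # per-mini-batch time coefficient t'
for sample_count in range(1, 3 * N + 1):   # stream over three passes of the data
    if sample_count % per_batch == 0:      # every N/L processed samples ...
        t_prime += 1                       # ... advance the time coefficient by 1
```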







Meanwhile, when this is actually implemented in hardware, η=1, b=2 are satisfied in Equation (9) due to the characteristics of binary systems. Accordingly, Equation (29) for calculating variation in the quantization coefficient value over time may be simplified as shown in Equation (31) below:











$$\bar{h}(t)=\left\lfloor\log_2\!\left(\frac{n\ln 2}{24}\,C^{-1}\right)+0.5\right\rfloor\tag{31}$$
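With $\eta=1$ and $b=2$, Equation (31) reduces to a single base-2 logarithm; a one-function sketch (constants illustrative):

```python
import math

def h_bar(n: int, C: float) -> int:
    # Eq. (31): h(t) = floor( log2( n * ln2 / 24 / C ) + 0.5 )
    return math.floor(math.log2(n * math.log(2) / 24.0 / C) + 0.5)

print(h_bar(n=1024, C=0.05))   # e.g. 9 bits for these illustrative constants
```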








FIG. 4 is a hardware concept diagram according to an embodiment.


That is, FIG. 4 illustrates the structure of the data storage device of a computing device for machine learning for supporting varying quantization resolution in order to implement the above-described machine-learning algorithm based on a quantization coefficient varying over time in hardware.



FIG. 5 is a view illustrating a computer system configuration according to an embodiment.


The machine-learning apparatus based on monotonically increasing quantization resolution according to an embodiment may be implemented in a computer system 1000 including a computer-readable recording medium.


The computer system 1000 may include one or more processors 1010, memory 1030, a user-interface input device 1040, a user-interface output device 1050, and storage 1060, which communicate with each other via a bus 1020. Also, the computer system 1000 may further include a network interface 1070 connected with a network 1080. The processor 1010 may be a central processing unit or a semiconductor device for executing a program or processing instructions stored in the memory 1030 or the storage 1060. The memory 1030 and the storage 1060 may be storage media including at least one of a volatile medium, a nonvolatile medium, a detachable medium, a non-detachable medium, a communication medium, and an information delivery medium. For example, the memory 1030 may include ROM 1031 or RAM 1032.


According to an embodiment, quantization is performed while quantization resolution is varied over time, unlike in existing machine-learning algorithms based on quantization, whereby better machine-learning and nonlinear optimization performance may be achieved.


According to an embodiment, because a methodology or a hardware design methodology based on which global optimization can be performed using integer or fixed-point operations is applied to machine learning and nonlinear optimization, optimization performance better than that of existing algorithms may be achieved, and excellent learning and optimization performance may be achieved in existing large-scale machine-learning frameworks, fields in which low power consumption is required, or embedded hardware configured with multiple large-scale RISC modules.


According to an embodiment, because there is no need for a floating-point operation module, which requires a relatively long computation time, the present invention may be easily applied in the fields in which real-time processing is required for machine learning, nonlinear optimization, and the like.

Claims
  • 1. A machine-learning method based on monotonically increasing quantization resolution, in which a quantization coefficient is defined as a monotonically increasing function of time, comprising: initially setting the monotonically increasing function of time; performing machine learning based on a quantized learning equation using the quantization coefficient defined by the monotonically increasing function of time; determining whether the quantization coefficient satisfies a predetermined condition after increasing the time; newly setting the monotonically increasing function of time when the quantization coefficient satisfies the predetermined condition; and updating the quantization coefficient based on the newly set monotonically increasing function of time, wherein performing the machine learning, determining whether the quantization coefficient satisfies the predetermined condition, newly setting the monotonically increasing function of time, and updating the quantization coefficient are repeatedly performed.
  • 2. The machine-learning method of claim 1, wherein the quantization coefficient is defined as a function varying over time as shown in Equation (32) below:
  • 3. The machine-learning method of claim 2, wherein $Q_p$ is defined as shown in Equation (33) below: $Q_p=\eta\cdot b^{\,n},\ \eta\in\mathbb{Z}^{+},\ \eta<b$  (33), where a base $b$ is $b\in\mathbb{Z}^{+},\ b\ge 2$.
  • 4. The machine-learning method of claim 2, wherein the quantized learning equation is a learning equation for acquiring quantized weight vectors for all times, as defined in Equation (34) below:
  • 5. The machine-learning method of claim 2, wherein the quantized learning equation is a learning equation based on a binary number system, as defined in Equation (35) below: $w_{t+1}^{Q}=w_{t}^{Q}-2^{-(n-k)}\,\nabla^{Q}f(w_t),\ n,k\in\mathbb{Z}^{+},\ n>k$  (35)
  • 6. The machine-learning method of claim 2, wherein the quantized learning equation is a probability differential learning equation defined in Equation (36) below: $dW_s=-\lambda_t\,\nabla f(W_s)\,ds+\sqrt{2\sigma(s)}\cdot d\vec{B}_s$  (36)
  • 7. The machine-learning method of claim 2, wherein the quantization coefficient is defined using $h(t)$, which is a monotonically increasing function of time, as shown in Equation (37) below: $Q_p=\eta\cdot b^{\,h(t)}$, such that $h(t)\uparrow\infty$ as $t\to\infty$  (37)
  • 8. The machine-learning method of claim 7, wherein initially setting the monotonically increasing function of time is configured to set the monotonically increasing function so as to satisfy Equation (38) below:
  • 9. The machine-learning method of claim 8, wherein, when determining whether the quantization coefficient satisfies the predetermined condition is performed, the predetermined condition is Equation (39) below:
  • 10. The machine-learning method of claim 9, wherein, when newly setting the monotonically increasing function of time is performed, the monotonically increasing function of time is defined as Equation (40) below:
  • 11. A machine-learning apparatus based on monotonically increasing quantization resolution, comprising: memory in which at least one program is recorded; and a processor for executing the program, wherein: a quantization coefficient is defined as a monotonically increasing function of time, and the program performs initially setting the monotonically increasing function of time; performing machine learning based on a quantized learning equation using the quantization coefficient defined by the monotonically increasing function of time; determining whether the quantization coefficient satisfies a predetermined condition after increasing the time; newly setting the monotonically increasing function of time when the quantization coefficient satisfies the predetermined condition; and updating the quantization coefficient based on the newly set monotonically increasing function of time, and performing the machine learning, determining whether the quantization coefficient satisfies the predetermined condition, newly setting the monotonically increasing function of time, and updating the quantization coefficient are repeatedly performed.
  • 12. The machine-learning apparatus of claim 11, wherein the quantization coefficient is defined as a function varying over time as shown in Equation (41) below:
  • 13. The machine-learning apparatus of claim 12, wherein $Q_p$ is defined as shown in Equation (42) below: $Q_p=\eta\cdot b^{\,n},\ \eta\in\mathbb{Z}^{+},\ \eta<b$  (42), where a base $b$ is $b\in\mathbb{Z}^{+},\ b\ge 2$.
  • 14. The machine-learning apparatus of claim 12, wherein the quantized learning equation is a learning equation for acquiring quantized weight vectors for all times, as defined in Equation (43) below:
  • 15. The machine-learning apparatus of claim 12, wherein the quantized learning equation is a learning equation based on a binary number system, as defined in Equation (44) below: $w_{t+1}^{Q}=w_{t}^{Q}-2^{-(n-k)}\,\nabla^{Q}f(w_t),\ n,k\in\mathbb{Z}^{+},\ n>k$  (44)
  • 16. The machine-learning apparatus of claim 12, wherein the quantized learning equation is a probability differential learning equation defined in Equation (45) below: $dW_s=-\lambda_t\,\nabla f(W_s)\,ds+\sqrt{2\sigma(s)}\cdot d\vec{B}_s$  (45)
  • 17. The machine-learning apparatus of claim 12, wherein the quantization coefficient is defined using $h(t)$, which is a monotonically increasing function of time, as shown in Equation (46) below: $Q_p(t)=\eta\cdot b^{\,h(t)}$, such that $h(t)\uparrow\infty$ as $t\to\infty$  (46)
  • 18. The machine-learning apparatus of claim 17, wherein initially setting the monotonically increasing function of time is configured to set the monotonically increasing function so as to satisfy Equation (47) below:
  • 19. The machine-learning apparatus of claim 18, wherein, when determining whether the quantization coefficient satisfies the predetermined condition is performed, the predetermined condition is Equation (48) below:
  • 20. The machine-learning apparatus of claim 19, wherein, when newly setting the monotonically increasing function of time is performed, the monotonically increasing function of time is defined as Equation (49) below:
Priority Claims (2)
Number Date Country Kind
10-2020-0061677 May 2020 KR national
10-2021-0057783 May 2021 KR national