In application of machine learning, high dimensional data such as image data and voice data has a problem in which a result of the learning is affected by unnecessary information included in the data, and thus overfitting is likely to occur.
An effective method for avoiding this problem is “sparse learning”. Sparse learning is a method of learning while discarding information irrelevant to the purpose, by exploiting the low dimensionality (sparseness) of the intrinsically significant information included in the data.
In sparse learning, when a desired high-dimensional observation data group (y) is represented by a linear combination of the columns of a dictionary matrix (A), those column vectors are often linearly dependent, and the dictionary matrix (A) often contains unnecessary information. Thus, an explanatory variable group indicating the intrinsically significant column components of the dictionary matrix (A) is used as an undetermined variable group (x), which is the main factor for obtaining the observation data group (y).
In order to calculate the undetermined variable group (x) in sparse learning, it is necessary to solve a minimization problem: find the undetermined variable group (x) that minimizes the difference value between an added composite value (Ax), which is obtained by adding and combining the undetermined variable group (x) and the dictionary matrix (A), and the observation data group (y). However, the calculation accuracy of this minimization problem needs further improvement. A representative example of the minimization problem is the least squares method, which calculates the undetermined variable group (x) that minimizes ∥y−Ax∥².
It is known that, in sparse learning, the undetermined variable group (x) can be accurately calculated by adding to the above minimization problem a constraint (a regularization term |x|p, where p is a constant) that keeps the undetermined variable group (x) from growing, and calculating the x that simultaneously decreases (minimizes) the above-mentioned difference value and a data value consisting of the difference value and the regularization term.
In the sparse learning, for example, the following technique is known as a technique for adding the regularization term and calculating the undetermined variable group (x).
Japanese Patent No. 6080783 discloses a calculation apparatus that performs the above-described calculation with the regularization term of L0 norm where p=0.
Japanese Unexamined Patent Application Publication No. 2017-033172 discloses a calculation apparatus that performs the above-described calculation with the regularization term of an L1 norm (|x|) where p=1 or an L2 norm (|x|²) where p=2.
Ryota Tomioka discloses, in “Machine Learning with Sparsity-Inducing Regularizations” (Machine Learning Professional Series), Kodansha Ltd., December 2015, a calculation apparatus that performs the above-described calculation with the regularization term of an L2 norm (|x|²) where p=2.
However, when the regularization term has p=0, the minimization problem is in practice a combinatorial problem over the undetermined variables. Thus, when the observation data group contains a large number of data pieces, there is a problem of insufficient calculation speed.
When the regularization term has p=1, it contains many non-smooth points at which it cannot be differentiated. For this reason, complicated branching conditions need to be included in the calculation process of the minimization problem, so that the calculation speed of the minimization problem needs further improvement.
When the regularization term has p=2 or greater, it is a continuous function that is easy to differentiate, so the calculation speed can be expected to be faster than when p=0 or p=1. However, this case involves a trade-off: the number of undetermined variables remaining in the calculation result tends to be large, which makes it difficult to identify the undetermined variable that is the main factor.
In light of this problem, a variable group calculation apparatus that, in sparse learning, adds the regularization term and calculates the undetermined variable group with a high calculation speed, while keeping the undetermined variable group easy to identify, is desired.
The present disclosure has been made in view of the above problem. The present disclosure provides a variable group calculation apparatus, a variable group calculation method, a variable group calculation program, and a data structure that achieve a high calculation speed and can easily identify the undetermined variable group as the main factor.
An example aspect of the present disclosure is a variable group calculation apparatus for calculating an undetermined variable group that simultaneously minimizes a difference value and a data value. The difference value is a difference between an added composite value, which is obtained by adding and combining the undetermined variable group and a dictionary data group, and an observation data group. The data value includes the difference value and a regularization term of the undetermined variable group. The variable group calculation apparatus includes:
a convolution unit configured to convert the regularization term to a convolution value obtained by convolving an L1 norm of the undetermined variable group with a mollifier function; and
a calculation unit configured to perform the calculation using the regularization term, which is converted to the convolution value by the convolution unit.
Another example aspect of the present disclosure is a variable group calculation method by a variable group calculation apparatus for calculating an undetermined variable group that simultaneously minimizes a difference value and a data value. The difference value is a difference between an added composite value, which is obtained by adding and combining the undetermined variable group and a dictionary data group, and an observation data group. The data value includes the difference value and a regularization term of the undetermined variable group. The variable group calculation method includes:
converting the regularization term to a convolution value obtained by convolving an L1 norm of the undetermined variable group with a mollifier function; and
performing the calculation using the regularization term, which is converted to the convolution value.
Another example aspect of the present disclosure is a variable group calculation program for causing a computer, which calculates an undetermined variable group that simultaneously minimizes a difference value and a data value, the difference value being a difference between an added composite value, which is obtained by adding and combining the undetermined variable group and a dictionary data group, and an observation data group, and the data value including the difference value and a regularization term of the undetermined variable group, to execute:
converting the regularization term to a convolution value obtained by convolving an L1 norm of the undetermined variable group with a mollifier function; and
performing the calculation using the regularization term, which is converted to the convolution value.
Another example aspect of the present disclosure is a data structure used by a variable group calculation apparatus. The data structure includes:
a difference value between an added composite value, which is obtained by adding and combining an undetermined variable group and a dictionary data group, and an observation data group; and
a regularization term of the undetermined variable group, the regularization term being a convolution value obtained by convolving an L1 norm of the undetermined variable group with a mollifier function.
The data structure is used by the variable group calculation apparatus in order to calculate the undetermined variable group that simultaneously minimizes the difference value and a data value including the difference value and the regularization term.
According to the respective example aspects of the present disclosure, it is possible to provide a variable group calculation apparatus, a variable group calculation method, a variable group calculation program, and a data structure that achieve a high calculation speed and can easily identify an undetermined variable group as a main factor.
The above and other objects, features and advantages of the present disclosure will become more fully understood from the detailed description given hereinbelow and the accompanying drawings which are given by way of illustration only, and thus are not to be considered as limiting the present disclosure.
Hereinafter, a specific embodiment will be described in detail with reference to the drawings. The same or corresponding elements are denoted by the same signs throughout the drawings, and repeated descriptions will be omitted as necessary for the sake of clarity.
First, an outline of this embodiment will be described.
The observation data group (y) is expressed by the following equation (1), where β represents an explanatory variable group, which is an example of the undetermined variable group to be a main factor for obtaining the observation data group (y), and A represents a dictionary matrix, which is an example of a dictionary data group.
y=A·β (1)
Here, A=(A1, …, Am) and β=(β1, …, βq), where m and q are constants.
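As a concrete illustration of equation (1), the following Python sketch builds a small synthetic instance; the sizes, the random dictionary matrix, and the choice of nonzero entries are assumptions made for demonstration only, with A taken as an m×q matrix so that the product A·β is well defined.

```python
import numpy as np

rng = np.random.default_rng(0)
m, q = 100, 50                         # assumed sizes (data pieces, explanatory variables)
A = rng.standard_normal((m, q))        # dictionary matrix; columns play the role of A1, A2, ...

beta_true = np.zeros(q)                # explanatory variable group with a sparse structure
beta_true[[3, 17, 41]] = [1.5, -2.0, 0.7]  # only a few variables actually affect y

y = A @ beta_true                      # observation data group per equation (1): y = A.beta
```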
Incidentally, the actual observation data group (y) relatively often has a structure (a sparse structure) in which the explanatory variables (β) that actually affect the observation data group (y) are sparsely present. For example, in image data, adjacent pixels mostly have similar colors, and when pixels with similar colors are grouped appropriately, the information in the image data can be greatly compressed.
Sparse learning is used when, although the observation data group (y) itself is large, only a few explanatory variables (βj) (j is a positive integer with 1≤j≤q) actually affect the observation data group (y). In sparse learning, a huge number of explanatory variables (β) are prepared, and each explanatory variable (βj) within the explanatory variable group (β) that does not affect the observation data group (y) is estimated to be “0”.
In sparse learning, the regularization term of the explanatory variable group (β) is added to the minimization problem of minimizing the difference value between the added composite value (A·β), which is obtained by adding and combining the explanatory variable group (β) and the dictionary matrix (A), and the observation data group (y). Then, the explanatory variable group (β) that simultaneously minimizes the above difference value and the data value consisting of the difference value and the regularization term is calculated. It is known that the explanatory variable group can thereby be accurately calculated.
In the sparse learning, a cost function R(β) when the regularization term is added and the explanatory variable group (β) is calculated is expressed by, for example, the following equation (2).
R(β) = f(β) + Ψ(β) (2)
In this equation, f(β) represents a loss function, and Ψ(β) represents a regularization term. In this embodiment, the loss function f(β) is expressed by, for example, the following equation (3).
f(β) = ∥y−A·β∥² (3)
In addition, the regularization term Ψ(β) is typically expressed as an L1 norm, for example, by the following equation (4).
Ψ(β) = λ Σ_{j=1}^{q} |βj| (4)
In this equation, λ represents a regularization variable. The regularization term Ψ(β) expressed by the equation (4) is a product of the sum of the absolute values of the explanatory variables (βj) and the regularization variable λ. The sparse learning of the type in which the regularization term Ψ(β) is expressed by the equation (4) is referred to as LASSO (Least Absolute Shrinkage and Selection Operator).
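Equations (2) to (4) can be transcribed almost directly; in this sketch, `lam` stands for the regularization variable λ, and the function names are illustrative rather than taken from the disclosure.

```python
import numpy as np

def loss(beta, y, A):
    """Loss function f(beta) = ||y - A.beta||^2 of equation (3)."""
    r = y - A @ beta
    return r @ r

def l1_regularizer(beta, lam):
    """L1 regularization term Psi(beta) = lam * sum_j |beta_j| of equation (4)."""
    return lam * np.sum(np.abs(beta))

def cost(beta, y, A, lam):
    """Cost function R(beta) = f(beta) + Psi(beta) of equation (2), i.e., LASSO."""
    return loss(beta, y, A) + l1_regularizer(beta, lam)
```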
In the minimization problem expressed as above, the explanatory variable group (β) that simultaneously minimizes the loss function f(β) and the cost function R(β) is calculated. The loss function f(β) corresponds to the difference value between the added composite value (Ax), which is obtained by adding and combining the explanatory variable group (β) and the dictionary matrix (A), and the observation data group (y). The cost function R(β) corresponds to the data value including the loss function f(β) and the regularization term Ψ(β).
Here, when the regularization term Ψ(β) is a regularization term of the L1 norm as in equation (4) (hereinafter appropriately referred to as an L1 regularization term), there is an advantage that the explanatory variable group (β) can easily be narrowed down.
Furthermore, if Newton's method or one of its variations can be applied to the calculation of the explanatory variable group (β), the calculation speed can be expected to improve, because Newton's method has quadratic convergence and thus converges to a solution faster than methods with linear or superlinear convergence.
However, the waveform of the L1 regularization term Ψ(β) has a sharp point at which Ψ(β) cannot be differentiated, so Newton's method or its variations cannot be applied to the calculation of the explanatory variable group (β), and the calculation speed cannot be improved.
On the other hand, the waveform of the loss function f(β) is often smooth.
Thus, in this embodiment, the L1 regularization term Ψ(β) is smoothed into a convex function, so that the entire cost function R(β) becomes a smooth convex function. In this way, Newton's method or its variations can be applied while making full use of the characteristic of the L1 regularization term that the explanatory variable group (β) can easily be narrowed down. This improves the calculation speed of the explanatory variable group (β).
Next, a configuration of this embodiment will be described.
The memory 20 stores a program (a calculation program) including instructions to be executed by the processor 10. The memory 20 is, for example, a volatile memory, a non-volatile memory, or a combination thereof.
The interface (I/F) unit 30 inputs and outputs various information items from and to the outside.
The processor 10 reads the program including the instructions from the memory 20 and executes it to thereby achieve the functions of the smoothing unit setting unit 11, the convolution unit 12, and the calculation unit 13. The smoothing unit setting unit 11, the convolution unit 12, and the calculation unit 13 will be described later in detail. The processor 10 is a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a microprocessor, a combination thereof, or the like.
The above-mentioned program can be stored and provided to a computer using any type of non-transitory computer readable media. Non-transitory computer readable media include any type of tangible storage media. Examples of non-transitory computer readable media include magnetic storage media (such as floppy disks, magnetic tapes, hard disk drives, etc.), optical magnetic storage media (e.g., magneto-optical disks), CD-ROM (Compact Disk-Read Only Memory), CD-R (CD-recordable), CD-R/W (CD-rewritable), and semiconductor memories (such as mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM (Random Access Memory), etc.).
The program may be provided to a computer using any type of transitory computer readable media. Examples of transitory computer readable media include electric signals, optical signals, and electromagnetic waves. Transitory computer readable media can provide the program to a computer via a wired communication line (e.g. electric wires, and optical fibers) or a wireless communication line.
Next, an operation of this embodiment will be described.
The operation proceeds as shown in the flowchart in the drawings.
Next, the smoothing unit setting unit 11 normalizes each of the explanatory variables (βj) so that the variance (σj²) becomes 1 and the mean becomes 0, and sets smoothing units. A smoothing unit indicates the range over which the function value of a normalized explanatory variable (βj) is smoothed. The smoothing unit setting unit 11 sets a plurality of smoothing units for each explanatory variable (βj) (Step S2).
Next, the convolution unit 12 determines a mollifier function (a polynomial) according to the necessary number of differentiations of the L1 regularization term (Step S3). As described later, the mollifier function of an order n is a function that can be differentiated n−1 times. Thus, when the necessary number of differentiations is nd (nd is an integer of two or greater), a mollifier function having an order not less than (nd+1) is determined.
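As a minimal sketch of this order selection, Step S3 reduces to the arithmetic just described; the function name is hypothetical.

```python
def mollifier_order(nd):
    """Step S3: since T_n can be differentiated n-1 times, differentiating
    nd times requires an order of at least nd + 1."""
    return nd + 1
```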
Next, the convolution unit 12 selects one of the plurality of smoothing units set in Step S2 (Step S4). After that, the minimization problem is solved for the smoothing unit selected in Step S4.
Next, the convolution unit 12 converts, for each explanatory variable (βj), the function value within the range indicated by the smoothing unit selected in Step S4 into a convolution value with the mollifier function determined in Step S3, thereby smoothing the function value (Step S5).
As expressed by equation (4), the L1 regularization term Ψ(β) is the product of the sum of the absolute values of the explanatory variables (βj) and the regularization variable λ. Smoothing the function value of each explanatory variable (βj) for the smoothing unit selected in Step S4 thus smooths the L1 regularization term Ψ(β), which then becomes a convex function that can be differentiated the necessary number of times. As a result, the entire cost function R(β) becomes a smooth convex function, and Newton's method or its variations can be applied to the calculation of the explanatory variable group (β).
Thus, the calculation unit 13 applies Newton's method or its variations to calculate the explanatory variable group (β) that simultaneously minimizes the loss function f(β) and the cost function R(β) (Step S6). This consequently improves the calculation speed of the explanatory variable group (β).
Next, the calculation unit 13 estimates the explanatory variable (βj) that does not affect the observation data group (y) within the explanatory variable group (β) calculated in Step S6 to be “0” (Step S7).
The calculation of the minimization problem is thus complete for the smoothing unit selected in Step S4.
Next, the calculation unit 13 decides whether any of the plurality of smoothing units set in Step S2 remains unselected (Step S8). If there is an unselected smoothing unit (YES in Step S8), the process returns to Step S4, where the convolution unit 12 selects one of the unselected smoothing units, and the minimization problem is solved for the selected smoothing unit likewise.
On the other hand, if there is no unselected smoothing unit (NO in Step S8), the calculation unit 13 outputs, for each of the plurality of smoothing units set in Step S2, the explanatory variable group (β) estimated in Step S7 to the outside via the interface (I/F) unit 30 (Step S9).
Accordingly, an external apparatus, which has received the explanatory variable group (β) for each of the plurality of smoothing units, can obtain, for each of the plurality of smoothing units, the main explanatory variables (βj) that affect the observation data group (y) and the number of the main explanatory variables (βj).
Hereinafter, the processing of the smoothing unit setting unit 11, the convolution unit 12, and the calculation unit 13 will be described in detail using a specific example.
First, a specific example of the processing of the smoothing unit setting unit 11 will be described.
First, the smoothing unit setting unit 11 statistically performs, for each explanatory variable (βj), the normalization processing of the following equation (5) so that the variance (σj²) becomes 1 and the mean becomes 0.
β̃j = (βj − μj)/σj (5)
In this equation, β̃j represents the normalized explanatory variable (βj), μj represents the mean of βj, and σj represents the standard deviation of βj.
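A sketch of the normalization of equation (5) follows; it assumes that a sample of values is available for each explanatory variable (rows are samples, one column per βj), which is an assumption not spelled out in the text.

```python
import numpy as np

def normalize(samples):
    """Equation (5): scale each explanatory variable to mean 0 and variance 1."""
    mu = samples.mean(axis=0)       # mean mu_j of each column
    sigma = samples.std(axis=0)     # standard deviation sigma_j of each column
    return (samples - mu) / sigma
```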
The smoothing unit setting unit 11 then sets, for each explanatory variable (βj), smoothing units whose width an is increased in stages, for example:
an = 0.1σj, 0.2σj, …, 0.9σj, 1σj, 2σj, 3σj, …
That is, firstly, the smoothing unit setting unit 11 sets an=0.1 σj, and sets, for each explanatory variable (βj), the section of the normalized explanatory variable (βj) of [−0.05 σj, 0.05 σj] as the smoothing unit.
Next, the smoothing unit setting unit 11 sets an=0.2 σj and sets, for each explanatory variable (βj), the section of the normalized explanatory variable (βj) of [−0.1 σj, 0.1 σj] as the smoothing unit.
In this way, the smoothing unit setting unit 11 sets the plurality of smoothing units for each explanatory variable (βj).
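The staged widths an and the corresponding sections [−an/2, +an/2] could be generated as in the sketch below; the particular list of widths mirrors the example sequence above and is otherwise an assumption.

```python
import numpy as np

def smoothing_units(sigma_j):
    """Return the smoothing-unit sections [-a_n/2, +a_n/2] for one variable."""
    widths = np.concatenate([np.linspace(0.1, 0.9, 9), [1.0, 2.0, 3.0]]) * sigma_j
    return [(-a / 2, a / 2) for a in widths]
```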
Hereinafter, a specific example of the processing of the convolution unit 12 will be described.
The convolution unit 12 selects one of the plurality of smoothing units set by the smoothing unit setting unit 11 and converts, for each explanatory variable (βj), the function value within the range indicated by the selected smoothing unit into a convolution value with the mollifier function, thereby smoothing the function value.
For example, when the smoothing unit with an=0.2σj is selected, the convolution unit 12 smooths the function value of each explanatory variable (βj) within the section [−0.1σj, 0.1σj].
The above-mentioned mollifier function will be described in detail.
The mollifier function is, for example, a ternary polynomial function. A ternary polynomial function is a polynomial function having three regions: a region where the function value increases, a region where it is constant, and a region where it decreases. The increasing region and the decreasing region are symmetrical with the constant region interposed between them. Details of the ternary polynomial are disclosed in Japanese Unexamined Patent Application Publication No. 2009-053926, already filed by the applicant of the present disclosure, which is incorporated herein by reference.
The mollifier function Tn(x) (n is 0 or a positive integer) is a function that can be differentiated n−1 times and has constants a0, a1, …, an, and C. Here, d is defined as d=(a0, a1, …, an, C). Further, C is adjusted in advance so that the integrated value of the mollifier function Tn(x) becomes 1.
A mollifier function of a higher order can be generated from one of a lower order, as illustrated in the drawings.
For example, when T2(x) is generated from T0(x), T0(x) may be symmetrically divided (symmetrically distributed) and integrated to generate T1(x), and T1(x) may be symmetrically divided and integrated to generate T2(x). Alternatively, after T0(x) is symmetrically divided, it may be further symmetrically divided to generate T″2(x), and T″2(x) may be integrated twice to generate T2(x).
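The ternary-polynomial construction itself is not reproduced here. As a hedged numerical stand-in, the sketch below relies on the related fact that convolving a unit-area box kernel with itself repeatedly raises the smoothness by one order per convolution (the B-spline construction), so the order-n result is differentiable n−1 times, like Tn(x).

```python
import numpy as np

def box_kernel(width, dt):
    """Unit-area box kernel on [-width/2, width/2], sampled with step dt."""
    n = max(int(round(width / dt)), 1)
    return np.ones(n) / (n * dt)

def mollifier_like(order, support, dt=1e-3):
    """Numerical stand-in for T_order(x): order+1 boxes convolved together.

    Each discrete convolution (scaled by dt) mimics a continuous composite
    product and preserves the unit area; the total support is about `support`.
    """
    k = box_kernel(support / (order + 1), dt)
    out = k
    for _ in range(order):
        out = np.convolve(out, k) * dt
    return out
```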
Next, a method of calculating a composite product of the function value of the explanatory variable (βj) and the mollifier function Tn(x) will be described in detail.
The convolution unit 12 calculates the composite product (Ψj)d(x) by the following equation (6), based on Ψj(βj), which represents the function value of the explanatory variable (βj), and the above-mentioned polynomial mollifier function Tn(x), to obtain the convolution value.
(Ψj)d(x) = ∫_{−∞}^{+∞} Tn(x−βj) Ψj(βj) dβj (6)
In this equation, (Ψj)d(x) is a function that can be differentiated n−1 times and that converges uniformly to Ψj, i.e., (Ψj)d→Ψj. The composite product of an absolute value function and a mollifier function (polynomial) such as this can be calculated by algebraic processing with a small processing load, which achieves high-speed processing.
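For the triangular (order-1) mollifier of width a, the composite product of equation (6) with the absolute value function can indeed be worked out algebraically. The piecewise polynomial below is my own derivation under that assumption, with b = a/2; it is twice continuously differentiable, convex, and equal to |x| outside [−b, b].

```python
import numpy as np

def smoothed_abs(x, b):
    """Composite product |.| * T1 for a triangular mollifier of half-width b."""
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    out = ax.copy()
    inside = ax < b
    xi = ax[inside]
    out[inside] = b / 3 + xi**2 / b - xi**3 / (3 * b * b)
    return out

def smoothed_abs_d1(x, b):
    """First derivative of the smoothed absolute value (continuous)."""
    x = np.asarray(x, dtype=float)
    g = np.sign(x)
    inside = np.abs(x) < b
    xs = x[inside]
    g[inside] = 2 * xs / b - np.sign(xs) * xs**2 / (b * b)
    return g

def smoothed_abs_d2(x, b):
    """Second derivative (continuous; zero outside [-b, b])."""
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    h = np.zeros_like(ax)
    inside = ax < b
    h[inside] = 2 / b - 2 * ax[inside] / (b * b)
    return h
```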
Next, a specific example of the processing of the calculation unit 13 will be described.
As described above, the convolution unit 12 converts the function value of each explanatory variable (βj) into a convolution value with the mollifier function, thereby smoothing it. The L1 regularization term Ψ(β) is thus smoothed and becomes a convex function that can be differentiated the necessary number of times.
That is, before the smoothing of the L1 regularization term Ψ(β), the minimization problem is expressed as a minimization problem of LASSO as in the following equation (7).
R(β) = ∥y−A·β∥² + Ψ(β) (7)
On the other hand, after the L1 regularization term Ψ(β) is smoothed, the minimization problem is expressed as the minimization problem of the composite product LASSO (Convolutional LASSO) as shown in the following equation (8).
R(β) = ∥y−A·β∥² + Ψ̃(β) (8)
In this equation, Ψ̃(β) represents the smoothed L1 regularization term Ψ(β).
The composite product LASSO is a convex function, and thus Newton's method or its variations may be applied to calculate a global solution.
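With a twice-differentiable regularization term, a Newton iteration on equation (8) becomes straightforward. The sketch below reuses the smoothed absolute value and its derivatives from above; the plain undamped step and the assumption that the Hessian is nonsingular (for example, AᵀA positive definite) are simplifications, and a practical implementation would add a line search or damping.

```python
import numpy as np

def newton_convolutional_lasso(y, A, lam, b, n_iter=50, tol=1e-10):
    """Minimize R(beta) = ||y - A.beta||^2 + lam * sum_j phi_b(beta_j)."""
    beta = np.zeros(A.shape[1])
    H_loss = 2 * A.T @ A                       # Hessian of the loss term (constant)
    for _ in range(n_iter):
        grad = -2 * A.T @ (y - A @ beta) + lam * smoothed_abs_d1(beta, b)
        H = H_loss + lam * np.diag(smoothed_abs_d2(beta, b))
        step = np.linalg.solve(H, grad)        # Newton step: quadratic convergence near the solution
        beta = beta - step
        if np.linalg.norm(step) < tol:
            break
    return beta
```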
Accordingly, the calculation unit 13 applies Newton's method or its variations to calculate the explanatory variable group (β) that simultaneously minimizes ∥y−A·β∥² and the cost function R(β).
Next, the calculation unit 13 compares each of the explanatory variables (βj) constituting the calculated explanatory variable group (β) with a corresponding threshold εj. For example, when β=(β1, β2), the calculation unit 13 compares β1 with ε1 and β2 with ε2. Here, εj is, for example, 0.1σj, although this value is merely an example and is not limited to this.
Then, if βj<εj, the calculation unit 13 decides that the explanatory variable (βj) does not affect the observation data group (y), and estimates the explanatory variable (βj) to be “0”. Further, when βj≥εj, the calculation unit 13 decides that the explanatory variable (βj) affects the observation data group (y), and leaves the explanatory variable (βj) as it is. For example, if β1<ε1 and β2≥ε2, the calculation unit 13 estimates the explanatory variable (β1) to be “0” and sets the explanatory variable group (β)=(0, β2). In this case, the main explanatory variable is β2, and the number of main explanatory variables is one.
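Step S7 then amounts to a simple thresholding pass, sketched below; comparing the magnitude |βj| with εj is my reading of the comparison described above, and c = 0.1 mirrors the example εj = 0.1σj.

```python
import numpy as np

def estimate_zeros(beta, sigma, c=0.1):
    """Step S7: estimate variables with |beta_j| < eps_j = c * sigma_j to be 0."""
    eps = c * np.asarray(sigma)
    return np.where(np.abs(beta) < eps, 0.0, beta)
```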
The calculation unit 13 outputs the estimated explanatory variable group (β) for each of the plurality of smoothing units, which is calculated as described above, to the outside via the interface (I/F) unit 30.
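Putting the sketches together, one possible end-to-end pass over the smoothing units (Steps S4 to S9) could read as follows, reusing the synthetic y and A from the sketch after equation (1); the width list, the value of λ, and the unit σj (which holds after normalization) are placeholder assumptions.

```python
import numpy as np

lam = 0.5                                    # assumed regularization variable
sigma = np.ones(A.shape[1])                  # sigma_j = 1 after normalization
results = {}
for a in [0.1, 0.2, 0.5, 1.0, 2.0]:          # example smoothing-unit widths a_n
    beta_hat = newton_convolutional_lasso(y, A, lam, b=a / 2)
    results[a] = estimate_zeros(beta_hat, sigma)   # Step S7 per smoothing unit
```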
Next, effects of this embodiment will be described. In sparse learning, when adding the regularization term and calculating the explanatory variable group (β), the variable group calculation apparatus 1 according to this embodiment first smooths the regularization term Ψ(β) by converting it to the convolution value of the L1 regularization term (the regularization term of the L1 norm) and the mollifier function. The smoothed regularization term Ψ(β), i.e., the convolution value, is then used to calculate the explanatory variable group (β).
This makes it possible to differentiate the regularization term Ψ(β) two or more times while making full use of the characteristic of the L1 norm that the explanatory variable group (β), the main factor, can easily be narrowed down, so that Newton's method (quadratic convergence) or its variations can be applied to the calculation of the explanatory variable group (β). This guarantees a convergence order of quadratic or higher, and thus the calculation speed of the explanatory variable group (β) can be improved. Therefore, the explanatory variable group (β) can be easily identified, and the calculation speed can be improved.
Further, the variable group calculation apparatus 1 of this embodiment determines the mollifier function according to the necessary number of differentiations of the L1 regularization term Ψ(β). By increasing the order of the mollifier function, an acceleration method of the required convergence order can be applied.
Note that the present disclosure is not limited to the above-described embodiment, and can be appropriately changed without departing from the spirit of the present disclosure.
For example, the L1 norm is used in the above embodiment, but the present disclosure can also be applied when the L1 norm is replaced with an Lp norm (p≠2).
In the above embodiment, the image data is used as the observation data group. However, the observation data group is not limited to this, and any large data group (big data) may be used. Examples of the observation data include voice data (conversation data), biometric data, astronomical data, and natural language processing data.
From the disclosure thus described, it will be obvious that the embodiments of the disclosure may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure, and all such modifications as would be obvious to one skilled in the art are intended for inclusion within the scope of the following claims.
Priority: Japanese Patent Application No. 2017-107122, filed May 2017 (JP).

References Cited

U.S. Patent Documents:
US 2008/0197842 A1, Lustig, Aug. 2008
US 2010/0091131 A1, Furukawa, Apr. 2010
US 2014/0266869 A1, Liu et al., Sep. 2014
US 2016/0260030 A1, He, Sep. 2016
US 2018/0014130 A1, Lunner, Jan. 2018
US 2018/0174028 A1, Lin, Jun. 2018

Foreign Patent Documents:
JP 2009-53926, Mar. 2009
JP 2017-33172, Feb. 2017
JP 6080783, Feb. 2017

Other Publications:
Ivan Selesnick, "Sparsity-Assisted Signal Smoothing (Revisited)," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Mar. 5, 2017, pp. 4546-4550.

Publication: US 2018/0349318 A1, Dec. 2018.