Mapping determination methods and data discrimination methods using the same

Information

  • Patent Grant
  • 5796921
  • Patent Number
    5,796,921
  • Date Filed
    Wednesday, October 11, 1995
  • Date Issued
    Tuesday, August 18, 1998
Abstract
A mapping determination method for obtaining mapping F from an N-dimensional metric vector space .OMEGA..sub.N to an M-dimensional metric vector space .OMEGA..sub.M has the following steps for obtaining the optimal mapping quickly and reliably. In the first step, L.sub.m complete, periodic basic functions g.sub.m (X) are set according to the distribution of samples classified into Q categories on the N-dimensional metric vector space .OMEGA..sub.N. In the second step, a function f.sub.m (X) indicating the m-th component of the mapping F is expressed as the linear sum of the functions g.sub.m (X) and L.sub.m coefficients c.sub.m. The third step provides Q teacher vectors T.sub.q =(t.sub.q.1, t.sub.q.2, t.sub.q.3, . . . , t.sub.q.M) (where q=1, 2, . . . , Q) for the categories on the M-dimensional metric vector space .OMEGA..sub.M, calculates the specified estimation function J, and obtains the coefficients c.sub.m which minimize the estimation function J. In the fourth step, the coefficients c.sub.m obtained in the third step are stored in memory.
Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to mapping determination methods required in various fields, such as pattern recognition, pattern generation, control systems for industrial robots, and prediction processing systems for economic problems and, more particularly, to a mapping determination system for implementing the desired mapping effectively and efficiently with the required precision and with an estimation function being minimized when the features of the mapping are determined by learning.
2. Description of the Related Art
In many fields, such as pattern recognition including voice recognition and image recognition, pattern generation including voice generation and computer graphics, prediction processing systems for economic prediction and stock-price prediction, and control systems for industrial robots, processing systems having mapping which generates or outputs output vectors in a specified dimension from input vectors in a specified dimension are used.
In voice recognition and image recognition, for example, linear mapping or nonlinear mapping is used for compressing characteristic vectors obtained from input data in digital signal processing.
Orthogonal transformations such as the discrete Fourier transformation (DFT) are linear mathematical transformations, and a logarithmic transformation is a nonlinear mathematical transformation. Since these mappings are fixed, however, they cannot readily be applied to a system in which the desired output vectors are to be obtained from arbitrary input vectors.
Therefore, methods for determining the desired mapping by learning have been examined. Typical examples of this kind of mapping are the KL transformation (linear) and the hierarchical neural network (nonlinear). More particularly, the hierarchical neural network has been applied to many fields because it can theoretically express any continuous mapping when the number of intermediate-layer elements is increased.
This hierarchical neural network is a network in which connections are made from the input layer to the output layer such that the outputs of the basic units (neuron elements) of each layer are connected to the inputs of the basic units of the next layer. FIG. 1 shows an example of such a hierarchical neural network. In this example, the neural network includes three layers. The input layer has four elements (elements 1 to 4), the intermediate layer has three elements (elements 5 to 7), and the output layer has one element (element 8).
Processing performed in a general three-layer neural network having N elements in the input layer, L elements in the intermediate layer, and M elements in the output layer will be described below. Assume that an input vector X given to the input layer is expressed as X=(x.sub.1, x.sub.2, x.sub.3, . . . , x.sub.N) and the corresponding output vector Y from the output layer is expressed as Y=(y.sub.1, y.sub.2, y.sub.3, . . . , y.sub.M).
The outputs from the N elements in the input layer are simply the inputs x.sub.i (i=1, 2, 3, . . . , N) to the elements, and they are input to the L elements in the intermediate layer. The intermediate layer performs the following calculation and outputs the results, where .omega..sub.ij indicates combination weight coefficients and s(x) is a sigmoid function:
x'.sub.j =s(x.sub.1 .omega..sub.1j +x.sub.2 .omega..sub.2j +. . . +x.sub.N .omega..sub.Nj) (j=1, 2, . . . , L)
The outputs x'.sub.j (j=1, 2, 3, . . . , L) from the L elements in the intermediate layer are input to the M elements in the output layer. The output layer performs the following calculation:
y.sub.k =s(x'.sub.1 .omega.'.sub.k1 +x'.sub.2 .omega.'.sub.k2 +. . . +x'.sub.L .omega.'.sub.kL) (k=1, 2, . . . , M)
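For illustration, here is a minimal sketch of the three-layer forward pass just described, assuming the commonly used sigmoid s(x)=1/(1+exp(-x)) and the 4-3-1 layout of FIG. 1; the weight values are arbitrary placeholders, not taken from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, w_out):
    """Three-layer forward pass: input layer -> intermediate layer -> output layer.

    x        : input vector of length N
    w_hidden : N x L matrix of combination weight coefficients (omega_ij)
    w_out    : L x M matrix of combination weight coefficients (omega'_kj)
    """
    x_prime = sigmoid(x @ w_hidden)   # intermediate-layer outputs x'_j
    return sigmoid(x_prime @ w_out)   # output-layer outputs y_k

# The 4-3-1 layout of FIG. 1 with arbitrary weights.
rng = np.random.default_rng(0)
x = np.array([0.1, 0.2, 0.3, 0.4])        # inputs x_1 .. x_4 to elements 1-4
w_hidden = rng.standard_normal((4, 3))    # weights into elements 5-7
w_out = rng.standard_normal((3, 1))       # weights into element 8
print(forward(x, w_hidden, w_out))        # single output y
```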
A neural network having four or more layers basically has the same network structure except for increases in the number of layers depending on an input-output relationship.
In a neural network, mapping is restricted to this kind of structure. In order to set the features of the mapping (to set the combination weight coefficients .omega.), learning samples are given to the input layer and the outputs of the mapping (the neural network) corresponding to the learning samples are obtained. Then, teacher vectors are given corresponding to these mapping outputs, an estimation function is specified as the sum of square errors between the mapping outputs and the teacher vectors, and the combination weight coefficients .omega. are determined by back propagation. The back-propagation algorithm is a steepest descent method (stochastic descent method) applied to each data item given to the neural network.
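For reference, the per-sample steepest-descent update that back propagation performs can be sketched as follows; the squared-error loss and the learning rate are illustrative assumptions rather than quotations from the patent.

```python
import numpy as np

def steepest_descent_step(w, x, t, f, grad_f, learning_rate=0.01):
    """One per-sample update of the weights w on the squared error (f(w, x) - t)^2.

    f      : scalar model output as a function of the weights and one input sample
    grad_f : gradient of f with respect to w at (w, x)
    """
    error = f(w, x) - t
    return w - learning_rate * 2.0 * error * grad_f(w, x)
```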
In the steepest descent method, the result generally depends on how the initial values are chosen, according to the type of the estimation function. A solution corresponding to a minimal (local minimum) value may be obtained instead of the optimal solution corresponding to the minimum (global minimum) value.
The estimation function of a neural network falls exactly into this case. It is not assured that the solution obtained by back propagation has the minimum error. This means that, depending on the given initial values, a mapping may be obtained whose outputs are quite different from the teacher vectors.
Therefore, in order to obtain the best possible quasi-optimal solution, measures are taken such as repeating the learning with random numbers given as initial values.
It is not assured, however, that the optimal solution (the minimum value) will be obtained with these stopgap measures. Quasi-optimal solutions may have large errors compared with the minimum error of the optimal solution, and a huge amount of learning time is needed to obtain even such quasi-optimal solutions.
In addition, only when the number of neuron elements in the intermediate layer is unbounded, namely, in an ideal condition, can the neural network express any continuous mapping. In reality, the desired mapping must be configured with an intermediate layer having a finite, given number of neuron elements. In other words, the performance of a neural network is the extent to which it approximates the ideal mapping when the number of elements in the intermediate layer is limited to a number usable in practice.
The degrees of structural freedom of a neural network comprise the number of hierarchical layers and the number of elements, both of which affect the size of the network, as well as the combination weight coefficients. Therefore, a neural network may have insufficient approximation capability under the size limits of actual use.
As described above, a neural network serving as a conventional learning-type mapping apparatus has the following three problems in learning.
(1) A minimum error is not assured for solutions. Solutions may be locally minimum values.
(2) A great amount of learning time is required to obtain a solution as close as possible to the optimal solution.
(3) A neural network may have insufficient approximation capability for the desired mapping in relation to the size in actual use.
SUMMARY OF THE INVENTION
Under these conditions, the present invention is made in order to obtain the optimal solution quickly and reliably and to obtain mapping having higher approximation capability.
Accordingly, it is an object of the present invention to provide a mapping determination method especially suitable for periodic mapping, wherein the desired mapping is given under the condition that the estimation function is minimized and a great amount of learning time is not required to determine the mapping.
The above object of the present invention is achieved through the provision of a mapping determination method for obtaining mapping F from an N-dimensional metric vector space .OMEGA..sub.N to an M-dimensional metric vector space .OMEGA..sub.M, comprising: a first step of setting L.sub.m complete, periodic basic functions g.sub.m (X) according to the distribution of samples classified into Q categories on the N-dimensional metric vector space .OMEGA..sub.N ; a second step of expressing a function f.sub.m (X) indicating the m-th component of the mapping F with the linear sum of the functions g.sub.m (X) and L.sub.m coefficients c.sub.m ; a third step of providing Q teacher vectors T.sub.q =(t.sub.q.1, t.sub.q.2, t.sub.q.3, . . . , t.sub.q.M) (where q=1, 2, . . . , Q) for the categories on the M-dimensional metric vector space .OMEGA..sub.M, calculating the specified estimation function J, and obtaining the coefficients c.sub.m which minimize the estimation function J; and a fourth step of storing the coefficients c.sub.m obtained in the third step.
The estimation function J may be expressed as follows, where E{X.epsilon.S.sub.q }.times.{f(X)} indicates a calculation for obtaining the expected values of the function f(X) over all elements in a learning sample S.sub.q. ##EQU3##
The coefficients c.sub.m which minimize the estimation function J may be obtained by partially differentiating the estimation function J by the coefficients c.sub.m, and setting the result to 0 in the third step.
The basic functions g.sub.m (X) may be trigonometric functions.
The above object of the present invention is also achieved through the provision of a data discrimination method for discriminating input-data categories using the mapping function obtained from the above-described mapping determination method.





BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a configuration example of a neural network.
FIG. 2 shows a configuration of a network using a mapping determination method of the present invention.
FIG. 3 is a flowchart indicating the processes of the mapping determination method of the present invention.
FIG. 4 is a block diagram showing a configuration example of a discrimination apparatus to which the mapping determination method of the present invention is applied.
FIG. 5 indicates two-dimensional learning data in categories C.sub.1 and C.sub.2.
FIG. 6 indicates the outputs of the mapping to which the learning data is input.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In the present invention, a function f.sub.m (X) for the m-th component of mapping F is defined as follows with a linear combination of L.sub.m functions g.sub.m.i (X) when the mapping F from an N-dimensional metric vector space .OMEGA..sub.N to an M-dimensional metric vector space .OMEGA..sub.M is determined.
f.sub.m (X)=c.sub.m.sup.T g.sub.m (X) 3
where,
X.epsilon..OMEGA..sub.N,
c.sub.m =[c.sub.m.1, c.sub.m.2, . . . , c.sub.m.Lm ].sup.T,
g.sub.m (x)=[g.sub.m.1 (x), g.sub.m.2 (x), . . . , g.sub.m.Lm (x)].sup.T,
T: transposition,
X=(x.sub.1, x.sub.2, x.sub.3, . . . , x.sub.N),
c.sub.m : Specified coefficients.
That is, a complete function system on an N-variable function space is employed as the functions g.sub.m.i (x) in the present invention. From a theorem in functional analysis which says that any function can be expressed with a linear combination of a complete function system, it is understood that any continuous mapping can theoretically be expressed with the functions g.sub.m.i (x) when the number L.sub.m is large enough. This corresponds to the condition that any continuous mapping can theoretically be expressed when the number of neuron elements in the intermediate layer of a hierarchical neural network is large enough.
For comparison with the neural network shown in FIG. 1, a network according to mapping of the present invention is illustrated in FIG. 2. Inputs x.sub.1 to x.sub.4 are input to elements 11 to 14, then are output to elements 15 to 17 in the intermediate layer as is. The element 15 performs the calculation shown by the following expression.
X'.sub.1 =c.sub.1 g.sub.1 (X) 4
The function g.sub.1 (X) (=g.sub.1 (x.sub.1, x.sub.2, x.sub.3, x.sub.4)) is calculated from the variables x.sub.1, x.sub.2, x.sub.3, and x.sub.4. Then coefficient c.sub.1 is multiplied. In the same way, the elements 16 and 17 perform the calculations shown by the following expressions.
X'.sub.2 =c.sub.2 g.sub.2 (X) 5
X'.sub.3 =c.sub.3 g.sub.3 (X) 6
The element 18 in the output layer calculates the sum of x'.sub.1, x'.sub.2, and x'.sub.3 output from the elements 15 to 17 to obtain the output y.
Once the functions g.sub.i (X) are selected or set to specific functions, the mapping F is obtained by setting the coefficients c.sub.i to specific values through learning.
In other words, the functions g.sub.i (X) are selected such that the pattern structure to be analyzed is more clearly identified. When a pattern is distributed over three classes (categories) in one dimension, for example, the three classes cannot be identified with the functions 1 and X; it is necessary to add a term of second or higher order X.sup.i (i>1) to the functions g.sub.i (X).
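As a rough numerical illustration of this point (the class positions, teacher values, sample counts, and nearest-teacher classification rule below are hypothetical, not taken from the patent), a least-squares fit over the basis {1, x} cannot map three one-dimensional classes whose spatial order does not match the order of their teacher values, whereas adding x.sup.2 can:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical 1-D classes whose spatial order does not match the order of
# their teacher values: teacher 0 near x=0.0, teacher 1 near x=1.0, teacher 2 near x=0.5.
centers = [0.0, 1.0, 0.5]
teachers = [0.0, 1.0, 2.0]
x = np.concatenate([rng.normal(c, 0.05, 60) for c in centers])
t = np.concatenate([np.full(60, v) for v in teachers])

def accuracy(design):
    coeffs, *_ = np.linalg.lstsq(design, t, rcond=None)
    out = design @ coeffs
    # classify each sample by the nearest teacher value
    labels = np.abs(out[:, None] - np.array(teachers)).argmin(axis=1)
    truth = np.repeat(np.arange(3), 60)
    return np.mean(labels == truth)

linear = np.column_stack([np.ones_like(x), x])              # basis {1, x}
quadratic = np.column_stack([np.ones_like(x), x, x**2])     # basis {1, x, x^2}
print("basis {1, x}:      ", accuracy(linear))      # stays well below 1.0
print("basis {1, x, x^2}: ", accuracy(quadratic))   # close to 1.0
```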
To determine the coefficients c.sub.i, learning samples (the set of learning samples of category C.sub.q : S.sub.q (=(S.sub.q1, S.sub.q2, . . . , S.sub.qN))) classified into Q categories C.sub.q (q=1, 2, 3, . . . , Q) on an N-dimensional (metric) vector space .OMEGA..sub.N are used together with Q teacher vectors T.sub.q (=(t.sub.q.1, t.sub.q.2, t.sub.q.3, . . . , t.sub.q.M), where q=1, 2, . . . , Q) on an M-dimensional (metric) vector space .OMEGA..sub.M for the categories C.sub.q to calculate the estimation function J expressed by the following expression. It is preferred that the teacher vectors be at general positions. This means that t.sub.q.1 t.sub.q.2, t.sub.q.2 t.sub.q.3, . . . , t.sub.q.M-1 t.sub.q.M are linearly independent. ##EQU4##
Let E{x.epsilon.S.sub.q }[ ] mean the calculation of expected values (averages) over S.sub.q of the quantity within [ ]. The coefficient vector c.sub.m is determined in advance such that this estimation function is minimized. The desired mapping can then be performed on input vectors by calculating f.sub.m (x) specified in the expression 3 with the use of the obtained coefficient vector c.sub.m.
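Consistent with the description of J as the expected squared error between the mapping outputs and the teacher vectors, a plausible explicit form of expression 7 (stated here as an assumption, since the expression itself is only referenced above) is:

```latex
J = \sum_{q=1}^{Q} \sum_{m=1}^{M} E_{X \in S_q}\left[ \left( f_m(X) - t_{q.m} \right)^2 \right]
```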
Based on Hilbert's famous theorem in functional analysis, which says that any function can be expressed with a linear combination of a complete function system, any continuous mapping can theoretically be expressed by employing complete function systems in an N-variable function space as the basic functions g.sub.m.i (x) and making the number L.sub.m large enough.
How to obtain the coefficient vector c.sub.m which minimizes the estimation function described in the expression 7 will be described below. From the expression 7, the following expression can be obtained. ##EQU5##
The expression 8 can be expressed as follows; ##EQU6##
The expression 3 is substituted into the expression 10 to get the following expression. ##EQU7##
G.sub.m is an L.sub.m -by-L.sub.m symmetric matrix, H.sub.m is an L.sub.m -dimensional vector, and K.sub.m is a constant, all obtained from the learning samples and teacher vectors. The expression 11 is substituted into the expression 9 to get the following expression. ##EQU8##
The necessary and sufficient condition for c.sub.m to minimize the expression 12 is as follows:
.differential.J/.differential.c.sub.m =0 (m=1, 2, . . . , M) 13
From this expression, the following expression is obtained.
G.sub.m c.sub.m -H.sub.m =0 (m=1, 2, . . . , M) 14
By solving this equation 14 for c.sub.m, the coefficient vector c.sub.m which minimizes the estimation function J is obtained.
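As a sketch of this step (assuming G.sub.m and H.sub.m have already been accumulated from the learning samples as described; the function and variable names are illustrative):

```python
import numpy as np

def solve_coefficients(G_m, H_m):
    """Solve G_m c_m = H_m for the coefficient vector c_m (equation 14).

    G_m : (L_m x L_m) symmetric matrix accumulated from the learning samples
    H_m : (L_m,) vector accumulated from the learning samples and teacher values
    """
    # When G_m is non-singular the minimizer is unique; otherwise fall back to a
    # least-squares solution (the indeterminate case mentioned in the text).
    try:
        return np.linalg.solve(G_m, H_m)
    except np.linalg.LinAlgError:
        return np.linalg.lstsq(G_m, H_m, rcond=None)[0]
```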
As described above, when mapping according to the present invention is determined, minimization of the estimation function is assured except for special cases such as when the equation 14 is indeterminate or inconsistent. This also means that, instead of solving the equation 14, applying the steepest descent method to the estimation function J shown in the expression 12 generates the solution c.sub.m uniquely, without suffering from the initial-value problem.
This feature, that a unique solution is obtained, eliminates the repeated learning with changed initial values that a neural network requires in order to obtain a quasi-optimal solution.
Assume that the mapping shown in the expression 3 uses periodic functions (f(x)=f(x+.theta.), .theta.: period), especially trigonometric functions, as the basic functions. That is, the mapping in which the m-th component function is expressed by the expression 15 is used. A periodic mapping is approximated with fewer terms by such periodic basic functions than by, for example, monomials in N variables.
f.sub.m (x)=a.sub.m +b.sub.m.sup.T g.sub.m (x)+c.sub.m.sup.T h.sub.m (x) 15
where a.sub.m, b.sub.m =[b.sub.m.1, b.sub.m.2, . . . , b.sub.m.Im ].sup.T, and c.sub.m =[c.sub.m.1, c.sub.m.2, . . . , c.sub.m.Jm ].sup.T are coefficients; 1, g.sub.m (x)=[g.sub.m.1 (x), g.sub.m.2 (x), . . . , g.sub.m.Im (x)].sup.T, and h.sub.m (x)=[h.sub.m.1 (x), h.sub.m.2 (x), . . . , h.sub.m.Jm (x)].sup.T are the basic functions. g.sub.m.i (x) and h.sub.m.j (x) are expressed as follows:
g.sub.m.i (x)=cos(p.sub.i.sup.T x) (i=1, . . . , I.sub.m) 16
h.sub.m.j (x)=sin(q.sub.j.sup.T x) (j=1, . . . , J.sub.m) 17
p.sub.i and q.sub.j are N-dimensional vectors selected such that the following set of functions is linearly independent.
{1, g.sub.m.1 (x), g.sub.m.2 (x), . . . , g.sub.m.Im (x), h.sub.m.1 (x), h.sub.m.2 (x), . . . , h.sub.m.Jm (x)}
When x.epsilon..OMEGA..sub.2, that is, N=2, for example, g.sub.m.i (x) corresponds to cos x.sub.1, cos 2x.sub.1, cos 3x.sub.1, . . . , cos x.sub.2, cos 2x.sub.2, cos 3x.sub.2, . . . , cos(x.sub.1 +x.sub.2), . . . and so on. The sine functions h.sub.m.j (x) correspond in the same way.
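As a sketch of how such a two-variable trigonometric basis might be enumerated in code (the enumeration order of the frequency vectors is an illustrative choice, not something prescribed by the text):

```python
import numpy as np
from itertools import product

def trig_basis_2d(max_order):
    """Return frequency vectors p = (p1, p2) with 1 <= p1 + p2 <= max_order.

    Each p yields one cosine term cos(p^T x) and one sine term sin(p^T x),
    matching terms such as cos x1, cos 2x1, cos(x1 + x2), sin 3x2, ...
    """
    return [np.array(p) for p in product(range(max_order + 1), repeat=2)
            if 1 <= sum(p) <= max_order]

def evaluate_basis(x, freqs):
    """Evaluate s(x) = [1, cos(p^T x) ..., sin(p^T x) ...] at a 2-D point x."""
    dots = [f @ x for f in freqs]
    return np.array([1.0] + [np.cos(d) for d in dots] + [np.sin(d) for d in dots])

freqs = trig_basis_2d(4)
print(len(freqs))                                      # 14 cosine and 14 sine terms
print(evaluate_basis(np.array([0.1, 0.2]), freqs)[:5])
```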
The following expressions are defined to express the expression 15 in the same way as the expression 3.
r.sub.m =[a.sub.m b.sub.m.sup.T c.sub.m.sup.T ].sup.T 18
s.sub.m (x)=[1 g.sub.m.sup.T (x) h.sub.m.sup.T (x)].sup.T 19
With the expressions 18 and 19, the expression 15 can be expressed in the same form as the expression 3, as follows:
f.sub.m (x)=r.sub.m.sup.T s.sub.m (x) 20
By defining the estimation function J as the expression 7, r.sub.m which minimizes J can be obtained in the same way as for c.sub.m.
.differential.J/.differential.r.sub.m =0(m=1, 2, . . . , M) 21
From this expression, the following equation is obtained.
G.sub.m r.sub.m -H.sub.m =0 (m=1, 2, . . . , M) 22
G.sub.m and H.sub.m are coefficients obtained from the following expressions. ##EQU9##
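Under the assumption that they are the usual normal-equation coefficients obtained by minimizing J with f.sub.m (x)=r.sub.m.sup.T s.sub.m (x) (an interpretation of expressions 23 and 24, which are only referenced here), they would take the form:

```latex
G_m = \sum_{q=1}^{Q} E_{x \in S_q}\left[ s_m(x)\, s_m(x)^{T} \right],
\qquad
H_m = \sum_{q=1}^{Q} t_{q.m}\; E_{x \in S_q}\left[ s_m(x) \right]
```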
Mapping F which minimizes the estimation function J is obtained by solving the equation 22 to get r.sub.m. Besides being guaranteed to minimize the estimation function J, this mapping is superior in that it approximates a periodic mapping with fewer terms, thanks to the use of trigonometric functions as the basic functions.
FIG. 3 shows the above-described process. A basic function vector s.sub.m (X) using trigonometric functions is determined according to the expression 19 in step S1. The coefficients G.sub.m and H.sub.m are obtained from the learning data and teacher vectors according to the expressions 23 and 24 in step S2. Then, in step S3, the coefficient vector r.sub.m is obtained by solving the expression 22. Next, in step S4, the mapping f.sub.m (x) is obtained from the basic function vector and the coefficient vector according to the expression 20.
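Putting steps S1 to S4 together as a sketch for one output component; the explicit forms of G.sub.m and H.sub.m follow the assumed expressions given above, and all names are illustrative:

```python
import numpy as np

def fit_mapping(samples_per_category, teacher_values, basis):
    """Steps S1-S4 for one output component: build G and H, then solve G r = H.

    samples_per_category : list of arrays; samples_per_category[q] has shape (n_q, N)
    teacher_values       : list of scalars t_q, one per category
    basis                : function mapping an N-vector x to the basis vector s(x)
    """
    dim = len(basis(samples_per_category[0][0]))
    G = np.zeros((dim, dim))
    H = np.zeros(dim)
    for S_q, t_q in zip(samples_per_category, teacher_values):
        s = np.array([basis(x) for x in S_q])   # basis vectors for category q
        G += (s.T @ s) / len(S_q)               # E{x in S_q}[ s(x) s(x)^T ]
        H += t_q * s.mean(axis=0)               # t_q * E{x in S_q}[ s(x) ]
    r = np.linalg.solve(G, H)                   # step S3: solve G r = H
    return lambda x: r @ basis(x)               # step S4: f(x) = r^T s(x)
```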
As the basic functions, cosine functions only, sine functions only, or combinations of polynomials and trigonometric functions may also be used.
An example will be described below in which the above-described mapping determination method is applied to a discrimination apparatus for classifying two-dimensional data into two categories. FIG. 4 shows the category discrimination apparatus. Sample data for determining the mapping is supplied to a mapping determination section 44; it may be the two-dimensional data to be input to a guaranteed global minimum mapping (GGM) calculation section 41. The mapping determination section 44 determines the mapping coefficients c.sub.i using the mapping determination method described above, and the determined coefficients c.sub.i are stored in a coefficient storage 42. The two-dimensional data is input to the GGM calculation section 41, and calculation is performed to obtain the function f.sub.m (x) of mapping F. Referring as needed to the coefficients c.sub.i stored in the coefficient storage 42, the GGM calculation section 41 performs the specified calculation. The output y of the GGM calculation section 41 is sent to a discrimination section 43 for the specified discrimination. This apparatus corresponds to a system having a two-dimensional input space (N=2), a one-dimensional output space (M=1), and two categories (Q=2, that is, C.sub.1 and C.sub.2). The configuration of the discrimination apparatus will be described below for a case in which two-dimensional learning data (x.sub.1 and x.sub.2) are given for the two categories and T.sub.1 =1 and T.sub.2 =0 are given as teacher vectors (scalars) for the categories C.sub.1 and C.sub.2.
Assume that the two-dimensional data (x.sub.1 and x.sub.2) shown in FIG. 5 is given. In this example, the learning data for the category C.sub.1 exists around the points (0.0, 0.0), (0.4, 0.4), and (0.8, 0.8), and the learning data for the category C.sub.2 exists around the points (0.2, 0.2) and (0.6, 0.6). They are indicated by black points and white points, respectively. The number of C.sub.1 data points is 156 and that of C.sub.2 data points is 105, both sets including some coinciding points. This learning data appears to be periodic. The mapping is determined by using the following 29 functions as the basic functions in f.sub.m (x), shown in the expression 3.
1, cos x.sub.1, cos x.sub.2, cos 2x.sub.1, cos 2x.sub.2, cos(x.sub.1 +x.sub.2), cos 3x.sub.1, cos 3x.sub.2, cos(2x.sub.1 +x.sub.2), cos(x.sub.1 +2x.sub.2), cos 4x.sub.1, cos 4x.sub.2, cos(3x.sub.1 +x.sub.2), cos(2x.sub.1 +2x.sub.2), cos(x.sub.1 +3x.sub.2), sin x.sub.1, sin x.sub.2, sin 2x.sub.1, sin 2x.sub.2, sin(x.sub.1 +x.sub.2), sin 3x.sub.1, sin 3x.sub.2, sin(2x.sub.1 +x.sub.2), sin(x.sub.1 +2x.sub.2), sin 4x.sub.1, sin 4x.sub.2, sin(3x.sub.1 +x.sub.2), sin(2x.sub.1 +2x.sub.2), sin(x.sub.1 +3x.sub.2)
g.sub.m (x) and h.sub.m (x) are expressed with these cosine and sine functions. G.sub.m and H.sub.m, shown in the expressions 23 and 24, are calculated for these basic functions. In other words, the expected values E{x.epsilon.S.sub.1 }[ ] in the expression 23 are obtained for the learning data in the category C.sub.1, and the expected values E{x.epsilon.S.sub.2 }[ ] in the expression 23 are obtained for the learning data in the category C.sub.2. The sum of these expected values is G.sub.m.
In the same way, the expected values E{x.epsilon.S.sub.1 }[ ] in the expression 24 are obtained for the learning data in the category C.sub.1, and the expected values E{x.epsilon.S.sub.2 }[ ] in the expression 24 are obtained for the learning data in the category C.sub.2. The sum of their products with the teacher vectors T.sub.1 =1 and T.sub.2 =0 is calculated as H.sub.m. Then, the equation 22 is solved with the obtained coefficients G.sub.m and H.sub.m to get the coefficient vector r.sub.m =[a.sub.m b.sub.m.sup.T c.sub.m.sup.T ].sup.T. With this coefficient vector and the basic functions, the mapping (expression 15) is determined. Table 1 shows the coefficients actually obtained for the learning data shown in FIG. 5, corresponding to the basic functions.
TABLE 1
Basic Functions and Corresponding Coefficients
No.  Basic function            Coefficient
 1   1                          28.355072
 2   cos x.sub.1               -10.587779
 3   cos x.sub.2                -4.354881
 4   cos 2x.sub.1              -37.078199
 5   cos 2x.sub.2               -8.762493
 6   cos (x.sub.1 +x.sub.2)    -20.528007
 7   cos 3x.sub.1               -3.716895
 8   cos 3x.sub.2               -0.696277
 9   cos (2x.sub.1 +x.sub.2)    13.326908
10   cos (x.sub.1 +2x.sub.2)     5.626294
11   cos 4x.sub.1               -0.782561
12   cos 4x.sub.2                6.456909
13   cos (3x.sub.1 +x.sub.2)     5.723860
14   cos (2x.sub.1 +2x.sub.2)    5.728928
15   cos (x.sub.1 +3x.sub.2)    -6.622603
16   sin x.sub.1                 8.519443
17   sin x.sub.2                -0.534107
18   sin 2x.sub.1               -3.635216
19   sin 2x.sub.2                3.856045
20   sin (x.sub.1 +x.sub.2)      5.956769
21   sin 3x.sub.1               -0.886165
22   sin 3x.sub.2              -13.770971
23   sin (2x.sub.1 +x.sub.2)    12.679578
24   sin (x.sub.1 +2x.sub.2)    -2.730276
25   sin 4x.sub.1                4.526975
26   sin 4x.sub.2               -3.533694
27   sin (3x.sub.1 +x.sub.2)   -17.697961
28   sin (2x.sub.1 +2x.sub.2)   11.321379
29   sin (x.sub.1 +3x.sub.2)     1.625174
FIG. 6 shows the outputs of a discrimination apparatus using the obtained mapping. The outputs for the data inputs in the category C.sub.1 are black points whereas those of the data inputs in the category C.sub.2 are white points. The horizontal axis indicates data numbers and the vertical axis indicates the outputs of the mapping for the corresponding data. The C.sub.1 data is mapped around T.sub.1 =1 and the C.sub.2 data is mapped around T.sub.2 =0, indicating that this discrimination apparatus effectively works for classifying the input data into the categories.
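This example could be reproduced along the following lines with the fitting procedure sketched earlier; the cluster spread, the random seed, and the exact split of sample counts are illustrative assumptions, so the resulting coefficients will not reproduce Table 1 exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the learning data of FIG. 5: C1 clustered near (0,0), (0.4,0.4),
# (0.8,0.8) (156 points) and C2 clustered near (0.2,0.2), (0.6,0.6) (105 points).
c1 = np.vstack([rng.normal(c, 0.03, (52, 2)) for c in (0.0, 0.4, 0.8)])
c2 = np.vstack([rng.normal(0.2, 0.03, (52, 2)), rng.normal(0.6, 0.03, (53, 2))])

# Frequency vectors for the 14 cosine terms listed above; the same vectors give
# the 14 sine terms, and the constant 1 completes the 29 basic functions.
P = np.array([(1, 0), (0, 1), (2, 0), (0, 2), (1, 1), (3, 0), (0, 3), (2, 1), (1, 2),
              (4, 0), (0, 4), (3, 1), (2, 2), (1, 3)], dtype=float)

def s(x):
    d = P @ x
    return np.concatenate(([1.0], np.cos(d), np.sin(d)))

S1 = np.array([s(x) for x in c1])
S2 = np.array([s(x) for x in c2])
G = S1.T @ S1 / len(S1) + S2.T @ S2 / len(S2)      # sum of per-category expectations
H = 1.0 * S1.mean(axis=0) + 0.0 * S2.mean(axis=0)  # teacher values T1 = 1, T2 = 0
r = np.linalg.lstsq(G, H, rcond=None)[0]           # lstsq guards against ill-conditioning

f = lambda x: r @ s(x)
print("mean output over C1:", np.mean([f(x) for x in c1]))   # should lie near 1
print("mean output over C2:", np.mean([f(x) for x in c2]))   # should lie near 0
```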
As described above, with the use of learning data for each category in an input vector space and teacher vectors for each category in an output space, mapping can be configured as a linear combination of basic functions, especially trigonometric functions, such that the estimation function (expression 7) is minimized.
It is needless to say that the above-described method, which uses trigonometric functions as the basic functions, can also be applied to create the desired mapping from .OMEGA..sub.N to .OMEGA..sub.M when the given learning data is not periodic, unlike the above example. When the input vector space is limited to a finite region of .OMEGA..sub.N and the learning data is periodic, this method is especially effective.
When data obtained through observation over a certain period is used to predict values at a future time, as in a prediction processing system, and the observed data is periodic, a prediction processing system for data other than the observed data, that is, future data, can be implemented by configuring a mapping having the trigonometric functions described above as the basic functions, with the observed data used as the learning data.
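A sketch of that idea for a one-dimensional periodic observation; the signal, the basis size, and the forecast horizon are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed data: a periodic signal sampled on [0, 4*pi), with a little noise.
t_obs = np.linspace(0.0, 4 * np.pi, 200, endpoint=False)
y_obs = np.sin(t_obs) + 0.3 * np.cos(2 * t_obs) + 0.05 * rng.standard_normal(t_obs.size)

def basis(t, order=4):
    """s(t) = [1, cos t, sin t, cos 2t, sin 2t, ...] up to the given order."""
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    return np.column_stack(cols)

# Least-squares fit of the trigonometric mapping on the observed window ...
coeffs, *_ = np.linalg.lstsq(basis(t_obs), y_obs, rcond=None)

# ... then evaluate it beyond the observation window as a prediction.
t_future = np.linspace(4 * np.pi, 6 * np.pi, 100)
y_pred = basis(t_future) @ coeffs
print(np.max(np.abs(y_pred - (np.sin(t_future) + 0.3 * np.cos(2 * t_future)))))  # should be small
```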
In addition, with the use of the present invention, the precision in pattern recognition can be increased, providing a highly precise pattern recognition apparatus.
According to the present invention, since trigonometric functions are used as the basic functions, minimization of an estimation function is assured and quick learning is enabled because repeated learning is not required. Mapping effective to periodic inputs can be implemented.
Claims
  • 1. A mapping determination method for obtaining a mapping F from an N-dimensional metric vector space .OMEGA..sub.N to an M-dimensional metric vector space .OMEGA..sub.M, comprising the steps of:
  • a first step of setting L.sub.m complete, periodic basic functions g.sub.m,i (X), where i is an integer in the range from 1 through L.sub.m, according to a distribution of samples classified into Q categories on said N-dimensional metric vector space .OMEGA..sub.N ;
  • a second step of expressing functions f.sub.m (X), where m=1, 2, . . . , M, each indicating the m-th component of said mapping F as a linear sum determined by said functions g.sub.m,i (X) and a set of L.sub.m coefficients c.sub.m ;
  • a third step of providing Q teacher vectors T.sub.q =(t.sub.q.1, t.sub.q.2, t.sub.q.3, . . . , t.sub.q.M) (where q=1, 2, . . . , Q) for said categories on said M-dimensional metric vector space .OMEGA..sub.M, calculating an estimation function J specified by the functions f.sub.m (X), said teacher vectors, and said distribution of samples, and determining values of said coefficients c.sub.m which minimize said estimation function J; and
  • a fourth step of storing into memory said values of said coefficients c.sub.m obtained in the third step, wherein said estimation function J is expressed as follows, where E{X.epsilon.S.sub.q }x {f(X)} indicates a calculation for obtaining expected values of said functions f.sub.m (X) over all elements in learning samples S.sub.q : ##EQU10## wherein said values of the coefficients c.sub.m which minimize said estimation function J are obtained by partially differentiating said estimation function J by said coefficients c.sub.m, and setting the result to 0 in the third step, and wherein the third step further comprises the steps of:
  • calculating ##EQU11## calculating ##EQU12## and obtaining said values of said coefficients c.sub.m from G.sub.m .times.c.sub.m -H.sub.m =0.
  • 2. The mapping determination method of claim 1, wherein said basic functions g.sub.m,i (X) comprise trigonometric functions and polynomial functions.
  • 3. A mapping determination apparatus for obtaining a mapping F from an N-dimensional metric vector space .OMEGA..sub.N to an M-dimensional metric vector space .OMEGA..sub.M, comprising:
  • first means for setting L.sub.m complete, periodic basic functions g.sub.m,i (X), where i is an integer in the range from 1 through L.sub.m, according to a distribution of samples classified into Q categories on said N-dimensional metric vector space .OMEGA..sub.N ;
  • second means for expressing functions f.sub.m (X), where m=1, 2, . . . , M, each indicating the m-th component of said mapping F as a linear sum determined by said functions g.sub.m,i (X) and a set of L.sub.m coefficients c.sub.m ;
  • third means for providing Q teacher vectors T.sub.q =(t.sub.q.1, t.sub.q.2, t.sub.q.3, . . . , t.sub.q.M) (where q=1, 2, . . . , Q) for said categories on said M-dimensional metric vector space .OMEGA..sub.M, calculating an estimation function J specified by the functions f.sub.m (X), said teacher vectors, and said distribution of samples, and determining values of said coefficients c.sub.m which minimize said estimation function J; and
  • fourth means for storing into memory said coefficients c.sub.m obtained by the third means, wherein said estimation function J is expressed as follows, where E{X.epsilon.S.sub.q }.times.{f(X)} indicates a calculation for obtaining expected values of said functions f.sub.m (X) over all elements in learning samples S.sub.q : ##EQU13## and wherein the third means further comprises: means for calculating ##EQU14## means for calculating ##EQU15## and means for obtaining said values of said coefficients c.sub.m from G.sub.m .times.c.sub.m -H.sub.m =0.
  • 4. A data discrimination method for classifying input N-dimensional metric vectors into plural categories specified in advance, comprising the steps of:
  • a first step of receiving said N-dimensional metric vectors; a second step of reading coefficients c.sub.m stored in memory, said coefficients c.sub.m being determined by a method comprising the steps of:
  • a first calculation step of setting L.sub.m complete, periodic basic functions g.sub.m,i (X), where i=1, . . . , L.sub.m, according to a distribution of samples on said N-dimensional metric vector space .OMEGA..sub.N ;
  • a second calculation step of expressing functions f.sub.m (X), where m=1, . . . , M, each indicating the m-th component of said mapping F as a linear sum determined by said functions g.sub.m,i (X) and a set of L.sub.m coefficients c.sub.m ;
  • a third calculation step of providing Q teacher vectors T.sub.q =(t.sub.q.1, t.sub.q.2, t.sub.q.3, . . . , t.sub.q.M) (where q=1, 2, . . . , Q) on an M-dimensional metric vector space .OMEGA..sub.M, calculating a specified estimation function J, and obtaining values of coefficients c.sub.m which minimize said estimation function J; and
  • a fourth calculation step of storing into memory said values of said coefficients c.sub.m obtained in the third calculation step;
  • a step of obtaining said values of said functions f.sub.m (X), each indicating the m-th component of said mapping F, from said values of said coefficients c.sub.m and said functions g.sub.m,i (X); and
  • a step of classifying said N-dimensional metric vectors into said plural categories specified in advance from the calculation results obtained by substituting said N-dimensional metric vectors into said functions f.sub.m (X), wherein said estimation function J is expressed as follows, where E{X.epsilon.S.sub.q }.times.{f(X)} indicates a calculation for obtaining expected values of said functions f.sub.m (X) over all elements in learning samples S.sub.q : ##EQU16## wherein said values of said coefficients c.sub.m which minimize said estimation function J are obtained by partially differentiating said estimation function J by said coefficients c.sub.m, and setting the result to 0 in the third calculation step, and wherein the third calculation step further comprises the steps of:
  • calculating ##EQU17## calculating ##EQU18## and obtaining said values of said coefficients c.sub.m from G.sub.m .times.c.sub.m -H.sub.m =0.
Priority Claims (1)
Number Date Country Kind
6-264948 Oct 1994 JPX
US Referenced Citations (4)
Number Name Date Kind
5408588 Ulug Apr 1995
5517597 Aparicio, IV et al. May 1996
5517667 Wang May 1996
5528729 Watanabe Jun 1996
Non-Patent Literature Citations (9)
Entry
Shridhar, M. and Badreldin, A. "Feature evaluation and sub-class determination through functional mapping" published in Pattern Recognition Letters, May 1985, Netherlands, vol. 3, No. 3, pp. 155-159.
Tseng, Chea-Tin (Tim) and Moret, Bernard M.E. "A New Method for One-Dimensional Linear Feature Transformations" published in Pattern Recognition, Jan. 1990, Great Britain, vol. 23, No. 7, pp. 745-752.
Siedlecki, Wojciech; Siedlecka, Kinga and Sklansky, Jack "An Overview of Mapping Techniques for Exploratory Pattern Analysis" published in Pattern Recognition, 1988, Great Britain, vol. 21, No. 5, pp. 411-429.
Reber, William L. and Lyman, John "An Artificial Neural System Design for Rotation and Scale Invariant Pattern Recognition" IEEE First International Conference on Neural Networks, Jun. 21-24, 1987, San Diego, California, pp. IV-277-IV-283.
Laine, Andrew F. and Schuler, Sergio "Hexagonal Wavelet Representations for Recognizing Complex Annotations," Proceedings of the Computer Society Conference on Computer Vision A Pattern Recognition, Seattle, Jun. 21-23, 1994, Institute of Electrical and Electronics Engineers, pp. 740-745.
B.E. Rosen, et al., "Transcendental Functions in Backward Error Propagation," 1990 Int'l. Conf. on Systems, Man, and Cybernetics, pp. 239-241. Nov. 1990.
Y. Shin and J. Ghosh, "Approximation of Multivariate Functions Using Ridge Polynomial Networks," 1992 Int'l. Conf. on Neural Networks, vol. 2, pp. II-380 to II-385. Jun. 1992.
D.H. Rao and M.M. Gupta, "Dynamic Neural Units and Function Approximation," 1993 Int'l. Conf. on Neural Networks, pp. 743-748. Mar. 1993.
C.-H. Chang, et al., "Polynomial and Standard Higher Order Neural Network," 1993 Int'l. Conf. on Neural Networks, pp. 989-994. Mar. 1993.