Method and apparatus for operating a neural network with missing and/or incomplete data

Information

  • Patent Grant
  • Patent Number
    6,591,254
  • Date Filed
    Tuesday, November 6, 2001
  • Date Issued
    Tuesday, July 8, 2003
Abstract
A neural network system is provided that models the system in a system model (12) with the output thereof providing a predicted output. This predicted output is modified or controlled by an output control (14). Input data is processed in a data preprocess step (10) to reconcile the data for input to the system model (12). Additionally, the error resulting from the reconciliation is input to an uncertainty model to predict the uncertainty in the predicted output. This is input to a decision processor (20) which is utilized to control the output control (14). The output control (14) is controlled to either vary the predicted output or to inhibit the predicted output whenever the output of the uncertainty model (18) exceeds a predetermined decision threshold, input by a decision threshold block (22). Additionally, a validity model (16) is also provided which represents the reliability or validity of the output as a function of the number of data points in a given data region during training of the system model (12). This predicts the confidence in the predicted output, which is also input to the decision processor (20). The decision processor (20) therefore bases its decision on the predicted confidence and the predicted uncertainty. Additionally, the uncertainty output by the data preprocess block (10) can be utilized to train the system model (12).
Description




TECHNICAL FIELD OF THE INVENTION




The present invention pertains in general to neural networks, and more particularly, to methods for estimating the accuracy of a trained neural network model, for determining the validity of the neural network's prediction, and for training neural networks having missing data in the input pattern and generating information as to the uncertainty in the data, this uncertainty being utilized to control the output of the neural network.




BACKGROUND OF THE INVENTION




A common problem that is encountered in training neural networks for prediction, forecasting, pattern recognition, sensor validation and/or processing problems is that some of the training/testing patterns might be missing, corrupted, and/or incomplete. Prior systems merely discarded data with the result that some areas of the input space may not have been covered during training of the neural network. For example, if the network is utilized to learn the behavior of a chemical plant as a function of the historical sensor and control settings, these sensor readings are typically sampled electronically, entered by hand from gauge readings and/or entered by hand from laboratory results. It is a common occurrence that some or all of these readings may be missing at a given time. It is also common that the various values may be sampled on different time intervals. Additionally, any one value may be “bad” in the sense that after the value is entered, it may be determined by some method that a data item was, in fact, incorrect. Hence, if the data were plotted in a table, the result would be a partially filled-in table with intermittent missing data or “holes”, these being reminiscent of the holes in Swiss cheese. These “holes” correspond to “bad” or “missing” data. The “Swiss-cheese” data table described above occurs quite often in real-world problems.




Conventional neural network training and testing methods require complete patterns, such that patterns with missing or bad data must be discarded. The deletion of the bad data in this manner is an inefficient method for training a neural network. For example, suppose that a neural network has ten inputs and ten outputs, and also suppose that one of the inputs or outputs happens to be missing at the desired time for fifty percent or more of the training patterns. Conventional methods would discard these patterns, leading to no training for those patterns during the training mode and no reliable predicted output during the run mode. This is inefficient, considering that for this case more than ninety percent of the information is still present in the patterns that conventional methods would discard. The predicted output corresponding to those regions of the input space will be somewhat ambiguous and erroneous. In some situations, there may be as much as a 50% reduction in the overall data after screening bad or missing data. Additionally, experimental results have shown that neural network testing performance generally increases with more training data, such that throwing away bad or incomplete data decreases the overall performance of the neural network.




If a neural network is trained on a smaller amount of data, this decreases the overall confidence that one has in the predicted output. To date, no technique exists for predicting the integrity of the training operation of the network "on the fly" during the run mode. For each input data pattern in the input space, the neural network has a training integrity. If, for example, a large number of good data points existed during the training, a high confidence level would exist when the input data occurred in that region. However, if there were a region of the input space that was sparsely populated with good data, e.g., because a large amount of bad data had been thrown out for that region, the confidence level in the predicted output of a network would be very low. Although some prior techniques exist for checking the actual training of the network, these techniques do not operate in a real-time run mode.




SUMMARY OF THE INVENTION




The present invention disclosed and claimed herein comprises a network for estimating the error in the prediction output space of a predictive system model for a prediction input space. The network includes an input for receiving an input vector comprising a plurality of input values that occupy the prediction input space. An output is operable to output an output prediction error vector that occupies an output space corresponding to the prediction output space of the system model. A processing layer maps the input space to the output space through a representation of the prediction error in the system model to provide said output prediction error vector.




In another aspect of the present invention, a data preprocessor is provided. The data preprocessor is operable to receive an unprocessed data input vector that is associated with substantially the same input space as the input vector. The unprocessed data input vector has associated therewith errors in certain portions of the input space. The preprocessor is operable to process the unprocessed data input vector to minimize the errors therein to provide the input vector on an output. The unprocessed data input in one embodiment is comprised of data having portions thereof that are unusable. The data preprocessor is operable to reconcile the unprocessed data to replace the unusable portion with reconciled data. Additionally, the data preprocessor is operable to output an uncertainty value for each value of the reconciled data that is output as the input vector.




In a further aspect of the present invention, the system model is comprised of a non-linear model having an input for receiving the input vector within the input space and an output for outputting a predicted output vector. A mapping function is provided that maps the input layer to the output layer for a non-linear model of a system. A control circuit is provided for controlling the prediction output vector such that a change can be effected therein in accordance with predetermined criteria. A plurality of decision thresholds are provided that define predetermined threshold values for the prediction error output. A decision processor is operable to compare the output prediction error vector with the decision thresholds and operate the output control to effect the predetermined changes whenever a predetermined relationship exists between the decision thresholds and the output prediction error vector.




In an even further aspect of the present invention, the non-linear representation of the system model is a trained representation that is trained on a finite set of input data within the input space. A validity model is provided that yields a representation of the validity of the predicted output of a system model for a given value in the input space. The validity model includes an input for receiving the input vector within the input space and an output for outputting a validity output vector corresponding to the output space. A processor is operable to generate the validity output vector in response to input of a predetermined value of the input vector and the location of the input vector within the input space. The value of the validity output vector corresponds to the relative amount of training data on which the system model was trained in the region of the input space about the value of the input vector.




In a yet further aspect of the present invention, the system model is trained by a predetermined training algorithm that utilizes a target output and a set of training data. During training, an uncertainty value is also received, representing the uncertainty of the input data. The training algorithm is modified during training as a function of the uncertainty value.











BRIEF DESCRIPTION OF THE DRAWINGS




For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following description taken in conjunction with the accompanying Drawings in which:





FIG. 1 illustrates an overall block diagram of the system model illustrating both a validity model and a prediction error model to process reconciled data and control the output with the use of the validity model and prediction error model;

FIGS. 2a and 2c illustrate an overall block diagram of a method for training the system model utilizing the uncertainty generated during data reconciliation;

FIG. 2b illustrates an example of reconciliation and the associated uncertainty;

FIGS. 3a-3c illustrate data patterns representing the data distribution, the prediction error and the validity level;

FIG. 4a illustrates a diagrammatic view of a data pattern sampled at two intervals illustrating a complete neural network pattern;

FIG. 4b illustrates a diagrammatic view of a data pattern illustrating time merging of data;

FIG. 5 illustrates an auto-encoding network for reconciling the input data to fill in bad or missing data;

FIG. 6 illustrates a block diagram of the training operation for training the model;

FIG. 7 illustrates an overall block diagram for training the validity model;

FIGS. 8a and 8b illustrate examples of localized functions of the data for use with training the validity model;

FIG. 9 illustrates a diagrammatic view of radial basis function centers in a two-dimensional space;

FIG. 10 illustrates a diagrammatic view of the validity function;

FIG. 11 illustrates the distribution of training data and two test patterns for x_a and x_b; and

FIG. 12 illustrates an overall block diagram for generating the validity targets that are utilized during the training of the validity model.











DETAILED DESCRIPTION OF THE INVENTION




In FIG. 1, there is illustrated an overall block diagram of the system of the present invention. A data input vector x(t) is provided that represents the input data occupying an input space. This data can have missing or bad data which must be replaced. This data replacement occurs in a data preprocess section 10, which is operable to reconcile the data patterns to fill in the bad or missing data and provide an output x′(t) vector. Additionally, the error or uncertainty vector μ_x′(t) is output. This represents the distribution of the data about the average reconciled data vector x′(t), and this is typically what is discarded in prior systems. The reconciled data x′(t) is input to a system model 12, which is realized with a neural network. The neural network is a conventional neural network that is comprised of an input layer for receiving the input vector and an output layer for providing a predicted output vector. The input layer is mapped to the output layer through a non-linear mapping function that is embodied in one or more hidden layers. This is a conventional type of architecture. As will be described hereinbelow, this network is trained through any one of a number of training algorithms and architectures such as Radial Basis Functions, Gaussian Bars, or conventional Backpropagation techniques. The Backpropagation learning technique is generally described in D. E. Rumelhart, G. E. Hinton & R. J. Williams, Learning Internal Representations by Error Propagation (in D. E. Rumelhart & J. L. McClelland, Parallel Distributed Processing, Chapter 8, Vol. 1, 1986), which document is incorporated herein by reference. However, Backpropagation techniques for training conventional neural networks are well known. The output of the system model 12 is a predicted output y(t). This is input to an output control circuit 14, which provides as an output a modified output vector y′(t). In general, whenever data is input to the system model 12, a predicted output results, the integrity thereof being a function of how well the network is trained.




In addition to the system model, a validity model 16 and a prediction-error model 18 are provided. The validity model 16 provides a model of the "validity" of the predicted output as a function of the "distribution" of data in the input space during the training operation. Any system model has given prediction errors associated therewith, which prediction errors are inherent in the architecture utilized. This assumes that the system model was trained with an adequate training data set. If not, then an additional source of error exists that is due to an inadequate distribution of training data at the location in the input space proximate to the input data. The validity model 16 provides a measure of this additional source of error. The prediction-error model 18 provides a model of the expected error of the predicted output.




A given system model has an associated prediction error which is a function of the architecture, which prediction error is premised upon an adequate set of training data over the entire input space. However, if there is an error or uncertainty associated with the set of training data, this error or uncertainty is additive to the inherent prediction error of the system model. The overall prediction error is distinguished from the validity in that validity is a function of the distribution of the training data over the input space and the prediction error is a function of the architecture of the system model and the associated error or uncertainty of the set of training data.




The output of the validity model 16 provides a validity output vector v(t), and the output of the prediction error model 18 provides an estimated prediction error vector e(t). These two output vectors are input to a decision processor 20, the output of which is used to generate a control signal for input to the output control 14. The decision processor 20 is operable to compare the output vectors v(t) and e(t) with the various decision thresholds which are input thereto from a decision threshold generator 22. Examples of the type of control that are provided are: if the accuracy is less than that required for a control change recommendation, then no change is made; otherwise, the controls are changed to the recommended value. Similarly, if the validity value is greater than the validity threshold, then the control recommendation is accepted; otherwise, the control recommendation is not accepted. The output control 14 could also modify the predicted outputs. For example, in a control situation, an output control change value could be modified to result in only 50% of the change value for a given threshold, 25% of the change value for a second threshold and 0% of the change value for a third threshold.
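The thresholding behavior described above can be sketched compactly. The following is a minimal illustration in Python, assuming scalar validity and prediction-error values and hypothetical threshold parameters; it is a sketch of the decision logic, not the patent's literal implementation:

```python
# Hypothetical sketch of the decision processor 20 / output control 14 logic.
# The 50%/25%/0% attenuation schedule follows the example in the text;
# all names and threshold values are illustrative assumptions.
def output_control(y_pred, validity, pred_error,
                   validity_threshold=0.5,
                   error_thresholds=(0.1, 0.2, 0.3)):
    """Return a possibly modified prediction, or None to inhibit it."""
    if validity < validity_threshold:
        return None                 # too little training data here: inhibit
    t1, t2, t3 = error_thresholds
    if pred_error >= t3:
        scale = 0.0                 # 0% of the recommended change value
    elif pred_error >= t2:
        scale = 0.25                # 25% of the recommended change value
    elif pred_error >= t1:
        scale = 0.5                 # 50% of the recommended change value
    else:
        scale = 1.0                 # accept the full recommendation
    return scale * y_pred
```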




Referring now to FIG. 2a, there is illustrated one embodiment of a method for training the system model 12 utilizing the uncertainty μ_x′(t) of the input training data. In general, learning of the system model 12 is achieved through any of a variety of neural network architectures, and algorithms such as Backpropagation, Radial Basis Functions or Gaussian Bars. The learning operation is adjusted such that a pattern with less data in the input space is trained with less importance. In the backpropagation technique, one method is to change the learning rate based on the uncertainty of a given pattern. The input uncertainty vector μ_x′(t) is input to an uncertainty training modifier 24, which provides control signals to the system model 12 during training.




The data pre-processor 10 calculates the data value x′(t) at the desired time "t" from other data values using a reconciliation technique such as linear estimate, spline-fit, box-car reconciliation or more elaborate techniques such as an auto-encoding neural network, described hereinbelow. All of these techniques are referred to as data reconciliation, with the input data x(t) reconciled with the output reconciled data x′(t). In general, x′(t) is a function of all of the raw values x(t) given at present and past times up to some maximum past time, Xmax. That is,











$$\vec{x}'(t) = f\bigl(x_1(t_N),\, x_2(t_N),\, \ldots,\, x_n(t_N);\; x_1(t_{N-1}),\, x_2(t_{N-1}),\, \ldots,\, x_n(t_{N-1});\; \ldots;\; x_1(t_1),\, x_2(t_1),\, \ldots,\, x_n(t_1)\bigr) \tag{1}$$

where some of the values of x_i(t_j) may be missing or bad.




This method of finding x′(t) using past values is strictly extrapolation. Since the system only has past values available during runtime mode, the values must be reconciled. The simplest method of doing this is to take the next extrapolated value x′_i(t) = x_i(t_N); that is, take the last value that was reported. More elaborate extrapolation algorithms may use past values x_i(t − τ_ij), j ∈ (0, …, i_max). For example, linear extrapolation would use:










$$x'_i(t) = x_i(t_{N-1}) + \left[\frac{x_i(t_N) - x_i(t_{N-1})}{t_N - t_{N-1}}\right] t;\quad t \geq t_N \tag{2}$$













Polynomial, spline-fit or neural-network extrapolation techniques use Equation 1. (See, e.g., W. H. Press, "Numerical Recipes", Cambridge University Press (1986), pp. 77-101.) Training of the neural net would actually use interpolated values, i.e., Equation 2, wherein in the case of interpolation, t_N > t.
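As a concrete illustration of the two simplest reconciliation options named above (holding the last reported value, and the linear extrapolation of Equation 2), the following sketch assumes a single sensor column with NaN marking bad or missing samples; the function names are illustrative:

```python
import numpy as np

def last_value(t, times, values):
    """x'_i(t) = x_i(t_N): hold the last good value reported at or before t."""
    good = ~np.isnan(values) & (times <= t)
    return values[good][-1] if good.any() else np.nan

def linear_extrapolate(t, times, values):
    """Equation 2: extend the line through the last two good samples."""
    good = np.flatnonzero(~np.isnan(values) & (times <= t))
    if len(good) < 2:
        return last_value(t, times, values)
    iN, iN1 = good[-1], good[-2]
    slope = (values[iN] - values[iN1]) / (times[iN] - times[iN1])
    return values[iN] + slope * (t - times[iN])
```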




Any time values are extrapolated or interpolated, these values have some inherent uncertainty, μ_x′(t). The uncertainty may be given by a priori measurement or information and/or by the reconciliation technique. An estimate of the uncertainty μ_x′(t) in a reconciled value x′(t) would be:











$$\mu_{x'} = \begin{cases} \mu_{0x'} + \mu_{1x'}\,t + \mu_{2x'}\,t^2 & \mu < \mu_{\max} \\ \mu_{\max} & \mu \geq \mu_{\max} \end{cases} \tag{3}$$

where μ_max is the maximum uncertainty set as a parameter (such as the maximum range of data) and where:

μ_0x′ is the a priori uncertainty,

$$\mu_{1x'} = \overline{\left|\frac{\partial x'}{\partial t}\right|} \tag{4}$$

i.e., the local velocity average magnitude, and

$$\mu_{2x'} = \overline{\left|\frac{1}{2}\,\frac{\partial^2 x'}{\partial t^2}\right|} \tag{5}$$

i.e., ½ the local acceleration average magnitude.




A plot of this is illustrated in FIG. 2b.
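A minimal numerical sketch of Equations 3-5 follows, assuming a uniformly sampled history of reconciled values and treating the local averages as simple means over that history; mu_0 (the a priori uncertainty) and mu_max (the saturation parameter) are inputs:

```python
import numpy as np

def reconciliation_uncertainty(x_hist, dt, t, mu_0=0.0, mu_max=1.0):
    v = np.diff(x_hist) / dt              # local velocity samples
    a = np.diff(x_hist, n=2) / dt**2      # local acceleration samples
    mu_1 = np.mean(np.abs(v))             # Eq. 4: mean |dx'/dt|
    mu_2 = np.mean(np.abs(a)) / 2.0       # Eq. 5: half the mean |d2x'/dt2|
    mu = mu_0 + mu_1 * t + mu_2 * t**2    # Eq. 3, below the saturation branch
    return min(mu, mu_max)                # saturate at mu_max
```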






Once the input uncertainty vector μ_x′(t) is determined, the missing or uncertain input values have to be treated differently than missing or uncertain output values. In this case, the error term backpropagated to each uncertain input is modified based on the input's uncertainty, whereas an error in the output affects the learning of all neuronal connections below that output. Since the uncertainty in the input is always reflected by a corresponding uncertainty in the output, this uncertainty in the output needs to be accounted for in the training of the system model 12, the overall uncertainty of the system, and the validity of the system's output.




The target output y(t) has the uncertainty thereof determined by a target preprocess block 26 which is substantially similar to the data preprocess block 10 in that it fills in bad or missing data. This generates a target input for input to a block 28, which comprises a layer that is linearly mapped to the output layer of the neural network in the system model 12. This provides the reconciled target y′(t).




Referring now to FIG. 2c, there is illustrated an alternate specific embodiment wherein a system model 12 is trained on both the reconciled data x′(t) and the uncertainty μ_x′(t) in the reconciled data x′(t). This data is output from the data preprocess block 10 to a summation block 30 that is controlled on various passes through the model to either process the reconciled data x′(t) itself or to process the summation of the reconciled data x′(t) and the uncertainty μ_x′(t). Two outputs result, a predicted output p(t) and an uncertainty predicted output μ_p(t). These are input to a target error processor block 34, which also receives as inputs the reconciled target output y′(t) and the uncertainty in the reconciled target output μ_y′(t). This generates a value Δy_total. This value is utilized to calculate the modified Total Sum Squared (TSS) error function that is used for training the system model with either a Backpropagation, Radial Basis Function or Gaussian Bar neural network.




In operation, a first forward pass is performed by controlling the summation block 30 to process only the reconciled data x′(t) to output the predicted output p(t). In a second pass, the sum of the reconciled data input x′(t) and the uncertainty input μ_x′(t) is provided as follows:








$$\vec{x}'(t) + \vec{\mu}_{x'}(t) = \bigl(x'_1 + \mu_{x'_1},\; x'_2 + \mu_{x'_2},\; \ldots,\; x'_n + \mu_{x'_n}\bigr) \tag{6}$$






This results in the predicted output p′(t). The predicted uncertainty μ_p(t) is then calculated as follows:








$$\vec{\mu}_p(t) = \vec{p}'(t) - \vec{p}(t) = \bigl(p'_1 - p_1,\; p'_2 - p_2,\; \ldots,\; p'_m - p_m\bigr) \tag{7}$$






The total target error Δy_total is then set equal to the sum of the absolute values of μ_p(t) and μ_y′(t) as follows:






$$\Delta\vec{y}_{total} = \bigl(\lvert\mu_{p_1}\rvert + \lvert\mu_{y'_1}\rvert,\; \lvert\mu_{p_2}\rvert + \lvert\mu_{y'_2}\rvert,\; \ldots\bigr) \tag{8}$$






The output error function, the TSS error function, is then calculated with the modified uncertainty as follows:









$$E = \sum_{i=1}^{N_{PATS}} (\vec{y}_i - \vec{p}_i)^2 \left(1 - \frac{\Delta\vec{y}_{total_i}}{\Delta\vec{y}_{\max_i}}\right) \tag{9}$$

where N_PATS is the number of training patterns. For Backpropagation training, the weights W_ij are updated as follows:










$$\Delta W_{ij} = \begin{cases} -\eta\,\dfrac{\partial E}{\partial W_{ij}} & j \notin \text{input} \\[2ex] -\eta\,\dfrac{\partial E}{\partial W_{ij}} \left(1 - \dfrac{\lvert\mu_{x_j}\rvert^2}{\lvert\mu_{x_j}^{\max}\rvert^2}\right)^{\!2} & j \in \text{input} \end{cases} \tag{10}$$













As such, the network can now have the weights thereof modified by an error function that accounts for uncertainty.
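The following sketch translates Equations 9 and 10 directly, under the assumption that the ordinary backpropagated gradient dE/dW is already available from the training framework; the gradient plumbing and names are illustrative:

```python
import numpy as np

def weighted_tss(y, p, dy_total, dy_max):
    """Equation 9: down-weight patterns with large target uncertainty."""
    return np.sum((y - p) ** 2 * (1.0 - dy_total / dy_max))

def input_weight_update(grad_E, mu_x, mu_x_max, lr=0.01):
    """Equation 10, j in input: attenuate learning on uncertain inputs.
    For j not in input, the update is the usual -lr * grad_E."""
    gate = (1.0 - np.abs(mu_x) ** 2 / np.abs(mu_x_max) ** 2) ** 2
    return -lr * grad_E * gate
```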




For neural networks that do not utilize Backpropagation, similar behavior can be achieved by training the system model through multiple passes through the same data set where random noise is added to the input patterns to simulate the effects of uncertainty in these patterns. In this training method, for each x′(t) and associated μ_x′(t), a random vector can be chosen by choosing each x″_i as x″_i = x′_i + n_i, wherein n_i is a noise term chosen from the distribution:








$$e^{\left(-x_i'^2 / 2\mu_{x_i'}^2\right)} \tag{11}$$






In this case:

$$\vec{\mu}_p(t) = f(\vec{x}'(t)) - f(\vec{x}''(t)) \tag{12}$$

where f(x(t)) is the system model producing the system predicted output p(t).
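A brief sketch of this noise-injection alternative, assuming `model` is any trained callable f(x) and that the noise is drawn with standard deviation equal to the reconciliation uncertainty; Equation 12 then follows in one line:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_pattern(x_prime, mu_x):
    """x'' = x' + n, with n drawn with the Gaussian spread of Equation 11."""
    return x_prime + rng.normal(0.0, mu_x)

def predicted_uncertainty(model, x_prime, mu_x):
    """Equation 12: difference of the model run on clean and noisy inputs."""
    return model(x_prime) - model(noisy_pattern(x_prime, mu_x))
```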




Referring now to FIGS. 3a-3c, there are illustrated plots of the original training data, the system-model prediction and the prediction error, and the validity, respectively. In FIG. 3a, the actual data input-target patterns are illustrated. It can be seen that the data varies in density and variance across the x-axis. Once the system model is trained, it yields a prediction, y(x), line 42. The system model has an inherent prediction error (due to inaccuracies in the training data). These prediction errors are illustrated by two dotted lines 44 and 46 that bound either side of the predicted value on line 42. This represents basically the standard deviation of the data about the line 42. The validity is then determined, which is illustrated in FIG. 3c. The validity is essentially a measure of the amount of training data at any point. It can be seen that the initial portion of the curve has a high validity value, illustrated by reference numeral 48, and the latter portion of the curve, where data was missing, has a low level, as represented by reference numeral 50. Therefore, when one examines a neural network trained by the data in FIG. 3a, one would expect the reliability or integrity of the neural network to be high as a function of the training data input thereto whenever a large amount of training data was present.




Referring now to FIG. 4a, there is illustrated a data table with bad, missing, or incomplete data. The data table consists of data with time disposed along a vertical scale and the samples disposed along a horizontal scale. Each sample comprises many different pieces of data, with two data intervals illustrated. It can be seen that when the data is examined for both the data sampled at the time interval 1 and the data sampled at the time interval 2, some portions of the data result in incomplete patterns. This is illustrated by a dotted line 52, where it can be seen that some data is missing in the data sampled at time interval 1 and some is missing in time interval 2. A complete neural network pattern is illustrated by box 54, where all the data is complete. Of interest is the time difference between the data sampled at time interval 1 and the data sampled at time interval 2. In time interval 1, the data is essentially present for all steps in time, whereas data sampled at time interval 2 is only sampled periodically relative to data sampled at time interval 1. As such, the reconciliation procedure fills in the missing data and also reconciles between the time samples in time interval 2 such that the data is complete for all time samples for both time interval 1 and time interval 2.




The neural network models that are utilized for time-series prediction and control require that the time-interval between successive training patterns be constant. Since the data that comes in from real-world systems is not always on the same time scale, it is desirable to time-merge the data before it can be used for training or running the neural network model. To achieve this time-merge operation, it may be necessary to extrapolate, interpolate, average or compress the data in each column over each time-region so as to give an input value x(t) that is on the appropriate time-scale. The reconciliation algorithm utilized may include linear estimates, spline-fits, boxcar algorithms, etc., or more elaborate techniques such as the auto-encoding network described hereinbelow. If the data is sampled too frequently in the time-interval, it will be necessary to smooth or average the data to get a sample on the desired time scale. This can be done by window averaging techniques, sparse-sample techniques or spline techniques.
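As one illustration of the smoothing options named above, the following sketch window-averages an oversampled column onto a coarser target time grid; the grid and window half-width are assumed parameters:

```python
import numpy as np

def window_average(times, values, grid, half_width):
    """Time-merge one column onto `grid` by averaging samples in a window."""
    merged = np.full(len(grid), np.nan)
    for k, t in enumerate(grid):
        in_win = (np.abs(times - t) <= half_width) & ~np.isnan(values)
        if in_win.any():
            merged[k] = values[in_win].mean()
    return merged
```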




Referring now to FIG. 4b, there is illustrated an input data pattern and target output data pattern illustrating the pre-process operation for both preprocessing input data to provide time merged output data and also pre-processing the target output data to provide pre-processed target output data for training purposes. The data input x(t) is comprised of a vector with many inputs, x_1(t), x_2(t), … x_n(t), each of which can be on a different time scale. It is desirable that the output x′(t) be extrapolated or interpolated to insure that all data is present on a single time scale. For example, if the data at x_1(t) were on a time scale of one sample every second, a sample represented by the time t_k, and the output time scale were desired to be the same, this would require time merging the rest of the data to that time scale. It can be seen that the data x_2(t) occurs approximately once every three seconds, it also being noted that this may be asynchronous data, although it is illustrated as being synchronized. The data buffer in FIG. 4b is illustrated in actual time. The data output as x′_1(t) is reconciled with an uncertainty μ_x′_1(t); since the input time scale and the output time scale are the same, there will be no uncertainty. However, for the output x′_2(t), the output will need to be reconciled and an uncertainty μ_x′_2(t) will exist. The reconciliation could be as simple as holding the last value of the input x_2(t) until a new value is input thereto, and then discarding the old value. In this manner, an output will always exist. This would also be the case for missing data. However, a reconciliation routine as described above could also be utilized to insure that data is always on the output for each time slice of the vector x′(t). This also is the case with respect to the target output, which is preprocessed to provide the preprocessed target output y′(t).




Referring now to FIG. 5, there is illustrated a diagrammatic view of an auto-encoding network utilized for the reconciliation operation. The network is comprised of an input layer of input nodes 60 and an output layer of output nodes 62. Three hidden layers 64, 66 and 68 are provided for mapping the layer 60 to the output layer 62 through a non-linear mapping algorithm. The input data patterns x_1(t), x_2(t), …, x_n(t) are input thereto, reconciled and reproduced over regions of missing data to provide the output data pattern x′_1(t), x′_2(t), x′_3(t), …, x′_n(t). This network can be trained via the backpropagation technique. Note that this system will reconcile the data over a given time base even if the data were not originally sampled over that time base, such that data at two different sampling intervals can be synchronized in time.
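A minimal sketch of this auto-encoding reconciliation follows, using a generic three-hidden-layer regressor trained to reproduce complete patterns; the layer sizes, the choice of sklearn's MLPRegressor, and the initial mean-fill are illustrative assumptions, not taken from the patent:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_autoencoder(X_complete):
    """Train an identity mapping on rows with no missing values."""
    ae = MLPRegressor(hidden_layer_sizes=(8, 4, 8), max_iter=2000)
    ae.fit(X_complete, X_complete)
    return ae

def reconcile(ae, x, col_means):
    """Fill a pattern's NaN entries from the network's reconstruction."""
    x_filled = np.where(np.isnan(x), col_means, x)     # crude initial fill
    x_recon = ae.predict(x_filled.reshape(1, -1))[0]
    return np.where(np.isnan(x), x_recon, x)           # keep known values
```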




The techniques described above involve primarily building, training and running a system model on data that may have missing parts, be on the wrong time-scale increment and/or possess bad data points. The primary technique involves reconciliation over the bad or missing data and/or time-merging the data. However, once a model is built and trained, there are two other factors that should be taken into account before the model can be used to its full extent to solve a real-world problem. These two factors are the prediction accuracy of the model and the model validity. The model typically does not provide an accurate representation of the dynamics of the process that is modeled. Hence, the prediction output by the model will have some prediction error e(t) associated with each input pattern x(t), where:








$$\vec{e}(t) = \vec{y}(t) - \vec{p}(t) \tag{13}$$

This provides a difference between the actual output at time "t" and the predicted output at "t". The prediction error e(t) can be used to train a system that estimates the system-model accuracy. That is, a structure can be trained with an internal representation of the model prediction error e(t). For most applications, predicting the magnitude ∥e(t)∥ of the error (rather than the direction) is sufficient. This prediction-error model is represented hereinbelow.




Referring now to FIG. 6, there is illustrated a block diagram of the system for training the prediction-error model 18. The system of FIG. 2c is utilized by first passing the reconciled input data x′(t) and the uncertainty μ_x′(t) through the trained system model 12, this training achieved in the process described with respect to FIG. 2c. The target error Δy_total is calculated using the target error processor in accordance with the same process illustrated with respect to Equation 8, in addition to Δy as a function of "y". This is then input as a target to the prediction error model 18, with the inputs being the reconciled input data x′(t) and the uncertainty μ_x′(t). The prediction-error model can be instantiated in many ways, such as with a lookup table, or with a neural network. If instantiated as a neural network, it may be trained via conventional Backpropagation, Radial Basis functions, Gaussian Bars, or any other neural network training algorithm.
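A compact sketch of this training setup, assuming the prediction-error target is the magnitude ∥e(t)∥ of Equation 13 and the inputs are the reconciled data together with its uncertainty; the regressor choice is illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_error_model(X_recon, mu_x, y_true, y_pred):
    """Train a model of the expected prediction-error magnitude."""
    target = np.abs(y_true - y_pred)          # ||e(t)|| per pattern
    features = np.hstack([X_recon, mu_x])     # x'(t) and mu_x'(t)
    em = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
    em.fit(features, target)
    return em
```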




The measurement of the validity of a model is based primarily on the historical training data distribution. In general, neural networks are mathematical models that learn behavior from data. As such, they are only valid in the regions of data for which they were trained. Once they are trained and run in a feed-forward or test mode (in a standard neural network), there is no way to distinguish, using the current state of the model alone, between a valid data point (a point in the region where the neural network was trained) and an invalid data point (a point in a region where there was no data). To validate the integrity of the model prediction, a mechanism must be provided for keeping track of the model's valid regions.




Referring now to FIG. 7, there is illustrated an overall block diagram of the processor for training the validity model 16. The data preprocess block 10 is utilized to provide the reconciled input data x′(t) to the input of the validity model 16. The input data x(t) and the reconciled input data x′(t) are input to a validity target generator 70 to generate the validity parameters for input to a layer 72.




A validity measure v(x) is defined as:

$$v(\vec{x}) = S\!\left(\sum_{i=1}^{N_{PATS}} a_i\, h_i(\vec{x}, \vec{x}_i) - b_i\right) \tag{14}$$

where:

v(x) is the validity of the point x,

S is a saturating, monotonically increasing function such as a sigmoid:

$$S(z) = \frac{1}{1 + e^{-z}} \tag{15}$$

a_i is a coefficient of importance, a free parameter,

h_i is a localized function of the data x(t) and the training data point x_i(t),

N_PATS is the total number of training patterns, and

b_i is a bias parameter.




The parameter h_i is chosen to be a localized function of the data that is basically a function of the number of points in a local proximity to the point x(t). As a specific embodiment, the following relationship for h_i is chosen:











  

 

 

 







$$h_i(\vec{x}, \vec{x}_i) = \begin{cases} e^{-(\vec{x} - \vec{x}_i)^2 / \sigma_i^2} & \lVert \vec{x} - \vec{x}_i \rVert < \alpha\sigma \\ 0 & \lVert \vec{x} - \vec{x}_i \rVert \geq \alpha\sigma \end{cases} \tag{16}$$

The resultant function is illustrated in FIG. 8a, with the function cut off at ασ so that far-away points do not contribute. Other functions, such as the one illustrated in FIG. 8b, could also be used.
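Equations 14-16 can be evaluated directly for a new point against the stored training set. The following sketch uses the specific-embodiment parameter values given hereinbelow (a_i = 1, b = 2, σ_i = 0.1, α = 3) and assumes the bias is applied once outside the sum, as in Equation 20:

```python
import numpy as np

def validity(x, X_train, a=1.0, b=2.0, sigma=0.1, alpha=3.0):
    """v(x) per Equations 14-16 with a Gaussian h_i cut off at alpha*sigma."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    h = np.exp(-d2 / sigma**2)                 # Eq. 16, Gaussian branch
    h[np.sqrt(d2) >= alpha * sigma] = 0.0      # cutoff: far points drop out
    z = np.sum(a * h) - b                      # argument of S(.)
    return 1.0 / (1.0 + np.exp(-z))            # Eq. 15 sigmoid
```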




Referring now to FIG. 9, there is illustrated an input space represented by inputs x_1 and x_2. It can be seen that there are three regions, each having centers x_1, x_2 and x_3, each having a given number of points n_1, n_2 and n_3, respectively, and a radius r_1, r_2 and r_3. The centers of the regions are defined by the clustering algorithms with the number of points determined therein.




Referring now to FIG. 10, there is illustrated a representation of the validity function wherein the validity model 16 is illustrated as having the new data x(t) input thereto and the output v(x(t)) output therefrom. A dotted line is provided to the right of the validity model 16 illustrating the training mode, wherein the inputs in the training mode are the historical data patterns x_1(t), x_2(t), … x_NPATS(t), together with σ_i, α, a_i and b_i. In a specific embodiment, the values are chosen such that a_i = 1 and b_i = 2 for all i, and σ_i = 0.1 and α = 3 for all i.




Equation 14 can be difficult to compute, so it is more efficient to break the sum up into regions, which are defined as follows:

$$\sum_i a_i\, h_i = \sum_{i \in \text{Cell}_1} a_i\, h_i(\vec{x}, \vec{x}_i) + \sum_{j \in \text{Cell}_2} a_j\, h_j(\vec{x}, \vec{x}_j) + \ldots \tag{17}$$

where the cells are simple geometric divisions of the space, as illustrated in FIG. 11, which depicts a test pattern.




In FIG. 11, the test pattern x_a(t) has a validity that is determined by cells C15, C16, C12 and C11, as long as the cell size is greater than or equal to the cutoff ασ, whereas the data point x_b(t) is only influenced by cells C15 and C14. Hence, the algorithm for finding the validity is straightforward:




1) Train the system model on the training patterns (x_1, x_2, x_3, … x_NPATS).

2) Train the validity model by keeping track of x_1 … x_NPATS, e.g., via a binary tree or k-d tree.

3) Partition the data space into cells C_1, C_2 … C_Ncells (e.g., via a k-d tree).

4) Determine which cell the new data point falls into, e.g., cell-index(x) = (kx_1)(kx_2) … (kx_n), if the cells are equally divided into k partitions per dimension and x_i ∈ (0,1).

5) Compute the sum in the cell.

6) Compute the sum in the n-neighbors.

7) The validity function will then be defined as:










$$v(\vec{x}) = S\!\left(\sum_{\text{Cell}_x} a_i\, h_i(\vec{x}, \vec{x}_i) + \sum_{\text{Neighbors}} f(d_i)\right) \tag{18}$$

where d_i is the distance from x′ to neighbor i, and f(d_i) is a decreasing function of d_i.
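Steps 3 through 5 above amount to bucketing the training set by cell index. A minimal sketch, assuming inputs normalized to the unit cube and k equal partitions per dimension; neighboring cells would then contribute through the decreasing function f(d_i):

```python
import numpy as np
from collections import Counter

def cell_index(x, k):
    """Step 4: the tuple of per-dimension bin numbers, for x_i in (0,1)."""
    return tuple(np.minimum((x * k).astype(int), k - 1))

def build_cell_counts(X_train, k):
    """One pass over the training set gives the per-cell census N_i."""
    return Counter(cell_index(x, k) for x in X_train)

# Usage: counts = build_cell_counts(X_train, k=10)
#        N_i = counts[cell_index(x_new, 10)]   # in-cell point count
```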




Again, Equation 18 can be difficult to calculate. Furthermore, it may be the case that few data points fall into the individual cells. A useful approximation of the full sum may be made by including only those neighbors with large f(d_i). A second, simpler, and faster way of computing the sums in Equation 18 is to approximate the sums by averaging all points in a region as follows:






$$v(\vec{x}') \cong S\bigl(N_1\, a_1\, h_1(\vec{x}', \vec{x}_1) + N_2\, a_2\, h_2(\vec{x}', \vec{x}_2) + \ldots\bigr) \tag{19}$$

$$v(\vec{x}) \cong S\!\left(\sum_{\text{Regions}} N_i\, a_i\, h_i(\vec{x}, \vec{x}_i) - b\right) \tag{20}$$

The region centers x_i can be selected as the centers of the cells, or as the centers of k-d tree cells, or as the centers of Radial Basis Functions that are selected via a k-means clustering algorithm.
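The region approximation of Equation 20 then needs only the region centers and the per-region counts N_i. A short sketch, reusing the Gaussian h_i of Equation 16 and assuming the clustering pass that produces the centers has already been run:

```python
import numpy as np

def validity_by_region(x, centers, counts, a=1.0, b=2.0,
                       sigma=0.1, alpha=3.0):
    """Equation 20: each region contributes N_i * a_i * h_i(x, center_i)."""
    d = np.linalg.norm(centers - x, axis=1)
    h = np.where(d < alpha * sigma, np.exp(-(d / sigma) ** 2), 0.0)
    z = np.sum(counts * a * h) - b
    return 1.0 / (1.0 + np.exp(-z))     # S(.) from Equation 15
```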




Referring now to FIG. 12, there is illustrated a block diagram of the validity model 16 for receiving the output of the pre-processor 10 and generating the validity value v(x′(t)). As described above, the output of the preprocessor 10 comprises both the reconciled data x′(t) and the uncertainty μ_x′(t). This is input to a region selector 76, which is operable to determine which region of the test pattern the reconciled data resides in. During training, a counter 78 is incremented to determine the number of points in the region over which the system model 12 was trained. This is stored on a region-by-region basis and, during a run mode, the incrementing operation that is controlled by a line 77 is disabled and only a region line 79 is activated to point to the region determined by the region selector 76. The output of the counter comprises the number of points in the region, N_i, which is then input to a region activation block 80. The block 80 provides the function h_i(x′(t), x_i(t)), which, as described above, is the localized function of the data x′(t) and the training data points x′_i(t). The output of the region activation block 80 is input to a difference circuit 81 to subtract therefrom a validity bias value "b". This is essentially an offset correction which is an arbitrary number determined by the operator. The output of the difference circuit 81 is input to a sigmoidal function generator that provides the output v(x′(t)). The sigmoidal function provides a sigmoidal activation value for each output of the vector v(x′(t)).




In operation, the validity model 16 of FIG. 12 allows for on-the-fly calculation of the validity estimation. This calculation requires knowledge of the number of points in each region and of the region in which the input pattern resides. With this information, the estimation of the validity value can be determined. During the training mode, the increment line 77 is enabled such that the number of points in each region can be determined and stored in the counter 78. As described above, the run mode only requires output of the value N_i.
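The train/run asymmetry of the counter 78 can be captured in a few lines. A sketch, with the increment line 77 modeled as a boolean flag; the class name is illustrative:

```python
class RegionCounter:
    """Per-region point counts: incremented in training, read back at run time."""
    def __init__(self):
        self.counts = {}

    def observe(self, region, training):
        if training:                            # increment line 77 enabled
            self.counts[region] = self.counts.get(region, 0) + 1
        return self.counts.get(region, 0)       # region line 79: read N_i
```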




In the embodiment of FIG. 7, the validity target generator 70 could utilize the structure of FIG. 12 to calculate a target output for each value of x(t) input to the preprocessor 10. This would allow the validity model 16 to be realized with a neural network, which is then trained on the validity targets and the input data in accordance with a training algorithm such as backpropagation.




In summary, there has been provided a method for accounting for bad or missing data in an input data sequence utilized during the run mode of a neural network and in the training mode thereof. The bad or missing data is reconciled to provide a reconciled input data time series for input to the neural network that models the system. Additionally, the error that represents uncertainty of the predicted output as a function of the uncertainty of the data, or the manner in which the data behaves about a particular data point or region in the input space, is utilized to control the predicted system output. The uncertainty is modeled during the training phase in a neural network, and this network is then utilized to provide a prediction of the uncertainty of the output. This can be utilized to control the output or modify the predicted system output value of the system model. Additionally, the relative amount of data that was present during training of the system is also utilized to provide a confidence value for the output. This validity model is operable to receive the reconciled data and the uncertainty to predict a validity value for the output of the system model. This is also used to control the output. Additionally, the uncertainty can be utilized to train the system model, such that in regions of high data uncertainty, a modification can be made to the network to modify the learning rate as a function of the desired output error during training. This output error is a function of the uncertainty of the predicted output.




Although the preferred embodiment has been described in detail, it should be understood that various changes, substitutions and alterations can be made therein without departing from the spirit and scope of the invention as defined by the appended claims.



Claims
  • 1. A network for estimating the error in the prediction output space of a predictive system model operating over a prediction input space, comprising:an input for receiving an input vector comprising a plurality of input values that occupy the prediction input space; an output for outputting an output prediction error vector that occupies an output space corresponding to the prediction output space of the predictive system model; and a processing layer for mapping the prediction input space to the prediction output space through a representation of the prediction error in the predictive system model to provide said output prediction error vector.
  • 2. The network of claim 1, and further comprising:a preprocess input for receiving an unprocessed data input vector having associated therewith unprocessed data associated with substantially the same input space as said input vector, said unprocessed data input vector having errors associated with the associated unprocessed data in select portions of the prediction input space; and a data preprocessor for processing the unprocessed data in the unprocessed data input vector to minimize the errors therein to provide said input vector on an output.
  • 3. The network of claim 2, wherein said unprocessed data input vector is comprised of data having portions thereof that are unusable and said data preprocessor comprises a reconciliation device for reconciling the unprocessed data to replace the unusable portions with reconciled data.
  • 4. The network of claim 2, wherein said data preprocessor is operable to calculate and output the uncertainty for each value output by said data preprocessor.
  • 5. The network of claim 1, wherein the predictive system model comprises a non-linear model having an input for receiving the input vector that is within the prediction input space and an output for outputting a predicted output vector within the prediction output space, said non-linear model mapping the prediction input space to the prediction output space through a non-linear representation of a system.
  • 6. The network of claim 5, wherein the predictive system model is trained on a set of training data having uncertainties associated therewith and wherein said processing layer is operable to map the prediction input space to the prediction output space through a representation of the combined prediction error in the predictive system model and the prediction error in the set of training data due to the uncertainty in the set of training data.
  • 7. The network of claim 5 and further comprising:a plurality of decision thresholds for defining predetermined threshold values for said output prediction error vector; an output control for effecting a change in the value of said predicted output vector from the predictive system model; and a decision processor for receiving said output prediction error vector and comparing it to said decision thresholds and operating said output control to effect said change on the value of said predicted output vector when the value of said output prediction error vector meets a predetermined relationship with respect to said decision thresholds.
  • 8. The network of claim 6, wherein said non-linear representation is a trained representation that is trained on a finite set of input data within the input space in accordance with a predetermined training algorithm and further comprising a validity model for providing a representation of the validity of the predicted output vector of the system model for a given value of the input vector within the input space, said validity model having:an input for receiving the input vector within the input space; an output for outputting a validity output vector corresponding to the output space; a validity processor for generating said validity output vector in response to input of said input vector and the location of said input vector in the input space, the value of said validity output vector corresponding to the amount of training data on which the system model was trained in the region of the input space about the value of the input vector.
  • 9. The network of claim 8, and further comprising:a plurality of decision thresholds for defining predetermined threshold values for the validity output vector; an output control for effecting a change in the value of said predicted output vector from the predictive system model; and a decision processor for receiving said validity output vector and comparing said validity output vector to said decision thresholds, and operating said output control to effect said change in the value of said predicted output vector when the value of said validity output vector meets a predetermined relationship with respect to said decision thresholds.
  • 10. A network for providing a measure of the validity in the prediction output space of a predictive system model that provides a prediction output and operates over a prediction input space, comprising:an input for receiving an input vector comprising a plurality of input values that occupy the prediction input space; an output for outputting a validity measure output vector that occupies an output space corresponding to the prediction output space of the predictive system model; and a processing layer for mapping the prediction input space to the prediction output space through a representation of the validity of the system model that was learned on a set of training data, the representation of the validity of the system model being a function of the distribution of the training data in the prediction input space that was input thereto during training to provide a measure of the validity of the system model prediction output.
  • 11. The network of claim 10, and further comprising:a preprocess input for receiving an unprocessed data input vector having associated therewith unprocessed data associated with substantially the same input space as said input vector, said unprocessed data input vector having errors associated with the associated unprocessed data in select portions of the prediction input space; and a data preprocessor for processing the unprocessed data in the unprocessed data input vector to minimize the errors therein to provide said input vector on an output.
  • 12. The network of claim 11, wherein said unprocessed data input vector is comprised of data having portions thereof that are unusable and said data preprocessor comprises a reconciliation device for reconciling data to replace the unusable portions with reconciled data.
  • 13. The network of claim 12, wherein said data preprocessor is operable to calculate and output the uncertainty for each value of reconciled data output by said data preprocessor.
  • 14. The network of claim 10, wherein the predictive system model comprises a non-linear model having an input for receiving the input vector that is within the prediction input space and an output for outputting a predicted output vector within the prediction output space, said non-linear model mapping the prediction input space to the prediction output space through a non-linear representation of a system.
  • 15. A network for providing a measure of the validity in the prediction output space of a predictive system model that is comprised of a non-linear model that provides a prediction output and operates over a prediction input space, comprising:an input for receiving an input vector comprising a plurality of input values that occupy the prediction input space; an output for outputting a validity measure output vector that occupies an output space corresponding to the prediction output space of the predictive system model; a processing layer for mapping the prediction input space to the prediction output space through a non-linear representation of the validity of the system model that was learned on a set of training data, the representation of the validity of the system model being a function of the distribution of the training data in the prediction input space that was input thereto during training to provide a measure of the validity of the system model prediction output; a plurality of decision thresholds for defining predetermined threshold values for said validity measure output vector; an output control for effecting a change in the value of said predicted output vector from the predictive system model; and a decision processor for receiving said validity measure output vector and comparing it to said decision threshold and operating said output control to effect said change on the value of said predicted output vector when the value of said validity measure output vector meets a predetermined relationship with respect to said decision threshold.
  • 16. A network for providing a measure of the validity in the prediction output space of a predictive system model that provides a prediction output and operates over a prediction input space, comprising:an input for receiving an input vector comprising a plurality of input values that occupy the prediction input space; an output for outputting a validity measure output vector that occupies an output space corresponding to the prediction output space of the predictive system model; and a processing layer for mapping the prediction input space to the prediction output space through a representation of the validity of the system model that was learned on a set of training data, the representation of the validity of the system model being a function of the distribution of the training data in the prediction input space that was input thereto during training to provide a measure of the validity of the system model prediction output; wherein said processing layer comprises: a memory for storing a profile of the training data density over the input space, and a processor for processing the location of the input data in the input space and the density of the training data at said location as defined by said stored profile to generate said validity measure output vector as a function of the distribution of said training data proximate to the location in the input space of the input data.
  • 17. A method for estimating the error in the prediction output space of a predictive system model over a prediction input space, comprising the steps of:receiving an input vector comprising a plurality of input values that occupy the prediction input space; outputting an output prediction error vector that occupies an output space corresponding to the prediction output space of the predictive system model; and mapping the prediction input space to the prediction output space through a representation of the prediction error in the predictive system model to provide the output prediction error vector in the step of outputting.
  • 18. The method of claim 17, and further comprising the steps of:receiving an unprocessed data input vector having associated therewith unprocessed data associated with substantially the same input space as the input vector, the unprocessed data input vector having errors associated with the associated unprocessed data in select portions of the prediction input space; and processing the unprocessed data in the unprocessed data vector to minimize the errors therein to provide the input vector on an output.
  • 19. The method of claim 18, wherein the step of receiving an unprocessed data input vector comprises receiving an unprocessed data input vector that is comprised of data having portions thereof that are unusable and the step of processing the unprocessed data comprises reconciling the unprocessed data to replace the unusable portions with reconciled data.
  • 20. The method of claim 19, wherein the step of processing the data is further operable to calculate and output the uncertainty for each value of the reconciled data output by the step of processing.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of application Ser. No. 09/207,719, filed Dec. 8, 1998, now U.S. Pat. No. 6,314,414, issued Nov. 6, 2001, entitled "Method For Operating a Neural Network With Missing and/or Incomplete Data", which is a continuation of application Ser. No. 09/167,400, filed Oct. 6, 1998, now U.S. Pat. No. 6,169,980, issued Jan. 2, 2001, entitled "Method for Training and/or Testing a Neural Network on Missing and/or Incomplete Data", which is a continuation of application Ser. No. 08/724,377, filed Oct. 1, 1996, now U.S. Pat. No. 5,819,006, issued Oct. 6, 1998, entitled "Method for Operating a Neural Network with Missing and/or Incomplete Data", which is a continuation of application Ser. No. 08/531,100, filed Sep. 20, 1995, now U.S. Pat. No. 5,613,041, issued Mar. 18, 1997, entitled "Method and Apparatus for Operating a Neural Network With Missing and/or Incomplete Data."

US Referenced Citations (34)
Number Name Date Kind
4802103 Faggin et al. Jan 1989 A
4813077 Woods et al. Mar 1989 A
4872122 Altschuler et al. Oct 1989 A
4910691 Skeirik Mar 1990 A
4965742 Skeirik Oct 1990 A
5006992 Skeirik Apr 1991 A
5052043 Gaborski Sep 1991 A
5081651 Kubo Jan 1992 A
5111531 Grayson et al. May 1992 A
5113483 Keeler et al. May 1992 A
5121467 Skeirik Jun 1992 A
5140523 Frankel et al. Aug 1992 A
5150313 van den Engh et al. Sep 1992 A
5175797 Funabashi et al. Dec 1992 A
5255347 Matsuba et al. Oct 1993 A
5276771 Manukian et al. Jan 1994 A
5335291 Kramer et al. Aug 1994 A
5353207 Keeler et al. Oct 1994 A
5402519 Inoue et al. Mar 1995 A
5444820 Tzes et al. Aug 1995 A
5461699 Arbabi et al. Oct 1995 A
5467428 Ulug Nov 1995 A
5479573 Keeler et al. Dec 1995 A
5559690 Keeler et al. Sep 1996 A
5581459 Enbutsu et al. Dec 1996 A
5613041 Keeler et al. Mar 1997 A
5659667 Buescher et al. Aug 1997 A
5704011 Hansen et al. Dec 1997 A
5720003 Chiang et al. Feb 1998 A
5729661 Keeler et al. Mar 1998 A
5819006 Keeler et al. Oct 1998 A
6002839 Keeler et al. Dec 1999 A
6169980 Keeler et al. Jan 2001 B1
6314414 Keeler et al. Nov 2001 B1
Foreign Referenced Citations (6)
Number Date Country
0262647 Apr 1988 EP
0327268 Aug 1989 EP
0436916 Jul 1991 EP
WO9412948 Jun 1994 WO
WO9417482 Aug 1994 WO
WO9417489 Aug 1994 WO
Non-Patent Literature Citations (20)
Entry
Phoha, Shashi; "Using the National Information Infrastructure (NII) for Monitoring, Diagnostics and Prognostics of Operating Machinery," IEEE Proceedings of the 35th Conference on Decision and Control, Kobe, Japan, pp. 2583-2587, Dec. 1996.
Hartman, Eric J., Keeler, James D., Kowalski, Jacek M.; "Layered Neural Networks with Gaussian Hidden Units as Universal Approximations," Neural Computation 2, 1990, Massachusetts Institute of Technology, pp. 210-215.
Hartman, Eric, Keeler, James D.; “Predicting the Future: Advantages of Semilocal Units,” Neural Computation 3, 1991, Massachusetts Institute of Technology, pp. 566-578.
Press, William H., Flannery, Brian P., Teukolsky, Saul A., Vetterling, William T.; Numerical Recipes: The Art of Scientific Computing, 1986, ch. 3, “Interpolation and Extrapolation,” pp. 77-101.
Serth, R.W., Heenan, W. A.; “Gross Error Detection and Data Reconciliation in Steam-Metering Systems,” AlChE Journal, vol. 32, No. 5, May 1986, pp. 733-742.
Myung-Sub Roh et al.; Thermal Power Prediction of Nuclear Power Plant Using Neural Network and Parity Space Model, IEEE Transactions on Nuclear Science, vol. 38, No. 2, Apr. 1991, pp. 866-872.
Autere, Antti; On Correcting Systematic Errors Without Analyzing Them by Performing a Repetitive Task, IEEE/RSI International Workshop on Intelligent Robots and Systems IROS '91, Nov. 3-5, 1991, pp. 472-477.
Kimoto, Takashi et al.; Stock Market Prediction System with Modular Neural Networks, IEEE, Jun. 1990, pp. I-1 to I-6.
Dorronsoro et al.; “Neural Fraud Detection in Credit Card Operations,” IEEE Transactions on Neural Networks, vol. 8, No. 4, Jul. 1997, pp. 827-834.
Spenceley, S.E., Warren, J.R.; “The Intelligent Interface for Online Electronic Medical Records Using Temporal Data Mining,” IEEE Xplore, Proceedings of the 31st Annual Hawaii, International Conference on System Sciences. Jan. 1998, pp. 266-745.
Koutsougeras; “A feedforward neural network classifier model: multiple classes, confidence output values, and implementation,” International Journal of Pattern Recognition and Artificial Intelligence Ed. World Scientific Publishing Co., Oct. 1992, vol. 6, No. 4, pp. 539-569.
Lapedes, Alan, Farber, Robert; “How Neural Nets Work,” American Inst. of Physics, 1988, pp. 442-457.
Weigend, Andreas S., Huberman, Bernardo A., Rumelhart, David E.; "Predicting the Future: A Connectionist Approach," Stanford University, Stanford-PDP-90-01/PARC-SSL-90-20, Apr. 1990.
Rumelhart, D.E., Hinton, G.E., Williams, R.J.; "Learning Internal Representations by Error Propagation," Parallel Distributed Processing, vol. 1, 1986.
Rander, P.W., Unnikrishnan, K.P.; “Learning the Time-Delay Characteristics in a Neural Network,” ICAAADDP-92-IEEE International Conference on Acoustics, Speech and Signal Processing, Mar. 23, 1992, vol. 2, pp. 285-288.
Tam, David C., Perkel, Donald H.; “A Model for Temporal Correlation of biological Neuronal Spike Trains,” IJCNN Int. Joint Conference of Neural Networks, Dept. of Physiology and Biophysics, University of California, pp. I-781 -I-786.
Levin, Esther, Gewirtzman, Raanan, Inbar, Gideon F.; “Neural Network Architecture for Adaptive System Modeling and Control,” Neural Networks, 4(1991) No. 2, pp. 185-191.
Troudet, T., Garg, S., Mattern, D., Merrill, W.; "Towards Practical Control Design Using Neural Computation," IJCNN-91-Seattle—International Joint Conference on Neural Networks, vol. 2, pp. 675-681, Jul. 8, 1991.
Beerhold, J.R., Jansen, M., Eckmiller, R.; “Pulse-Processing Neural Net Hardware with Selectable Topology and Adaptive Weights and Delays,” IJCNN International Joint Conference on Neural Networks, Jun. 17, 1990, vol. 2, pp. 569-574.
Haffner, Patrick, Franzini, Michael, Waibel, Alex; “Integrating Time Alignment and Neural Networks for High performance Continuous Speech Recognition,” ICASSP 91, sponsored by The Inst. of electrical and Electronics Engineers, Signal Processing Society, 1991 Int. Conf. on Acoustics, Speech and Signal Processing, May 14-17, 1991, pp. 105-108.
Continuations (4)
Number Date Country
Parent 09/207719 Dec 1998 US
Child 10/040085 US
Parent 09/167400 Oct 1998 US
Child 09/207719 US
Parent 08/724377 Oct 1996 US
Child 09/167400 US
Parent 08/531100 Sep 1995 US
Child 08/724377 US