Embodiments of the invention relate to the field of artificial intelligence (AI) and machine learning. In particular, embodiments of the invention relate to deep learning using neural networks.
Catastrophic forgetting is a problem in which neural networks lose the information of a first task after subsequently being trained on a second task. The ability to learn tasks in a sequential fashion is important to the development of artificial intelligence. Neural networks are not, in general, capable of this, and it has been widely thought that catastrophic forgetting is an inevitable limitation. Catastrophic forgetting is a recurring challenge to developing versatile deep learning models.
In recent years, online incremental learning (OIL) has attracted a great deal of attention in the deep learning community. Though it is well known that deep neural networks (DNNs) have achieved state-of-the-art performance in many machine learning (ML) tasks, they suffer from catastrophic forgetting, which makes continual learning difficult. The problem is that when a neural network is used to learn a sequence of tasks, the learning of the later tasks may degrade the performance of the models learned for the earlier tasks. Human brains, however, seem to have the remarkable ability to learn a large number of different tasks without any of them negatively interfering with each other. OIL algorithms try to achieve this same ability for neural networks and to solve the catastrophic forgetting problem. Thus, in essence, continual learning performs incremental learning of new tasks.
Conventional OIL techniques not only train on a new task's data, but also retrain on old tasks' data. As tasks accumulate, however, their associated training data also accumulates, resulting in prohibitively large amounts of training data and prohibitively time-consuming training sessions that train based on all past and current training data.
Accordingly, there is a need in the art to overcome this limitation and efficiently train neural networks to maintain expertise on tasks which they have not experienced for a long time.
Embodiments of the invention train a machine learning model using incremental learning without forgetting. Whereas conventional incremental learning unlearns old trained tasks upon learning new tasks, embodiments of the invention incrementally train new tasks using training data from old tasks to retain their knowledge. Instead of the naïve approach of retraining by accumulating all training data for new and old tasks, which is often prohibitively data-heavy and time-consuming, embodiments of the invention provide an efficient technique to retain prior knowledge.
According to an embodiment of the invention, prior task training data may be efficiently input as a distribution of aggregated prior task training data. For example, prior task training data may be aggregated as a distribution profile defined by a mean, standard deviation and mode (e.g., three data points) or more complex (e.g., multi-node or arbitrarily shaped) distributions. Incorporating a distribution of prior task training data provides efficient incremental learning without forgetting by using a compact data representation of the prior task training data to reduce memory consumption, as compared to simply inputting the prior task training data itself (e.g., using three aggregate data points instead of hundreds or thousands of past training samples). Using such a compact data representation of the prior task training data distribution further increases training speed as training is based on less data, compared to inputting the raw prior task training data itself. Additionally, training using a distribution of prior task training data prevents over-fitting by providing an averaged accumulation of knowledge instead of specific knowledge based on the actual past training samples. This leads to retention of a general impression of past knowledge, without fixing the model to the exact past knowledge that often results in an inflexibility to train future tasks.
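For illustration, a minimal sketch of such an aggregation might look as follows in Python; the function names, the per-dimension histogram estimate of the mode, and the Gaussian re-sampling are illustrative assumptions rather than the claimed implementation:

```python
import numpy as np

def summarize_prior_task(samples: np.ndarray) -> dict:
    """Reduce an (N, d) array of prior-task samples to a compact profile of
    aggregate statistics (mean, standard deviation, per-dimension mode)."""
    modes = []
    for dim in samples.T:                      # approximate mode per dimension
        counts, edges = np.histogram(dim, bins=32)
        peak = np.argmax(counts)
        modes.append(0.5 * (edges[peak] + edges[peak + 1]))
    return {"mean": samples.mean(axis=0),
            "std": samples.std(axis=0),
            "mode": np.array(modes)}

def sample_from_profile(profile: dict, m: int, seed: int = 0) -> np.ndarray:
    """Draw m synthetic points from the stored profile (a simple Gaussian
    assumption; richer multi-modal profiles could be substituted)."""
    rng = np.random.default_rng(seed)
    d = profile["mean"].shape[0]
    return profile["mean"] + profile["std"] * rng.standard_normal((m, d))
```

Regardless of how many past samples were seen, only the few small profile arrays need to be stored.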
According to an embodiment of the invention, prior task training data may be input not as additional training data, but rather may be used to modify a propagator, which is then applied to current task training data to generate model parameters. This allows the prior task training data to be embedded into the model without actively training on the data and therefore without requiring training labels. Incorporating the prior task training data without its training labels allows the prior task training data to be input based on its distribution, which typically has no labels (labels are linked to specific training samples, not their aggregated distributions), and may also be more accurate than using the prior task training data directly, by eliminating errors associated with improper labelling (as labels are generated by the model as it is being trained, before it reaches full accuracy).
Some embodiments of the invention provide a device, system and method for training a machine learning model using incremental learning without forgetting. A sequence of a plurality of training tasks may be received, wherein each training task is associated with one or more training samples and corresponding labels respectively associated with the one or more training samples. A subset of shared model parameters that are common to the plurality of training tasks and a subset of task-specific model parameters for each training task that are not common to the plurality of training tasks may be generated. The machine learning model may be trained in a sequence of a plurality of sequential training iterations respectively associated with the sequence of a plurality of training tasks. In each of the plurality of sequential training iterations, the machine learning model is trained by generating the task-specific parameters for the current training iteration by applying a propagator to the one or more training samples associated with the current training task, wherein the training of the model for the current training task is constrained by one or more of the training samples associated with a previous training task in a previous training iteration. The one or more samples associated with the current training task are then classified based on the machine learning model defined by combining the subset of shared parameters and the task-specific parameters generated based on the training samples associated with the current training task and the previous training task.
Non-limiting examples of embodiments of the disclosure are described below with reference to figures attached hereto. Dimensions of features shown in the figures are chosen for convenience and clarity of presentation and are not necessarily shown to scale. The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features and advantages thereof, can be understood by reference to the following detailed description when read with the accompanying drawings. Embodiments are illustrated without limitation in the figures, in which like reference numerals indicate corresponding, analogous or similar elements, and in which:
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn accurately or to scale. For example, the dimensions of some of the elements can be exaggerated relative to other elements for clarity, or several physical components can be included in one functional block or element.
Embodiments of the invention relate to online incremental machine learning, in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to classical machine learning approaches that use batch learning techniques. For example, embodiments of the invention relate to techniques that make use of various types of streaming algorithms to perform sequential or incremental learning on real-time streaming data.
Online incremental learning is a method in online machine learning in which input data is continuously used to extend the existing model's knowledge, i.e., to further train the model. Online incremental learning represents a dynamic technique of supervised and/or unsupervised learning that can be applied when training data becomes available gradually over time. Incremental algorithms may be applied to data streams, also addressing the issues of memory and complexity.
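For illustration only, the basic mini-batch update pattern that online incremental learning extends can be shown with scikit-learn's partial_fit interface; the synthetic stream below is an assumption made for the sake of a runnable example:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# A linear model updated one mini-batch at a time, never seeing the full data.
rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])            # all classes must be declared up front

for step in range(100):               # synthetic stand-in for a data stream
    X_batch = rng.standard_normal((32, 10))
    y_batch = (X_batch[:, 0] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)
```

Plain online updates like this, however, exhibit exactly the forgetting problem discussed next.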
When training on new tasks or categories, a neural network tends to forget the information learned in the previously trained tasks. This usually means a new task will likely override the weights that have been learned in the past, and thus degrade the model performance for the past tasks. Without fixing this problem, a single neural network may be unable to adapt itself to an online incremental scenario, because it forgets existing information/knowledge when it learns new knowledge. On the one hand, if a model is too stable, it will not be able to consume new information from future training data. On the other hand, if a model is too flexible, it may suffer from large weight changes and forget previously learned representations.
Catastrophic forgetting occurs in deep neural networks (DNNs). One example of catastrophic forgetting is transfer learning using a deep neural network. In a typical transfer learning setting, where the source domain has plenty of labeled data and the target domain has little labeled data, fine-tuning is widely used in DNNs to adapt the model for the source domain to the target domain. Before fine-tuning, the source domain labeled data is used to pre-train the neural network. Then the output layers of this neural network are retrained given the target domain data. Backpropagation-based fine-tuning is applied to adapt the source model to the target domain. However, such an approach suffers from catastrophic forgetting because the adaptation to the target domain usually disrupts the weights learned for the source domain, resulting in inferior inference in the source domain.
Embodiments of the invention solve the problem of catastrophic forgetting by implementing an accurate and efficient device, system, and method for training a machine learning model using incremental learning of a plurality of tasks that reduces or minimizes forgetting of previous task training. Embodiments of the invention may receive or generate an ordered sequence of the plurality of training tasks and may incrementally train the model with each task sequentially in order (not in parallel). Each training task may be associated with one or more training samples and one or more corresponding labels respectively associated with the one or more training samples. To increase training efficiency, the model parameters may be divided into a subset of shared model parameters (common to the plurality of training tasks) and a subset of task-specific model parameters specific to each training task (not common to the plurality of training tasks). The subset of shared model parameters may be directly known, while the task-specific model parameters may be generated by applying a propagator to each training sample. While the subset of shared model parameters is trained for every task, the subset of task-specific model parameters is only trained for its specific associated training task (and not other tasks with which it is not specifically associated). Embodiments of the invention may thereby eliminate computations for the remaining task-specific parameters not specific to the current task (specific to different tasks), significantly reducing the associated number of computed parameters, and thereby the working memory usage and processing power that would conventionally be used to compute those different task-specific parameters. In one example, a model may have 50% shared parameters and 50% task-specific parameters, where the task-specific parameters are divided equally among five different tasks (10% task-specific parameters per task). Accordingly, instead of computing all 100% of parameters to train each task, embodiments of the invention need only compute 60% of parameters when training each task (50% shared parameters+10% task-specific parameters specific to the current task), resulting in a 40% reduction in computing time and working memory for training the model.
To reduce forgetting, to train the task-specific parameters for each current task in each current iteration of the plurality of sequential training iterations, in addition to using current training samples associated with the current training task, embodiments of the invention use replay training samples associated with previous training task(s) in previous training iteration(s). Re-introducing knowledge from replay training samples for previous tasks maintains past training knowledge (reduces past training forgetting) after training subsequent tasks. However, instead of (or in addition to) inputting the replay training samples into the training dataset for the current task (e.g., adding them to the current training sample dataset), the replay training samples may be used to constrain the training for the current task. For example, the current training samples may train an encoder which generates new samples that are introduced as replay samples in the next training iteration, which uses these samples to modify parameters (alter the propagator). This constraint may be, for example, to use the replay training samples associated with one or more previous training tasks to reduce or minimize variations of one or more layer outputs of the model caused by changes in the subset of shared parameters and the propagator resulting from the current training iteration (e.g., as shown in equations 8 and 9). The task-specific parameters for the current training iteration may be generated based on a compressed encoding of training samples associated with the current training task and a non-compressed version of replay training samples associated with previous training tasks. The compressed encoding may be generated by an encoder trained by adding a mean square error reconstruction loss of the one or more training samples associated with the previous training task to a penalized form of a Wasserstein distance between a distribution of the compressed encoding and a multivariate normal distribution of an embedded low dimensional space (e.g., as shown in equations 1 and 2). Using replay samples to constrain the model, instead of simply inputting them into the training dataset, allows the replay samples to be used without labels (training data requires labels). Because labels are generated based on the model, and early model training has poor accuracy, early task iteration samples often have inaccurate labelling. Eliminating labels in the replay samples thus reduces inaccurate training based on those inaccurate labels, increasing training accuracy compared to simply adding the replay samples and their labels to the training dataset. In addition, because the replay samples are used without labels, embodiments of the invention may use an aggregated distribution of replay samples zm (e.g., averages, standard deviations, modes, histograms, etc.), with which no labels are associated, rather than the discrete samples themselves. Training based on an aggregated distribution of replay samples, rather than the discrete replay samples themselves, provides a more distributed general knowledge of past training, without fixing the model to the exact past knowledge, which often results in inflexible over-fitting that is less adaptable to training future tasks. The result of training based on replay sample distributions, as compared to discrete replay samples, is better knowledge retention of past tasks (averaged over many or all past samples) and better training for future tasks (more flexible training).
In addition, training based on the aggregated distributions is more efficient because the replay sample data size is bounded by a threshold limit (e.g., three distribution values such as mean, standard deviation, and mode) regardless of how many accumulated replay samples are aggregated (e.g., hundreds or thousands), a number which grows cumulatively as tasks are incrementally trained. In addition, because past samples are incorporated based on their aggregate distribution, there is no need to store the actual samples for previous tasks, thus decreasing memory usage.
Once trained, the model may classify the one or more samples associated with the current training task based on the machine learning model. This classification may be defined by combining the subset of shared parameters and the task-specific parameters for the current task. As discussed, the task-specific parameters for the current task are generated based on the current training samples associated with the current training task (input as the training dataset) and the replay training samples associated with previous training tasks (to constrain the model).
Reference is made to
For some deep learning applications (e.g., disease tracking, fraud detection, etc.), where online incremental learning can be crucial, catastrophic forgetting should be avoided.
In conventional artificial neural networks, all neurons in the hidden layer are initially activated, and in order to concentrate on a specific task, some of them may be turned off. In other words, it may be useful to ‘forget’ all unnecessary information. In the context of artificial neural networks, activation means that the neurons are involved in forward propagation during evaluation and backward propagation during training.
Given a sequence of supervised learning tasks T=(T1, T2, . . . TN), embodiments of the invention sequentially train those tasks in the given sequence such that the learning of each new task will not forget the models learned for the previous tasks. Embodiments of the invention thus solve the problem of catastrophic forgetting by providing a decision boundary 104 for each sequentially learned new task Ti that retains knowledge of all past tasks T1, . . . , Ti−1.
Reference is made to
The sequence of supervised learning tasks may be denoted T=(T1, T2, . . . TN). Each task Ti is represented by Ti={xij, yij|j ϵ (1, . . . , Ni)}, where xij is the j-th example/sample of Ti, and yij is its associated label. (xt, yt) denotes a test instance/example 200. To simplify the notation, (xi, yi) denotes a training example from task Ti, omitting the second subscript j. The terms example, sample, and instance may be used interchangeably.
The proposed parameter generation and learning adaptation (PPLA) architecture may include computer C 202, propagator g(·) 204, and data generator DG 206:
Computer C 202 is a classification model. The parameter set of computer C 202 may include two subsets, φ0 that is shared by all tasks (and instances) and H, a parameter place holder, which may be replaced by the generated parameters set pi (or pt) 220 for each training (or testing) example xi (or xt) 200. Parameters set pi (or pt) 220 serves to adapt the computer C 202 to classify the example 200 in training or testing. Embodiments of the invention adopt this parameter split because only part of the parameters of a neural network may be adjusted when learning a new task.
Dynamic Parameter Propagator (DPP) g(·) 204 takes the embedding zi (or zt) 218 of each input training (or testing) example xi (or xt) 200 to generate the parameters pi (or pt) 220 for computer C 202. Embedded data zi (or zt) 218 may be used instead of raw data xi (or xt) 200 to reduce the raw data's dimension. The embedding zi (or zt) 218, as a relatively low-dimensional dense representation compared to raw data 200, reduces the mapping space for DPP 204 and thus reduces the difficulty of its parameter generation.
Data Generator (DG) 206 may generate a set of replayed data or samples {x′m}m=1M 208 using its decoder DGD 216 for previous tasks to deal with catastrophic forgetting. Additionally or alternatively, Data Generator (DG) 206 may generate the embedding zi (or zt) 218 of each input training (or testing) example xi (or xt) 210 using its encoder DGE 212.
Testing is described first, before training. Given a test instance xt 210, encoder DGE 212 first generates its embedding zt 218, which is fed to propagator g(·) 204 to generate a set of parameters pt 220. Computer C 202 then takes xt 200 as input and uses the trained/learned shared parameters φ0 and pt to classify xt. Shared parameters φ0 contain the common features of all tasks learned so far. Parameter set pt 220 adapts computer C 202 for test instance xt 210 in order to classify xt.
For training, the pipeline of the proposed PPLA framework is shown in
In training the computer C 202 and DPP 204 for the new task Ti, both g(·) 204 and the shared parameters φ0 will change, which can cause forgetting in DPP for previous tasks. To keep DPP 204 remembering the acquired knowledge for previous tasks, embodiments of the invention may minimize the variation of certain layers' output caused by the changes of φ0 and g(·) 204 using the set of replayed samples {x′m}m=1M 208 generated by DG 206.
PPLA has the following training components:
DPP 204 and computer C 202 (referred to as DPP-C). E.g., DPP 204 may be optimized together with computer C 202.
DG 206: The training components have their respective objective functions, and are trained alternately. This is because DPP 204 takes input zi 218 (which is produced by DG 206) to generate parameters 220, and alternating training ensures consistent convergence rates of DG 206 and DPP 204.
Dynamic Parameter Propagator (DPP) 204 and Computer C 202 Implementation: Several neural networks can be used to implement DPP for parameter generation, e.g., convolutional neural network (CNN), recurrent neural network (RNN) and multilayer perceptron (MLP). Although an embodiment is described using MLP, this is a non-limiting example, and any other type of NN can be used. Formally, DPP can be written e.g., as:
pi = g(zi, η) = f(aD zi + b) (eq. 1)
where f is an activation function, and aD and b are parameters of DPP denoted by η.
For the implementation of computer C 202, several deep learning networks can be used as well. Each layer of the NN (e.g., MLP) is a perceptron and can be formalized by output=f*(a*xinput), where xinput denotes the input of the particular perceptron. A perceptron may also be referred to as a basic unit. In general, for each basic unit k of the computer C 202, embodiments of the invention may have a shared portion of the parameters φ0,k and a generated portion of the parameters pi,k, e.g.:
φi* = join(φ0, pi) = {[ai,k*]}k=1K = {φ0,k; pi,k}k=1K (eq. 2)
where join(φ0, pi) is a concatenation of φ0 and pi, and K may be the number of basic units of the computer C 202. In one embodiment, K may be the number of hidden layers (basic units) of the computer MLP.
The join method has sufficient capacity to adjust computer C 202 in general, even though only part of the parameters are generated, because pi,k can affect all dimensions of the output of its corresponding basic unit k, e.g., as:
ai,k* xinput = φ0,k xinput1 + pi,k xinput2 (eq. 3)
where xinput1 and xinput2 are block vectors of xinput. The term pi,k xinput2 may act as a bias and can adapt the output vector to any point in the vector space.
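As a minimal PyTorch sketch tying eq. 1 through eq. 3 together (the layer sizes, module names, and the choice to adapt only the final unit are illustrative assumptions, not the claimed implementation), the propagator may generate per-example weights for one block of the output unit while the shared weights φ0 cover the other block:

```python
import torch
import torch.nn as nn

class PPLASketch(nn.Module):
    def __init__(self, in_dim=784, emb_dim=64, hid_dim=100, n_classes=10):
        super().__init__()
        self.encoder = nn.Linear(in_dim, emb_dim)      # stand-in for DGE: x -> z
        self.body = nn.Linear(in_dim, hid_dim)         # lower layers of C
        half = hid_dim // 2
        # Shared portion phi_0 of the adapted unit's weight matrix.
        self.shared = nn.Parameter(0.01 * torch.randn(n_classes, half))
        # DPP g(z, eta) = f(a_D z + b): one linear map plus activation (eq. 1).
        self.propagator = nn.Linear(emb_dim, n_classes * half)

    def forward(self, x):                              # x: (B, in_dim)
        z = torch.tanh(self.encoder(x))                # embedding z_i
        half = self.shared.size(1)
        p = torch.tanh(self.propagator(z)).view(-1, self.shared.size(0), half)
        h = torch.relu(self.body(x))                   # input to adapted unit
        h1, h2 = h.chunk(2, dim=1)                     # block vectors (eq. 3)
        # a*_{i,k} x_input = phi_0,k x_input1 + p_i,k x_input2 (join of eq. 2)
        return h1 @ self.shared.t() + torch.einsum("bch,bh->bc", p, h2)
```

Here, logits = PPLASketch()(torch.randn(8, 784)) would classify a batch of eight examples, with half of the final-layer weights generated per example.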
To train DPP-C, embodiments of the invention may use cross entropy loss ψce. The objective function of computer C including DPP g(zi,η) and φ0 for learning each new classification task Ti may be defined e.g., as:
minimize(η, φ0) {ψce(C(xi, φi*), yi)} (eq. 4)
where ψce is the cross entropy loss, and φi* is the whole set of parameters of C (φi* = join(pi, φ0)) representing the adaptation of computer C.
The generated replayed samples x′m may be used as constraints to reduce DPP-C's forgetting. That is, to keep the past learned knowledge, the output of the basic units in computer C should not change much when learning a new task, with the help of generated data. If no activation function is used, this constraint may be written, e.g., as:
min Σm=1M Σk=1K ∥ai,k* x′m,k − ai−1,k* x′m,k∥ (eq. 5)
where K denotes the number of basic units, and M denotes the number of replayed samples. A basic unit with smaller k is, e.g., at a relatively lower layer of the computer network (e.g., constraining only the units in the last layer can already achieve good results). x′m,k is the input of the kth basic unit and is calculated through forward propagation, except x′m,1=DGD(zmsample, φ′d), which is the initial replayed sample x′m generated by Data Generator (DG) 206 (before optimizing the current task Ti). ai,k* is the joined parameter of eq. 2.
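A sketch of this constraint under stated assumptions (lists of per-unit joined weight matrices before and after the update, and batched replayed inputs) might be:

```python
import torch

def forgetting_loss(joined_new, joined_old, replay_inputs):
    """Eq. 5 sketch: joined_new / joined_old are lists of K per-unit weight
    tensors a*_{i,k} and a*_{i-1,k} of shape (out_k, in_k); replay_inputs is a
    list of K tensors x'_{m,k} of shape (M, in_k), one per basic unit."""
    loss = 0.0
    for a_new, a_old, x in zip(joined_new, joined_old, replay_inputs):
        # Penalize the change in each unit's output on the replayed samples.
        loss = loss + torch.norm(x @ a_new.t() - x @ a_old.t(), dim=1).sum()
    return loss
```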
Data Generator (DG) 206: As indicated earlier, DG may have various functions, e.g.:
Data Generator (DG) 206 may compress the original input data xi 210 to embedding zi 218 using its encoder 212 to reduce the number of dimensions of xi and consequently reduce the mapping space of DPP 204, to reduce the complexity and improve the efficiency of generating the parameters pi 220. The compression may be formulated e.g., by:
zi = DGE(xi, φe) (eq. 6)
where DGE is the encoder 212 of DG with parameters φe.
Data Generator (DG) 206 may generate the replayed data for previous tasks to deal with forgetting in computer C 202 and DPP 204. DG 206 does not generate both the replay data x′m and their labels y′m (e.g., using the computer learned so far), as such generated labels are typically noisy. DG 206 only generates the data x′m 208 but not the associated labels (which are not needed according to embodiments of the invention), so that labeling errors will not affect the PPLA model.
Each replayed sample x′m 208 is generated by the decoder 216 of DG, referred to as DGD, e.g., as follows:
x′
m
=DG
D(zmsample, φ′d) (eq. 7)
where zmsample is the m-th sample, e.g., sampled from the multivariate normal distribution, and φ′d is the set of parameters of decoder DGD 216 before optimizing the current task Ti.
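For example, a hedged sketch of this replay generation, assuming the pre-task decoder is kept as a frozen module:

```python
import torch

def generate_replay(decoder_old, m: int, emb_dim: int):
    """Eq. 7 sketch: replayed samples are decoded from codes drawn from the
    multivariate normal prior, using the decoder parameters phi'_d saved
    before training the current task T_i."""
    with torch.no_grad():                    # frozen phi'_d, no gradient flow
        z_sample = torch.randn(m, emb_dim)   # z_m^sample ~ N(0, I)
        return decoder_old(z_sample)         # x'_m = DG_D(z_sample, phi'_d)
```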
DG 206 can be implemented with an auto-encoder, e.g., VAE-like (Variational Auto-Encoder) and WAE-like (Wasserstein Auto-Encoder) auto-encoders. One embodiment uses a WAE in DG, which allows different examples to stay far away from each other (e.g., having a Euclidean vector distance greater than a predetermined threshold, or relatively greater than between other instances), which promotes better reconstruction. For example, if examples or instances are too close to each other, creating dense clusters, these instances may not be considered representative. Representative instances may be those that differ from others but belong to the same sample or group of instances.
To train DG, embodiments of the invention may use, e.g., a mean square error as a reconstruction loss to enable its ability to replay the past data, and add to it a penalized form of the Wasserstein distance between the distribution of embedding zi 218 and a multivariate normal distribution to help generate data (together denoted by ψwae).
min Σm=1M ∥zmsample − DGE(x′m, φe)∥ (eq. 8)
min Σm=1M ∥DGD(zmsample, φd) − x′m∥ (eq. 9)
where x′m is the replayed data (see e.g., eq. 7), φe and φd are the encoder's and decoder's parameters of DG for the process of learning the new task Ti, respectively, and zmsample and M have the same meanings as in DPP-C.
Eq. 8 and eq. 9 may constrain the consistency of DG's decoder 216 and encoder 212 over the randomly sampled zmsample. Eq. 9 may ensure that DG's decoder 216 can still remember training based on prior task training data. Using eq. 8 and eq. 9, embodiments of the invention maintain the DG's ability to reflect knowledge from the prior task training data. Overall, the final objective function for DG is composed of ψwae, eq. 8 and eq. 9.
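Putting these pieces together, a sketch of the combined DG objective might read as follows; the equal loss weighting and the simple moment-matching stand-in for the penalized Wasserstein term are assumptions, not the claimed implementation:

```python
import torch
import torch.nn.functional as F

def dg_loss(encoder, decoder, decoder_old, x_i, z_sample):
    """psi_wae reconstructs current-task data x_i and pulls the embedding
    distribution toward N(0, I); eqs. 8 and 9 tie the updated encoder/decoder
    to the frozen pre-task decoder DG_D(., phi'_d) on randomly sampled codes."""
    z_i = encoder(x_i)
    # Crude moment-matching penalty toward N(0, I), standing in for the
    # penalized Wasserstein term (an assumption for this sketch).
    prior_penalty = z_i.mean(0).pow(2).sum() + (z_i.var(0) - 1).pow(2).sum()
    psi_wae = F.mse_loss(decoder(z_i), x_i) + prior_penalty
    with torch.no_grad():
        x_replay = decoder_old(z_sample)               # x'_m (eq. 7)
    eq8 = F.mse_loss(encoder(x_replay), z_sample)      # encoder consistency (eq. 8)
    eq9 = F.mse_loss(decoder(z_sample), x_replay)      # decoder remembers (eq. 9)
    return psi_wae + eq8 + eq9
```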
Note, data generator DG 206 may also have a forgetting problem caused by incremental training of new tasks. To avoid data generator DG 206 forgetting its prior task training, embodiments of the invention constrain DG 206, e.g., using loss functions, to overcome its forgetting as shown in
Reference is made to
The training procedure of the parameter generation and learning adaptation (PPLA) architecture may proceed, e.g., as follows:
While multitasking T1, . . . , TN allows the proposed model to switch between several problems Ti, learning without forgetting may also be useful during the solving of a single problem. Many tasks can be hierarchically divided into sub-tasks, and the depth of such a partition increases with the complexity of the basic task. Achieving a goal in such multilevel environments is a challenging problem. When some mechanisms of active forgetting are introduced, the model can simplify goal achieving by breaking tasks into simpler steps and training a separate combination of neurons for each sub-task. This trick naturally increases the ability of the model to select the correct action.
In addition to reducing forgetting in incremental learning, embodiments of the invention provide other improvements in training machine learning models. In particular, some embodiments of the invention do not increase the parameters or expand the network to learn new tasks. Embodiments of the invention may provide better memory and computational efficiency compared to adding replay data to the training set. Additionally or alternatively, no previous data needs to be stored to enable the system to remember the previously learned models or knowledge.
Evaluation, Results and Comparison: Results of the proposed PPLA approach are presented and compared with state-of-the-art baselines using two image datasets and two text datasets.
Datasets—Two text datasets:
Experiment Settings:
Data Preparation: To simulate sequential learning, two data processing methods, named disjoint and shuffled, were used.
Disjoint: This method divides each dataset into several subsets of classes. Each subset is a task. For example, a 10-class dataset may be divided into two tasks (or subsets of classes), where the first task consists of classes {0; 1; 2; 3; 4} and the second task consists of the remaining classes {5; 6; 7; 8; 9}. The systems learned the two subsets as two tasks in a sequential fashion and regarded them together as a 10-class classification problem. In order to consider more tasks in testing, for THUCNews, which has 10 classes, two experiment settings were created: 2 tasks (5 classes per task) and 5 tasks (2 classes per task). For DBPedia, which has 14 classes, three experiment settings were created: 2 tasks (7 classes per task), 3 tasks (5, 5, and 4 classes for the three tasks respectively), and 5 tasks (3, 3, 3, 3, and 2 classes for the 5 tasks respectively).
Shuffled: This method shuffles the input features of each sample (e.g., the pixels of an image) with a fixed random permutation (see the sketch below). Two experiment settings were created: 3 tasks and 5 tasks. In both cases, the dataset for the first task was the original dataset. The datasets for the rest of the tasks were constructed through shuffling. Since shuffling the words in a sentence changes the sentence's meaning and results in confusion, this experiment was not performed on the text datasets.
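A sketch of both data-preparation schemes, under the assumption that samples are rows of a feature matrix X with integer labels y:

```python
import numpy as np

def disjoint_tasks(X, y, class_groups):
    """Each group of class labels becomes one task, e.g. [[0,1,2,3,4], [5,6,7,8,9]]."""
    return [(X[np.isin(y, g)], y[np.isin(y, g)]) for g in class_groups]

def shuffled_tasks(X, y, n_tasks, seed=0):
    """Task 1 is the original data; each later task applies one fixed random
    permutation to every sample's input features."""
    rng = np.random.default_rng(seed)
    tasks = [(X, y)]
    for _ in range(n_tasks - 1):
        perm = rng.permutation(X.shape[1])
        tasks.append((X[:, perm], y))
    return tasks
```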
Baselines: Results were compared for three state-of-the-art baselines that are representative of the current approaches:
Training Details: For fair comparison, embodiments of the invention were tested using the same computer (or classifier) as the baselines. That is, a multilayer perceptron was adopted as the computer/classifier (as the baselines all use this method), which is a 3-layer network (i.e., two basic units with each hidden layer as a unit) followed by a softmax layer. For the inventive approach, the total number of parameters in the computer included both the generated parameters p and the shared parameters φ0. Due to the differences among different datasets, different settings were adopted for them. All baselines and the inventive approach were compared using the same setting for the same dataset.
Testing used a 3-layer perceptron (with 2 hidden layers) network for DPP and set the size of each hidden layer to 1000. Each network can generate 100 parameters at a time. Several networks may be parallelized in the DPP to generate more parameters when needed. The network parameters were updated using the Adam algorithm with a learning rate of 0.001.
Results and Analysis: Results of a comparison between the inventive and baseline systems are shown in Table 1 and Table 2. Tables 1, 2, and 3 below indicate the following:
Table 1 shows that the PPLA technique has consistently superior accuracy to the EWC and IMM techniques on different datasets with 3 or 4 tasks (5-task results are given in Table 2). This improvement is at least partially based on the ability of the PPLA model to reduce the effects of the accuracy deterioration problem. EWC's performance is poor for the disjoint case (a more realistic setting in practice).
Table 2 shows that the PPLA technique has consistently superior accuracy to GR and IMM techniques based on different datasets with 5 tasks.
Ablation Study: This study helps understand performance by removing parameters. Table 3 analyzes how the system behaves with fewer and fewer shared parameters in φ0, or with more and more parameters replaced by the parameters 220 generated by DPP 204. The disjoint DBPedia tasks setting was selected for the experiment as it is more useful and more difficult than the other datasets. DBPedia is more useful because it contains many more training samples and test samples than other datasets, such as THUCNews. DBPedia is more difficult in the current experimental context because it has many classes (14) and a complex configuration of classes per task.
Table 3 shows accuracy results (averaged in the same way as in Table 1) when a portion of the parameters in only the last layer of the computer is replaced by the parameters generated by DPP 204. The accuracy improves with increased percentages of parameters being replaced. The best accuracy is obtained when 80% of the parameters in the last layer are replaced through DPP, and the accuracy does not further improve with more replaced parameters. This observation indicates that replacing a part of the parameters in the computer to adapt to new input tasks is sufficient. The same conclusion can also be made by replacing the parameters in the first hidden layer of the computer (which has 2 hidden layers). The replacement percentage of the last layer was fixed at 20%, and then the replacement percentage of the first layer was increased. The best accuracy reaches 92.91% (±0.67%) when replacing 40% of the parameters of the first layer, which gains only 1.87% in accuracy compared with no replacement. This result indicates that it suffices to replace the parameters in the last layer.
Embodiments of the invention propose a novel approach PPLA to reduce or eliminate catastrophic forgetting. The PPLA approach learns to build a model with two sets of parameters. The first set is shared by all tasks learned so far and the second set is dynamically generated to adapt the model (computer) to suit each individual test example. Experimental results show that the proposed approach significantly outperformed existing baseline methods.
Reference is made to
Data integration 402 receives incoming transactions and initially preprocesses the incoming data. Transaction enrichments 404 may preprocess the transactions. Data integration 402 may preprocess incoming transactions by, e.g., data cleaning, filling in missing values, detecting outliers, normalization, and ETL processes, and transaction enrichments 404 may preprocess by, e.g., augmenting (enlarging) the number of fraudulent transactions (e.g., transactions labeled as fraud). The process of getting historical data 406 synchronizes historical data with the new incoming transactions received by data integration 402. AI detection data structures 408 may detect events based on training a machine learning model using incremental learning without forgetting as described according to embodiments of the invention. In an example fraud detection system, each transaction gets its risk score 410. Policy calculation 412 processes the suspicious scores and routes transactions accordingly. Profiles contain transactions aggregated according to time period. Profile 414 updates synchronize according to newly created/incoming transactions. Risk Control Management (RCM) 416 manages the risk score including, e.g., investigation, monitoring, sending alerts, or marking as no risk. Investigation Data Base (IDB) 418 is used when researching transactional data and when policy rules 412 result in an investigation. IDB 418 analyzes historical cases and alert data. Data can be used by the solution or by external applications that can query IDB 418, for example, to produce rule performance reports.
Variables: Analysts can define calculated variables using a comprehensive context such as a current transaction, a history of the main entity associated with the transaction, built-in models results, etc. These variables can be used to create new indicative features. The variables can be exported to the detection log, stored in IDB 418, and exposed to users in user analytics contexts.
Custom Events: Transactions that satisfy certain criteria may indicate occurrences of events that may be interesting for an analyst. The analyst can define events the system identifies and profiles when processing the transaction. This data can be used to create complementary indicative features (e.g., using the custom indicative features mechanism or Structured Model Overlay (SMO)). For example, the analyst can define an event that is defined by a transaction with an amount>$100,000. The system may profile aggregations for all transactions that trigger this event (e.g., the first time it happened for the transaction party, etc.). For example, the system may aggregate transactions (e.g., sum up numerically) for a specific type of transaction and for a certain period.
Custom Indicative Features: Once custom events are defined, the analyst can use predefined indicative feature templates to enrich built-in model results with new indicative feature calculations. Proceeding with the example from the custom events section, the analyst can now create an indicative feature that specifies, e.g., that if it has been more than a year since the customer performed a transaction with an amount greater than $100,000, then 10 points are added to the overall risk score of the model (see the sketch below).
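A hypothetical illustration of this custom event and indicative feature pair (the field names and the per-party profile store are assumptions, not a product API):

```python
from datetime import timedelta

LARGE_AMOUNT = 100_000

def is_large_transfer(txn: dict) -> bool:
    """The custom event trigger: amount > $100,000."""
    return txn["amount"] > LARGE_AMOUNT

def large_transfer_feature(txn: dict, profile: dict) -> int:
    """Indicative feature: add 10 points to the risk score if more than a
    year has passed since this party last triggered the large-transfer
    event; timestamps are assumed to be datetime objects."""
    if not is_large_transfer(txn):
        return 0
    last = profile.get("last_large_transfer")   # aggregated per party
    if last is None or txn["timestamp"] - last > timedelta(days=365):
        return 10
    return 0
```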
Structured Model Overlay (SMO) is a framework in which the analyst gets all outputs of built-in and custom analytics as input (such as the above) to be used to enhance the detection results with issues and set the risk score of the transaction.
Filter: As described below in reference to
Detection Log: A detection log may contain transactions enriched with analytics data such as indicative features results and variables. An analyst has the ability to configure which data should be exported to the log and use it for both pre-production and post-production tuning.
Detection Flow: The detection flow for transactions may include multiple steps: data fetch for detection (e.g., detection period sets and profile data for the entity), variable calculations, analytics models including different indicative feature instances, and/or SMO (structured model overlay). The detection process may be triggered for each transaction. However, most of the analytics logic relates to entities rather than transactions. For example, all transactions for the same entity (e.g., party) may trigger detection, whilst the detection logic is based on the party's activity in the detection period.
Reference is made to
In Phase A Detection shown in
Initial fetch 502 may fetch the profiles and accumulation period data needed for the detection (e.g., for a card). For example, initial fetch 502 may fetch the card profiles and device profiles and the previous activity by the card set. The data which is fetched may be used for Actimize detection (e.g., 408 of
Partial Model Calculation 504 may calculate custom events and perform analytics models, both for internal indicative features and custom indicative features. Partial model calculation 504 may determine an analytics risk score.
Variable Enhancements 506 may run phase A variables.
SMO 508 may be an Intelligence Server (IS) exit point that can be used by analytics to enrich out-of-the-box models (e.g., using internal indicative features and/or custom indicative features). SMO 508 may override the analytics risk score. The final step of the SMO model may be to recommend whether or not to proceed to phase B, although the final decision may additionally or alternatively, be made by the filter 510.
Filter 510 may decide whether or not to perform phase B of the detection process of
In Phase B Detection shown in
Second fetch 602 may retrieve data based on more complex queries than initial fetch 502, for example, multiple payees per transaction.
Complete Model Calculation 604 may perform additional internal indicative features and custom indicative features.
Variable Enhancements 606 may perform more calculations based on newly retrieved sets.
SMO 608 may decide the final score for the transaction. This can be based on the same or additional models as SMO 508.
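A hedged sketch of the overall two-phase flow (the function names, injected callables, and threshold are assumptions): phase A computes a cheap preliminary score for every transaction, and the filter decides whether the costlier phase B runs:

```python
from typing import Callable

def detect(txn: dict,
           phase_a_model: Callable[[dict, dict], float],
           phase_b_model: Callable[[dict, dict, dict], float],
           fetch_profile: Callable[[dict], dict],
           fetch_deep: Callable[[dict], dict],
           threshold: float = 0.5) -> float:
    """Two-phase detection: cheap phase A for all traffic; the filter gate
    (cf. 510) decides whether the costly phase B (602-608) runs."""
    profile = fetch_profile(txn)               # initial fetch 502
    score_a = phase_a_model(txn, profile)      # partial model + SMO 504-508
    if score_a < threshold:                    # filter 510 stops most traffic
        return score_a
    deep = fetch_deep(txn)                     # second fetch 602
    return phase_b_model(txn, profile, deep)   # complete model + SMO 604-608
```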
Other or different operations or orders of operations may be used than shown in
Calculating base activities: Activities are a way to logically group together events that occur in the client's systems, e.g., as follows:
Base Activities: Activities may be divided into multiple base activities. Base activities may represent a specific activity the customer performed and determine which detection models are calculated for a transaction. Each transaction may be mapped to one and only one base activity. A base activity may be calculated for each transaction. The default base activity is usually determined according to the channel and the transaction type, and/or additional fields and calculations.
Calculating Base Activities: The tables in this section provide details of example fields used to calculate the base activity for each combination of solution, channel and transaction type. The base activity of a transaction may be set by combining the channel type and the transaction type as mapped in data integration. The definition of some base activities may also be based on value(s) of additional field(s) and/or calculated indicator(s), as detailed in the tables in this section. For an acquirer, the base activity may be calculated by combining the channel type, the message purpose and additional fields, as detailed in the relevant tables.
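For illustration, base-activity calculation may amount to a keyed lookup plus optional refinements; the table contents and field names below are invented placeholders, not the actual mapping tables:

```python
# Each transaction maps to exactly one base activity, keyed primarily by
# (channel type, transaction type).
BASE_ACTIVITY_TABLE = {
    ("online", "payment"): "ONLINE_PAYMENT",
    ("online", "login"): "ONLINE_LOGIN",
    ("branch", "withdrawal"): "BRANCH_WITHDRAWAL",
}

def base_activity(txn: dict, default: str = "UNMAPPED") -> str:
    key = (txn["channel_type"], txn["transaction_type"])
    activity = BASE_ACTIVITY_TABLE.get(key, default)
    # Some base activities also depend on additional fields or indicators.
    if activity == "ONLINE_PAYMENT" and txn.get("is_international"):
        activity = "ONLINE_PAYMENT_INTERNATIONAL"
    return activity
```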
Financial fraud is an issue with far-reaching consequences in various industries, including government and corporate sectors, and for ordinary consumers. Increasing dependence on new technologies such as cloud and mobile computing has compounded the problem in recent years. Not surprisingly, financial institutions have turned to automated processes using numerical and computational methods. Data-mining-based approaches have been shown to be useful because of their ability to identify small anomalies in large data sets. There are many different types of fraud, as well as a variety of data mining methods, and research is continually being undertaken to find the best approach for each case. Financial fraud events take place frequently and result in huge financial losses, causing severe problems for government and business. However, detecting such fraud has always been challenging. With the rapid development of e-commerce and e-payment, the problem of online transaction fraud has become increasingly prominent. Compared with traditional areas, online transactions involve a considerably larger volume of fund transfers. Therefore, online (incremental) fraud detection systems offer banks and financial institutions much value and are in demand. Fraudulent transactions occur in streaming processes, where there is a need to detect anomalous behavior rapidly and instantaneously.
Embodiments of the invention may fetch data (e.g., using data integration 402 of
Data Pre-processing may include performing one or more of the following operations:
Fraud review may include determining one or more of the following factors which may impact the quality of the fraudulent dataset, e.g.:
Feature Selection Considerations may review data mapping and data validation documents and/or exclude data elements that are associated with incorrect mapping or with known data issues, for example, as follows:
Other or different system preferences, features, and applications may be used.
Learning without forgetting may refer to a reduction in forgetting (retaining more prior training than if no retention was trained) that allows some degree of forgetting, may refer to a maximum threshold of forgetting (e.g., incorrectly predicting less than 25%, 10%, etc. of old tasks) or minimum threshold of retention (e.g., correctly predicting at least 50%, 80%, etc. of old tasks), and/or may refer to training new tasks based on information (training data) from old tasks.
Model parameters may refer to weights of a neural network, hyper-parameters such as an activation function, or more generally to any other model parameters, or other explicit and implicit parameters.
Embodiments of the invention may be used to train models for various applications, such as, security, image event recognition, computer vision, virtual or augmented reality, speech recognition, text understanding, fraud detection, or other applications of deep learning. In the application of facial recognition, a device may use the model to efficiently perform facial recognition to trigger the device to unlock itself or a physical door when a match is detected. In the application of security, a security camera system may use the model to efficiently detect a security breach and sound an alarm or other security measure. In the application of autonomous driving, a vehicle computer may use the model to control driving operations, e.g., to steer away to avoid a detected object. In the application of fraud detection, an alarm system may use the model to detect, report and take action (e.g., send alarms) when fraud is detected.
Reference is made to
Operating system 115 may be or may include code to perform tasks involving coordination, scheduling, arbitration, or managing operation of computing device 100, for example, scheduling execution of programs. Memory 120 may be or may include, for example, a Random Access Memory (RAM), a read only memory (ROM), a Flash memory, a volatile or non-volatile memory, or other suitable memory units or storage units. Memory 120 may be or may include a plurality of different memory units. Memory 120 may store for example, instructions (e.g. code 125) to carry out a method as disclosed herein, and/or data such as low-level action data, output data, etc.
Executable code 125 may be any application, program, process, task or script. Executable code 125 may be executed by controller 105 possibly under control of operating system 115. For example, executable code 125 may be one or more applications performing methods as disclosed herein. In some embodiments, more than one computing device 100 or components of device 100 may be used. One or more processor(s) 105 may be configured to carry out embodiments of the present invention by for example executing software or code. Storage 130 may be or may include, for example, a hard disk drive, a floppy disk drive, a Compact Disk (CD) drive, a universal serial bus (USB) device or other suitable removable and/or fixed storage unit. Data described herein may be stored in a storage 130 and may be loaded from storage 130 into a memory 120 where it may be processed by controller 105.
Input devices 135 may be or may include a mouse, a keyboard, a touch screen or pad or any suitable input device or combination of devices. Output devices 140 may include one or more displays, speakers and/or any other suitable output devices or combination of output devices. Any applicable input/output (I/O) devices may be connected to computing device 100, for example, a wired or wireless network interface card (NIC), a modem, printer, a universal serial bus (USB) device or external hard drive may be included in input devices 135 and/or output devices 140.
Embodiments of the invention may include one or more article(s) (e.g. memory 120 or storage 130) such as a computer or processor non-transitory readable medium, or a computer or processor non-transitory storage medium, such as for example a memory, a disk drive, or a USB flash memory, encoding, including or storing instructions, e.g., computer-executable instructions, which, when executed by a processor or controller, carry out methods disclosed herein.
Reference is made to
In operation 1100, a processor (e.g., controller 105 of
In operation 1110, a processor (e.g., controller 105 of
A process or processor may train the machine learning model in a sequence of a plurality of sequential training iterations respectively associated with the sequence of a plurality of training tasks. In each of the plurality of sequential training iterations the machine learning model is trained by iterating over operations 1120-1130.
In operation 1120, a processor (e.g., controller 105 of
In operation 1130, a processor (e.g., controller 105 of
The subset of shared model parameters may be modified when training all of the plurality of training tasks and the subset of task-specific parameters are modified only when training the specific associated task but not the other non-specifically associated tasks. The task-specific parameters for the current training iteration may be generated based on a compressed encoding of the one or more training samples associated with the current training task and a non-compressed version of the one or more training samples associated with the previous training task. The compressed encoding may be generated by an encoder trained by adding a mean square error reconstruction loss of the one or more training samples associated with the previous training task to a penalized form of a Wasserstein distance between the distribution of the compressed encoding and a multivariate normal distribution of an embedded low dimensional space.
A process or processor may iteratively repeat operations 1110-1130 for each new task (e.g., setting the current task to the previous task and the new task to the current task).
These operations may be executed in a different order, some operations may be skipped or combined, and/or other operations may be added.
Embodiments of the invention may improve the technologies of computer automation, machine learning, computer bots, big data analysis, and computer use and automation analysis by using specific algorithms to analyze large pools of data, a task which is impossible, in a practical sense, for a person to carry out. Embodiments may enable identifying automation opportunities more effectively, quickly and cheaply, and finding longer routines to automate.
One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The embodiments described herein are therefore to be considered in all respects illustrative rather than limiting. In the detailed description, numerous specific details are set forth in order to provide an understanding of the invention. However, it will be understood by those skilled in the art that the invention can be practiced without these specific details. In other instances, well-known methods, procedures, components, modules, units and/or circuits have not been described in detail so as not to obscure the invention.
Embodiments may include different combinations of features noted in the described embodiments, and features or elements described with respect to one embodiment or flowchart can be combined with or used with features or elements described with respect to other embodiments.
Although embodiments of the invention are not limited in this regard, discussions utilizing terms such as, for example, “processing,” “computing,” “calculating,” “determining,” “establishing”, “analyzing”, “checking”, or the like, can refer to operation(s) and/or process(es) of a computer, or other electronic computing device, that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information non-transitory storage medium that can store instructions to perform operations and/or processes.
The term set when used herein can include one or more items. Unless explicitly stated, the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently.
This application claims the benefit of U.S. Provisional Application Ser. No. 63/149,516, filed Feb. 15, 2021, which is hereby incorporated by reference in its entirety.