The present disclosure relates to the field of digital computer systems, and more specifically, to a method for class-incremental learning of a classifier.
Deep convolutional neural networks (CNNs) have achieved remarkable success in various computer vision tasks, such as image classification, owing to the availability of large curated datasets as well as huge computational and memory resources. This, however, poses significant challenges for their applicability to smart agents deployed in new and dynamic environments, where there is a need to continually learn about novel classes from very few training samples, and under resource constraints.
Various embodiments of the disclosure are provided, specifically a method for continual learning of a classifier, a computer program product, and a system, as described by the subject matter of the independent claims. Advantageous embodiments of the disclosure are described in the dependent claims. Embodiments of the disclosure can be freely combined with each other if they are not mutually exclusive.
In an embodiment of the disclosure, a method for continual training of a classifier is provided. The classifier includes a controller and an explicit memory. The method includes pre-training the classifier using a first training dataset that includes data samples of a set of base classes. The method includes using a set of output vectors provided by the controller in response to the controller receiving input data samples for determining a set of prototype vectors that indicate the set of base classes respectively. The controller of the classifier is configured to provide the set of output vectors indicating the base classes in response to receiving the input data samples. The method further includes storing the set of prototype vectors in the explicit memory. The method further includes iteratively: receiving a second training dataset that includes data samples of a set of second classes, adding to the explicit memory a set of output vectors that indicate the set of second classes by providing the second training dataset to the classifier, retraining the classifier using the received second training dataset and previously received training datasets using as target the prototype vectors in the explicit memory, inferring the retrained classifier using the training datasets resulting in an updated set of prototype vectors indicating the base and second classes, and updating the explicit memory with the updated set.
In another embodiment of the disclosure, a computer program product is provided. The computer program product includes a processor and a computer-readable storage medium having computer-readable program code embodied therewith. When called by the processor, the computer-readable program code is configured to cause the processor to pre-train the classifier using a first training dataset that includes data samples of a set of base classes. When called by the processor, the computer-readable program code is further configured to cause the processor to use a set of output vectors provided by the controller in response to the controller receiving input data samples for determining a set of prototype vectors that indicate the set of base classes respectively. The controller of the classifier is configured to provide the set of output vectors indicating the base classes in response to receiving the input data samples. When called by the processor, the computer-readable program code is further configured to cause the processor to store the set of prototype vectors in an explicit memory. When called by the processor, the computer-readable program code is further configured to cause the processor to iteratively: receive a second training dataset that includes data samples of a set of second classes, add to the explicit memory a set of output vectors indicating the set of second classes by providing the second training dataset to the classifier, retrain the classifier using the received second training dataset and previously received training datasets using as target the prototype vectors in the explicit memory, infer the retrained classifier using the training datasets which results in an updated set of prototype vectors that indicates the base and second classes, and update the explicit memory with the updated set.
In another embodiment of the disclosure, a computer system for continual training of a classifier is provided. The classifier includes a controller and a memory, herein referred to as an explicit memory. The computer system includes a processor and a computer-readable storage medium having computer-readable program code embodied therewith. When called by the processor, the computer-readable program code is configured to cause the processor to pre-train the classifier using a first training dataset that includes data samples of a set of base classes. When called by the processor, the computer-readable program code is further configured to cause the processor to use a set of output vectors provided by the controller in response to the controller receiving input data samples for determining a set of prototype vectors that indicate the set of base classes respectively. The controller of the classifier is configured to provide the set of output vectors indicating the base classes in response to receiving the input data samples. When called by the processor, the computer-readable program code is further configured to cause the processor to store the set of prototype vectors in the explicit memory. When called by the processor, the computer-readable program code is further configured to cause the processor to iteratively: receive a second training dataset that includes data samples of a set of second classes, add to the explicit memory a set of output vectors indicating the set of second classes by providing the second training dataset to the classifier, retrain the classifier using the received second training dataset and previously received training datasets using as target the prototype vectors in the explicit memory, infer the retrained classifier using the training datasets which results in an updated set of prototype vectors that indicates the base and second classes, and update the explicit memory with the updated set.
The second classes of a current second training dataset may be one or more novel classes. The novel classes are classes which did not occur in the previous first training dataset or in the zero or more previously received second training datasets. Optionally, the second classes may comprise one or more novel classes together with one or more classes of the previous first training dataset and of the zero or more previously received second training datasets.
In the following, embodiments of the disclosure are explained in greater detail, by way of example only, referring to the drawings in which:
The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Classification may refer to the identification of which of a set of categories a data sample (e.g., an observation) belongs to. Classification examples may include classifying a given email into the “spam” or “non-spam” class, or classifying an object in an image into one of a set of object classes. The classification may be performed by a classifier. The classifier may, for example, comprise a controller and an explicit memory. The controller may comprise a machine learning model that may be trained to classify input data samples. Thus, training the classifier comprises training the controller. The training may further comprise updating the content of the explicit memory. The disclosure may enable applicability of the classifier to smart agents deployed in new and dynamic environments, because the classifier may continually learn about novel classes from very few training samples, and under resource constraints.
The classifier may be trained using a set of training datasets D^1, D^2, . . . , D^s. The training datasets may, for example, be received sequentially in time, e.g., the training dataset D^j is received after the training dataset D^i if j>i. In order to train or retrain the classifier using a newly received training dataset, the classifier may be used to initialize the content of the explicit memory so that the classes of the new training dataset are represented by prototype vectors. For example, the first training dataset D^1 may first be used to obtain initial output vectors of the classifier. The initial output vectors may be combined (e.g., averaged) per class to determine or derive initial prototype vectors that represent the set of base classes respectively. This stage may be referred to as the “initial pass” stage. For example, the explicit memory may comprise the set of initial prototype vectors P_0^1.
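Purely as an illustration of this per-class averaging, the following sketch (in PyTorch; the function name, tensor shapes, and the stacked prototype tensor are assumptions made for illustration, not features of the disclosure) computes one initial prototype vector per class from the controller's output vectors:

```python
import torch

@torch.no_grad()
def initial_pass(controller, samples, labels, num_classes):
    # Forward-propagate the samples and average the d-dimensional output
    # vectors per class to obtain one initial prototype vector per class.
    outputs = controller(samples)                        # shape (N, d)
    return torch.stack([outputs[labels == c].mean(dim=0)
                        for c in range(num_classes)])    # shape (num_classes, d)
```

The returned tensor would then be stored in the explicit memory as the initial prototype set.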
Moreover, the classifier may be first trained using the first training dataset D^1. The training may be performed in accordance with a supervised learning approach. This first training may comprise a pre-training and/or meta-learning. Pre-training may be done initially for a number of epochs using all classes of the first dataset. Meta-training may be done later over a higher number of episodes; in each episode, a subset of classes and samples from those classes are randomly selected for the optimization. At the end, the classifier is both pretrained and meta-learned. This first training may be referred to herein as pre-training, and the classifier resulting from it may be referred to as the pre-trained classifier. The first training dataset may comprise labelled data samples of a set of base classes. For example, the first training dataset may be defined as D^1 = {(x_n^1, y_n^1) : n = 1, . . . , N^1},
with input data samples x_n^1, e.g., images, and corresponding ground-truth labels y_n^1. The labels y_n^1 ∈ C^1 may represent the set of base classes, where C^1 is the set of base classes and c = |C^1| is its size. The total number of samples may be given as N^1 = c·k, where k is the number of data samples per class. The first training dataset may be large enough to provide a reliably trained classifier. In particular, each base class may be provided with sufficient data samples; the number of data samples per base class in the first training dataset may be higher than a predefined minimum number k_min of data samples, e.g., k > k_min. During inference, the pre-trained classifier may receive at the controller a data sample as input and may predict the class of the data sample by the controller. For that, the controller may provide an output vector of dimension d representing the class of the input data sample. The controller may provide the output vectors in a hyperdimensional embedding space whose dimensionality may remain fixed and may therefore be independent of the number of classes seen in the past and the future. The dimension d may, for example, be chosen as d ≥ 256 and preferably d < |D^S|, where D^S := D^1 ∪ . . . ∪ D^S and S is the total number of training datasets.
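To make the episodic meta-training mentioned above more concrete, a small hypothetical sampler is sketched below (the n_way and k_shot values and the dictionary layout are illustrative assumptions only):

```python
import random

def sample_episode(samples_by_class, n_way=5, k_shot=5):
    # One meta-training episode: randomly pick a subset of classes and a few
    # samples from each picked class.
    classes = random.sample(sorted(samples_by_class), n_way)
    episode = [(x, c) for c in classes
               for x in random.sample(samples_by_class[c], k_shot)]
    random.shuffle(episode)
    return episode
```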
After pre-training the controller, the first training dataset D^1 may be used again to infer the pre-trained controller. In particular, the input data samples of the first training dataset D^1 may be forward propagated through the pre-trained controller. This stage may be referred to herein as the “last pass” stage. The resulting output vectors provided by the controller may be used to determine or derive prototype vectors that represent each class of the set of base classes. For example, the provided output vectors for each base class may be averaged to obtain one vector, which is the prototype vector of that base class. The prototype vectors
may be stored in the explicit memory. The prototype vectors stored in the explicit memory may, for example, be accessed by comparing the similarities between an output query vector q of the classifier and all the prototype vectors, where said output vector q is obtained by inference of the pre-trained classifier using an input query sample. The similarity l_i may, for example, be defined for a given class i as l_i = cos(tanh(q), tanh(p_i)), where tanh(·) is the hyperbolic tangent function and cos(·,·) the cosine similarity. Thus, the disclosure may provide a content-based attention mechanism between the controller and the explicit memory by computing a similarity score for each memory entry with respect to a given query. After training the classifier, the explicit memory may be updated with the updated prototype vectors.
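A minimal sketch of this memory access, assuming the prototype vectors are stacked row-wise in a single tensor (the function name is illustrative), could look as follows:

```python
import torch
import torch.nn.functional as F

def query_explicit_memory(q, prototypes):
    # l_i = cos(tanh(q), tanh(p_i)) for every prototype p_i (a row of `prototypes`).
    qn = F.normalize(torch.tanh(q), dim=0)            # (d,)
    pn = F.normalize(torch.tanh(prototypes), dim=1)   # (C, d)
    sims = pn @ qn                                    # (C,) cosine similarities
    return int(sims.argmax()), sims                   # predicted class and all scores
```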
Thus, upon receiving a current training dataset of current classes, the classifier may be trained by: first executing the “initial pass” stage using the current dataset to initialize the explicit memory with prototypes representing the current classes, followed by training the classifier with the current dataset, before executing the “last pass” stage to update the explicit memory using again the current dataset but also previously received datasets.
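The three steps of such a session may be summarized by the following sketch, in which the explicit memory is modeled as a dictionary keyed by class identifier and the retraining routine is left abstract (both are assumptions for illustration; the class sets of the datasets are assumed disjoint):

```python
import torch

def run_session(controller, explicit_memory, past_datasets, new_dataset, retrain_fn):
    samples, labels = new_dataset
    # (1) "initial pass": add prototypes for the classes of the new dataset.
    with torch.no_grad():
        out = controller(samples)
        for c in labels.unique().tolist():
            explicit_memory[c] = out[labels == c].mean(dim=0)
    # (2) retrain on all datasets received so far, prototypes as targets.
    all_datasets = past_datasets + [new_dataset]
    retrain_fn(controller, all_datasets, explicit_memory)
    # (3) "last pass": refresh every prototype with the retrained controller.
    with torch.no_grad():
        for s, l in all_datasets:
            out = controller(s)
            for c in l.unique().tolist():
                explicit_memory[c] = out[l == c].mean(dim=0)
    return all_datasets, explicit_memory
```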
After pre-training the classifier with the first training dataset, further training datasets D^2, . . . , D^s, named second training datasets, may be received in order to further train the classifier. The second training dataset may comprise labelled data samples of novel classes. The classes of the different training datasets may or may not be mutually exclusive across the different training datasets; in the mutually exclusive case, C^i ∩ C^j = ∅ for all i ≠ j, where C^1 may be the set of base classes and, for j ≠ 1, C^j may be a set of classes which may be named second classes. The second classes C^j of a given jth training dataset D^j may be novel classes which are different from the previous classes C^1, C^2, . . . , C^(j−1) of the previous datasets D^1, D^2, . . . , D^(j−1) respectively. Alternatively, the second classes C^j of the jth training dataset D^j may comprise novel classes in addition to classes from any one of the previous class sets C^1, C^2, . . . , C^(j−1). In the following, and for simplification of the description, the second classes comprise only novel classes; however, the skilled person can implement the method accordingly for second classes comprising novel classes and one or more previous classes. This few-shot continual learning of the classifier may ensure that, in any subsequent session, the classifier is prepared to deal with any number of training data samples.
For example, upon receiving the training dataset
with input data samples x_n^2, e.g., images, and corresponding ground-truth labels y_n^2, the explicit memory may be initialized with prototypes of the novel classes by executing the “initial pass” stage. For that, the input data samples of the training dataset D^2 may be provided to the controller and the resulting output vectors may be used to obtain prototypes representing the novel classes. Thus, the explicit memory may comprise prototype vectors
representing both the base and the current novel classes. P_0^2 comprises the updated prototypes for the base classes C^1 and initial prototypes for the novel classes C^2. The classifier may be (re)trained using the first and second training datasets D^1 and D^2. Using all previous training datasets may mitigate the catastrophic forgetting issue. The retraining of the classifier may be performed using as targets the prototype vectors stored in the explicit memory. The retraining may, for example, be performed by minimizing the distance between the output vectors of a given class of the controller and the corresponding prototype vectors of the explicit memory.
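One possible realization of this distance minimization is sketched below, using 1 − cosine similarity as the distance between an output vector and the prototype of its class (the choice of distance and the function name are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(outputs, labels, prototypes):
    # outputs: (N, d) controller outputs; prototypes: (C, d); labels: (N,) class ids.
    targets = prototypes[labels]      # prototype of each sample's class
    return (1.0 - F.cosine_similarity(outputs, targets, dim=1)).mean()
```

A typical usage would be loss = prototype_alignment_loss(controller(x), y, P), followed by a standard backward pass and optimizer step over the controller parameters.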
After being retrained, the classifier may be inferred with the training dataset 2 and previous training dataset 1 so that the output vectors of each class may be averaged and stored in the explicit memory as prototype vectors of the classes. The explicit memory may thus comprise the set of prototype vectors
where C^2 := C^1 ∪ C^2.
Further training datasets may be received and may be processed as described above with reference to the dataset 2. In particular, for each received sth training dataset
representing classes s, the classifier may be (re)trained using the training dataset s as described in the following.
The explicit memory may be initialized with prototypes representing the current classes s in the explicit memory. For that, the “initial pass” stage may be executed using the current dataset s. This may result in the explicit memory having prototypes
representing all classes C^1, . . . , C^s. P_0^s comprises the updated prototypes for all previously seen classes C^1, . . . , C^(s−1) and initial prototypes for the current novel classes C^s. The retraining of the classifier may be performed using as targets the prototype vectors stored in the explicit memory. The retraining may, for example, be performed using the current dataset D^s and further using the previously received training datasets D^1 to D^(s−1), by minimizing the distance between the output vectors of a given class of the controller and the corresponding prototype vectors of the explicit memory. After the retraining, the “last pass” stage may be executed. For that, the classifier may be inferred with the training dataset D^s and the previous training datasets D^1 to D^(s−1), wherein the output vectors of each class may be averaged and stored in the explicit memory as the prototype vectors of the classes. The explicit memory may thus comprise the set of updated prototype vectors
where C^s := C^1 ∪ . . . ∪ C^s. The disclosure may thus enable a linear growth of the explicit memory size with respect to the number of encountered classes.
In an example, the controller may include a feature extractor and a classification head. The feature extractor f_θ1 may map the data samples from an input domain X to a d_f-dimensional feature space, and the classification head g_θ2 may map the extracted features to the d-dimensional embedding space.
Thus, the pre-trained controller may comprise a pre-trained feature extractor and pre-trained classification head. Providing the controller with two independent components may enable a flexible retraining of the controller. For example, the retraining of the controller may comprise retraining only one component while freezing the other component. According to one embodiment, the feature extractor may be frozen and the classification head may be retrained using the further received training datasets 2, . . . , s.
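A minimal PyTorch sketch of such a two-component controller is given below; the layer sizes and the concrete architecture are illustrative assumptions, not the architecture of the disclosure:

```python
import torch.nn as nn

class Controller(nn.Module):
    # Feature extractor f_theta1 (convolutional layers) followed by a single
    # fully connected classification head g_theta2 that maps d_f features to
    # the d-dimensional embedding space.
    def __init__(self, d_f=64, d=256):
        super().__init__()
        self.feature_extractor = nn.Sequential(            # f_theta1
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_f, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(d_f, d)                       # g_theta2

    def forward(self, x):
        return self.head(self.feature_extractor(x))

controller = Controller()
# Freezing the feature extractor so that only the head is retrained:
for p in controller.feature_extractor.parameters():
    p.requires_grad = False
```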
In an example, an extra memory may be provided. The extra memory may be configured to store, for each class i, the averaged activation vector a_i. The usage of the extra memory may avoid additional computation on the frozen feature extractor, as well as the additional storage that would have been needed to store the input samples to the feature extractor, which are generally larger in size than the averaged activation vectors provided to the classification head. The activation vector a_i may be a d_f-dimensional compressed vector that represents the globally averaged activations of class i, and may allow the determination of the corresponding prototype using g_θ2(a_i).
For example, for each received sth training dataset
the feature extractor may be inferred with the input data samples x_n^s and thus provide extracted feature vectors f_θ1(x_n^s). The extracted feature vectors of each class may be averaged to obtain the corresponding averaged activation vector a_i, and the resulting activation vectors
may be stored in the GAAM. The classification head may be retrained by using as input all the activation vectors in the GAAM,
which represent all classes received so far.
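The maintenance of this memory can be sketched as follows, modeling the GAAM as a dictionary keyed by class identifier (an assumption made for illustration):

```python
import torch

@torch.no_grad()
def update_gaam(feature_extractor, gaam, samples, labels):
    # Store one d_f-dimensional, globally averaged activation vector per new class.
    feats = feature_extractor(samples)                 # shape (N, d_f)
    for c in labels.unique().tolist():
        gaam[c] = feats[labels == c].mean(dim=0)
    return gaam
```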
In case the feature extractor is not frozen, the content of the GAAM may be initialized and updated with the averaged activation vectors in a similar way as the explicit memory. For example, for a current sth training dataset
representing classes C^s, the GAAM may be initialized with activation vectors representing the current classes C^s. For that, during the “initial pass” stage execution, the current dataset D^s is provided as input to the feature extractor. The outputs of the feature extractor representing each class may be averaged to obtain the corresponding activation vector. This may result in the GAAM having activation vectors representing all classes received so far. After retraining the classifier, the GAAM may be updated during the execution of the “last pass” stage, where the feature extractor is inferred with the training dataset D^s and the previous training datasets D^1 to D^(s−1), wherein the output vectors of each class may be averaged and stored in the GAAM as the activation vector of the class. The GAAM may thus comprise the set of activation vectors for all classes C^1 ∪ . . . ∪ C^s. If the feature extractor is frozen, the initial content of the GAAM may already be the optimal one, i.e., executing the “last pass” stage may not change the initial content of the GAAM. In this case, the execution of the “last pass” may use the activation vectors currently stored in the GAAM and provide them as inputs to the classification head.
The retraining of the classification head may be performed by using a set of target prototype vectors K*. For a current training dataset, the target prototype vectors may, for example, be derived from the prototype vectors
of the explicit memory. In one example, K* = P_0^s. In another example, the target prototype vectors may be provided so as to create separation between nearby prototype pairs, which may optimally yield close to zero cross-correlation between the prototype pairs. A computationally cheap yet effective option may be to add some sort of noise to the prototypes P_0^s, e.g., quantization noise. For example, the prototypes P_0^s may be quantized to bipolar vectors by applying the element-wise sign operation, to obtain the targets K* = sign(P_0^s). The classification head may be retrained such that its output aligns with the bipolarized prototypes K*. Instead of attempting to optimize over every training sample, this may allow aligning the globally averaged activations available in the GAAM with the bipolarized prototypes K*. The classification head may have the task of mapping localist features from the feature extractor to a distributed representation. Thus, updating the parameters θ2 of the classification head may be sufficient, while the parameters θ1 of the feature extractor may be kept frozen during retraining. Due to the averaged prototype-based retraining and the linearity of the classification head, it may be sufficient to pass the averaged activations from the GAAM through the fully connected layer.
The retraining of the classification head may be performed such that a distance between output vectors of the classification head and corresponding prototype vectors in the explicit memory is minimized. For that, the minimization may be performed using the following equation over a number T of iterations:
where F = −Σ_{i=1..c} sh(k_i*, g_θ2(a_i)), the sum running over the c classes received so far, and sh denotes a soft hamming similarity between the target prototype k_i* and the output of the classification head for the averaged activation a_i of class i.
After the retraining of the classification head, the explicit memory may be updated. In particular, after T iterations of parameter updates, the final prototype vectors P^s are determined by passing the globally averaged activations A^s through the retrained classification head one last time, to obtain the final prototype vectors as p_i = g_θ2(a_i) for each class i.
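Putting the pieces of this example together, a possible sketch of the head retraining and the subsequent memory update is shown below; cosine similarity is used here as a stand-in for the soft hamming similarity sh, and the iteration count and learning rate are arbitrary illustrative values:

```python
import torch
import torch.nn.functional as F

def retrain_head(head, gaam, prototypes, T=50, lr=1e-2):
    # Align g_theta2(a_i) with the bipolarized target k_i* = sign(p_i),
    # then recompute the prototypes from the GAAM content.
    classes = sorted(gaam)
    A = torch.stack([gaam[c] for c in classes])                    # (C, d_f)
    K = torch.sign(torch.stack([prototypes[c] for c in classes]))  # (C, d) targets
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(T):
        opt.zero_grad()
        loss = -F.cosine_similarity(head(A), K, dim=1).sum()       # plays the role of F
        loss.backward()
        opt.step()
    with torch.no_grad():
        P = head(A)                                                # p_i = g_theta2(a_i)
    return {c: P[i] for i, c in enumerate(classes)}                # new memory content
```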
In an example, the feature extractor comprises the nonlinear layers of a CNN and the classification head comprises a fully connected layer of the CNN. The classification head may be the final fully connected layer of the CNN. Connecting the feature extractor to the fully connected layer may enable forming an embedding network with a hyperdimensional distributed representation.
In an example, the target prototype vectors K* may be provided as nudged prototype vectors. This may provide an improved prototype alignment strategy based on solving an optimization problem instead of simply bipolarizing the prototypes as K* = sign(P_0^s). The nudged prototype vectors may be provided such that they simultaneously a) improve the inter-class separability by attaining a lower similarity between pairs of nudged prototype vectors, and b) remain close to the initial averaged prototype vectors P_0^s.
To obtain the nudged prototype vectors from the current prototype vectors P_0^s stored in the explicit memory, the initial nudged prototype vectors may be initialized to the current prototype vectors as follows: K*(0) = P_0^s. The nudged prototype vectors are then updated U times in a training loop to find an optimal set of nudged prototype vectors unique to the given activation vectors A^s available in the GAAM. The updates to the nudged prototype vectors may be based on two distinct loss functions that aim to meet the two aforementioned objectives. The first main objective may be to decrease the inter-class similarity, which may be achieved by minimizing the cross-correlation between the prototypes. In particular, the nudged prototype vectors may be updated using backpropagation with the standard gradient descent algorithm given as:
where O = Σ_{i,j=1..c; i≠j} exp(sh(k_i*(u), k_j*(u))) and M = −Σ_{i=1..c} sh(k_i*(u), p_i), where k_i*(u) is the nudged (quasi-orthogonal) prototype vector obtained in iteration u for the ith class, p_i is the prototype vector stored in the explicit memory in association with the ith class, and sh denotes a soft hamming similarity. The final nudged prototype vectors K* := K*(U) may thus be used as targets to retrain the classification head for T iterations.
The second objective to keep the updated prototypes similar to the initial prototypes K*(0) may be enabled by the second loss function M. This may avoid significant deviations from the original representations of the initial base categories on which the classifier was trained.
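A possible software sketch of this nudging loop is shown below; cosine similarity again stands in for the soft hamming similarity sh, and the number of updates U and the learning rate are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def nudge_prototypes(P0, U=100, lr=1e-2):
    # Start from K*(0) = P0 and take U gradient steps on O + M, where O penalizes
    # similarity between different prototypes and M keeps them close to P0.
    K = P0.clone().detach().requires_grad_(True)
    P0 = P0.detach()
    opt = torch.optim.SGD([K], lr=lr)
    for _ in range(U):
        opt.zero_grad()
        Kn = F.normalize(K, dim=1)
        sim = Kn @ Kn.t()                                        # pairwise similarities
        O = torch.exp(sim).sum() - torch.exp(sim.diag()).sum()   # keep only i != j terms
        M = -F.cosine_similarity(K, P0, dim=1).sum()
        (O + M).backward()
        opt.step()
    return K.detach()                                            # nudged targets K*
```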
In an example, an in-memory computing core is provided. The in-memory computing core includes a crossbar array structure comprising row lines, column lines, and resistive memory elements coupled between the row lines and the column lines at the junctions formed by the row and column lines. The resistive memory elements of each column line may be programmed to represent the values of a respective prototype vector of the explicit memory, and the query vector may be input to the crossbar array for performing the similarity search.
The controller 101 may, for example, be defined by a set of trainable parameters θ. The controller 101 may thus be said to implement a function or model F_θ, where for each input data sample x_n^s the controller may provide an output vector F_θ(x_n^s) of dimension d. For each input data sample x_n^s of a given class i, the controller 101 may provide an output vector k_n^i that indicates or represents the class i. The output vectors k_n^i that belong to the same class i may be averaged to obtain the prototype vector p_i. The set of prototype vectors P^s may be stored in the explicit memory 103. Those prototype vectors may be used as targets for further training the controller 101.
The explicit memory 103 may be initialized with a set of prototype vectors
using a first training dataset 1. Alternatively, the set of prototype vectors P01 may be user defined. The classifier may be pre-trained in stage 201 using the first training dataset 1. The first training dataset may comprise data samples such as images of a set of base classes. The first training dataset may be defined as follows:
with input data samples x_n^1, e.g., images, and corresponding ground-truth labels y_n^1. The labels y_n^1 ∈ C^1 may represent the set of base classes, where C^1 may be the set of base classes, c = |C^1| its size, and the total number of samples may be given as N^1 = c·k, where k is the number of data samples per class. The number of data samples per base class in the first training dataset may be higher than a predefined minimum number k_min of data samples, e.g., k > k_min. This pre-training may result in a pre-trained classifier. The training may be performed using the content of the explicit memory 103 as target. For example, the distance between output vectors of the controller 101 and the associated prototype vector of the set of prototype vectors P_0^1 may be minimized during the training to find optimal values of the set of trainable parameters θ of the controller 101.
After pre-training, an updated or optimized set of prototype vectors P1 that represents the base classes 1 may be determined in stage 203. The set of prototype vectors
may be obtained using the output vectors of the controller 101. For example, for each base class i, the prototype vector p_i may be determined as the average of the output vectors of the controller 101 for input data samples of the base class i. For each input data sample x_n^1 of a given base class i, the pre-trained controller 101 may provide an output vector k_n^i that indicates or represents the base class i. This may be referred to as forward propagation of the pre-trained controller with the data sample x_n^1. Those output vectors k_n^i that belong to the same class i may be averaged to obtain the prototype vector p_i. The set of prototype vectors P^1 may be stored in stage 205 in the explicit memory 103 by replacing the initial set of prototype vectors P_0^1 with the updated set of prototype vectors P^1.
After pre-training the classifier with the first training dataset D^1, further training datasets D^2, . . . , D^s, named second training datasets, may be received in order to further train the classifier 100. The second training dataset may comprise labelled data samples of novel classes. The classes of the different training datasets may be mutually exclusive across different training datasets, i.e., C^i ∩ C^j = ∅ for all i ≠ j, where C^1 may be the set of base classes and, for j ≠ 1, C^j may be a set of novel classes. The number of data samples per novel class in the further training datasets may be smaller than the predefined minimum number k_min of data samples, e.g., k < k_min. The second training datasets D^2, . . . , D^s may be received and/or processed successively, and for each currently received second training dataset D^s (in stage 207) the following may be performed.
The explicit memory 103 may be initialized in stage 208 with a set of prototype vectors
for the novel classes C^s. This set of prototype vectors P_0^s may be obtained by executing the “initial pass” stage using the training dataset D^s. The explicit memory 103 may thus comprise the optimized prototype vectors of the previous classes and the initial set of prototype vectors for the current classes C^s as follows:
The classifier 100 may be retrained in stage 209 using the received second dataset s but also using previously received training datasets 1 . . . s−1. The current training dataset may be defined as follows
The retraining may be performed using as target the prototype vectors
where C^s := C^1 ∪ . . . ∪ C^s, in the explicit memory 103. In one example, the retraining may be performed by inputting the input data samples
of all the training datasets to the controller 101. The retraining may, for example, be performed by minimizing the distance between the output vectors of a given class of the controller and the corresponding prototype vector
of the explicit memory 103.
The retrained classifier 100 may be inferred in stage 211 using all the received training datasets 1, . . . , s to provide an updated set of prototype vectors
The set of prototype vectors
may replace the set of prototype vectors Pos in the explicit memory 103. For example, the inference may be performed by inputting the input data samples
of all the training datasets to the retrained controller 101 that provides respective output vectors, wherein output vectors that belong to the same class i may be averaged to obtain the prototype vector pi which is stored in the explicit memory 103 to update/replace its content.
The explicit memory may be initialized with a set of prototype vectors
using a first training dataset 1. The classifier may be pre-trained in stage 301 using a first training dataset 1. The first training dataset may comprise data samples such as images of a set of base classes. The first training dataset may be defined as follows:
with input data samples x_n^1, e.g., images, and corresponding ground-truth labels y_n^1. The labels y_n^1 ∈ C^1 may represent the set of base classes, where C^1 may be the set of base classes, c = |C^1| its size, and the total number of samples may be given as N^1 = c·k, where k is the number of data samples per class. The number of data samples per base class in the first training dataset may be higher than a predefined minimum number k_min of data samples, e.g., k > k_min. This pre-training may result in a pre-trained classifier. The training may be performed using the content of the explicit memory 103 as target. For example, the distance between output vectors of the controller 101 and the associated prototype vector of the set of prototype vectors P_0^1 may be minimized during the training in order to find optimal values of the sets of trainable parameters θ1 and θ2 of the feature extractor and the classification head respectively.
After pre-training the classifier, a set of prototype vectors P1 may be determined in stage 303 and stored in the explicit memory. The set of prototype vectors
may be obtained using the output vectors of the controller (i.e., output vectors of the classification head). For example, for each base class i, the prototype vector pi may be determined as the average of the output vectors of the controller for input data samples of the base class i. For each input data sample xn1 of a given base class i the pre-trained controller may provide an output vector kni that indicates or represents the base class i. Those output vectors kni that belong to the same class i may be averaged to obtain the prototype vector pi. The activation vectors representing the base classes may be determined using feature vectors of the pre-trained feature extractor. For that, the first training dataset 1 may be reused to infer the pre-trained feature extractor in order to provide the averaged activation vectors
associated with the base classes C^1. For example, the activation vectors may be provided as follows. The pre-trained feature extractor may receive the input data samples x_n^1 and provide corresponding extracted feature vectors f_θ1(x_n^1), which may be averaged per class to obtain the activation vectors a_i. The set of activation vectors A^1
and the set of prototype vectors P^1 may be determined in one go, that is, the same input samples of the first training dataset may be used to provide the activation vectors and the prototype vectors.
The set of prototype vectors P1 may be stored in stage 305 in the explicit memory and the set of activation vectors
may be stored in the activation memory. The pre-trained feature extractor may be frozen for further received training datasets.
After pre-training the classifier with the first training dataset D^1, further training datasets D^2, . . . , D^s, named second training datasets, may be received in order to further train the classifier. The second training dataset may comprise labelled data samples of novel classes. The classes of the different training datasets may be mutually exclusive across different training datasets, i.e., C^i ∩ C^j = ∅ for all i ≠ j, where C^1 may be the set of base classes and, for j ≠ 1, C^j may be a set of novel classes. The number of data samples per novel class in the further training datasets may be smaller than the predefined minimum number k_min of data samples, e.g., k < k_min. The second training datasets D^2, . . . , D^s may be received and/or processed successively, and for each currently received second training dataset D^s (in stage 307) the following may be performed.
The explicit memory may be initialized in stage 308 with a set of prototype vectors
obtained by executing the “initial pass” stage using the training dataset s. The explicit memory may thus comprise the optimized prototype vectors of the previous classes and the initial set of prototype vectors for the current classes as follows:
The “initial pass” execution may be used to update the content of the activation memory with the activation vectors representing the current novel classes. The activation memory may thus comprise the set of activation vectors
A set of target prototype vectors may, for example, be derived in stage 310 from the current prototype vectors
of the explicit memory. For example, the target prototype vectors may be provided by applying the element-wise sign operation to obtain the targets K*=sign(Pos).
The classifier may be retrained in stage 311 using the received second dataset s but also using previously received training datasets 1 . . . s−1. The retraining may be performed by inputting the activation vectors
previously stored in the activation memory to the classification head. During the retraining, the classification head may receive them from the activation memory. The activation memory may be advantageous because previously produced activation vectors are reused and there is no need to produce them again through the pre-trained and frozen feature extractor.
The current training dataset may be defined as follows
The retraining may be performed by freezing the feature extractor and retraining the classification head. The retraining of the classification head may be performed by using the set of target prototype vectors K*. The retraining may, for example, be performed by minimizing the distance between the output vectors of a given class of the controller and the corresponding prototype vector in K*.
The retrained classifier may be inferred in stage 313 using all the received training datasets 1, . . . , s to provide an updated set of prototype vectors
For example, the inference may be performed by inputting the input data samples
of all the training datasets to the retrained controller that provides respective output vectors, wherein output vectors that belong to the same class i may be averaged to obtain the prototype vector Pi. Alternatively, the activation vectors
currently stored in the activation memory may be input to the retrained classification head to produce the updated set of prototype vectors
The set of prototype vectors
may replace the set of prototype vectors Pos in the explicit memory.
The explicit memory may be initialized with a set of prototype vectors
using a first training dataset 1. The classifier may be pre-trained in stage 401 using a first training dataset 1. The first training dataset may comprise data samples such as images of a set of base classes. The first training dataset may be defined as follows:
with input data samples x_n^1, e.g., images, and corresponding ground-truth labels y_n^1. The labels y_n^1 ∈ C^1 may represent the set of base classes, where C^1 may be the set of base classes, c = |C^1| its size, and the total number of samples may be given as N^1 = c·k, where k is the number of data samples per class. The number of data samples per base class in the first training dataset may be higher than a predefined minimum number k_min of data samples, e.g., k > k_min. This pre-training may result in a pre-trained classifier. The training may be performed using the content of the explicit memory 103 as target. For example, the distance between output vectors of the controller 101 and the associated prototype vector of the set of prototype vectors P_0^1 may be minimized during the training in order to find optimal values of the sets of trainable parameters θ1 and θ2 of the feature extractor and the classification head respectively.
After pre-training the classifier, a set of prototype vectors P1 may be determined in stage 403 and stored in the explicit memory. The set of prototype vectors
may be obtained using the output vectors of the controller. For example, for each base class i, the prototype vector p_i may be determined as the average of the output vectors of the controller for input data samples of the base class i. The output vectors may be obtained during the pre-training of the classifier, e.g., for each input data sample x_n^1 of a given base class i, the controller may provide an output vector k_n^i that indicates or represents the base class i. Those output vectors k_n^i that belong to the same class i may be averaged to obtain the prototype vector p_i. The activation vectors representing the base classes may be determined using output vectors of the pre-trained feature extractor. The first training dataset D^1 may be reused to infer the pre-trained feature extractor in order to provide the averaged activation vectors associated with the base classes. For example, the activation vectors may be provided as follows. The pre-trained feature extractor may receive the input data samples x_n^1 and provide corresponding extracted feature vectors f_θ1(x_n^1), which may be averaged per class to obtain the activation vectors a_i. The set of activation vectors A^1
and the set of prototype vectors P^1 may be determined in one go, that is, the same input samples of the first training dataset may be used to provide the activation vectors and the prototype vectors.
The set of prototype vectors P1 may be stored in stage 405 in the explicit memory and the set of activation vectors
may be stored in the activation memory. The pre-trained feature extractor may be frozen for further received training datasets.
After pre-training the classifier with the first training dataset D^1, further training datasets D^2, . . . , D^s, named second training datasets, may be received in order to further train the classifier. The second training dataset may comprise labelled data samples of novel classes. The classes of the different training datasets may be mutually exclusive across different training datasets, i.e., C^i ∩ C^j = ∅ for all i ≠ j, where C^1 may be the set of base classes and, for j ≠ 1, C^j may be a set of novel classes. The number of data samples per novel class in the further training datasets may be smaller than the predefined minimum number k_min of data samples, e.g., k < k_min. The second training datasets D^2, . . . , D^s may be received and/or processed successively, and for each currently received second training dataset D^s (in stage 407) the following may be performed.
The explicit memory may be initialized in stage 408 with a set of prototype vectors
obtained by executing the “initial pass” stage using the training dataset s. The explicit memory may thus comprise the optimized prototype vectors of the previous classes and the initial set of prototype vectors for the current classes as follows:
The “initial pass” execution may be used to update the content of the activation memory with the activation vectors representing the current novel classes. The activation memory may thus comprise the set of activation vectors
The current training dataset may be defined as follows
A set of target prototype vectors K* may be provided in stage 410 as nudged prototype vectors derived from the current set of prototype vectors
This may provide an improved prototype alignment strategy based on solving an optimization problem instead of simply bipolarizing the prototypes as K* = sign(P_0^s). The nudged prototype vectors may be provided such that they simultaneously a) improve the inter-class separability by attaining a lower similarity between pairs of nudged prototype vectors, and b) remain close to the initial averaged prototype vectors P_0^s. To obtain the nudged prototype vectors from the current prototype vectors P_0^s stored in the explicit memory, the initial nudged prototype vectors may be initialized to the current prototype vectors as follows: K*(0) = P_0^s. The nudged prototype vectors are then updated U times in a training loop to find an optimal set of nudged prototype vectors unique to the given activation vectors A^s available in the GAAM. The updates to the nudged prototype vectors may be based on two distinct loss functions that aim to meet the two aforementioned objectives. The first main objective may be to decrease the inter-class similarity, which may be achieved by minimizing the cross-correlation between the prototypes. In particular, the nudged prototype vectors may be updated using backpropagation with the standard gradient descent algorithm given as:
where O = Σ_{i,j=1..c; i≠j} exp(sh(k_i*(u), k_j*(u))) and M = −Σ_{i=1..c} sh(k_i*(u), p_i), where k_i*(u) is the nudged (quasi-orthogonal) prototype vector obtained in iteration u for the ith class, p_i is the prototype vector stored in the explicit memory in association with the ith class, and sh denotes a soft hamming similarity. The final nudged prototype vectors K* := K*(U) may thus be used to retrain the classification head for T iterations.
The classifier may be retrained in stage 411 using the received second dataset s but also using previously received training datasets 1 . . . s−1. The retraining may be performed by inputting the activation vectors
previously stored in the activation memory to the classification head. During the retraining, the classification head may receive them from the activation memory. The activation memory may be advantageous because previously produced activation vectors are reused and there is no need to produce them again. The retraining may be performed by freezing the feature extractor and retraining the classification head. The retraining of the classification head may be performed by using the set of target prototype vectors K* which are the quasi-orthogonal prototype vectors.
The retrained classifier may be inferred in stage 413 using all the received training datasets 1, . . . , s to provide an updated set of prototype vectors
For example, the inference may be performed by inputting the input data samples
of all the training datasets to the retrained controller that provides respective output vectors, wherein output vectors that belong to the same class i may be averaged to obtain the prototype vector pi. Alternatively, the activation vectors
currently stored in the activation memory may be input to the retrained classification head to produce the updated set of prototype vectors
The classifier comprises a feature extractor (FE) 510 which may be the nonlinear layers of a CNN, a final fully connected layer (FCL) 511 of the CNN and an explicit memory (EM) 512. The FE 510 may have trainable parameters θ1. The FCL 511 may have the trainable parameters θ2. The EM 512 may be configured to store prototype vectors representing classes. The training of the classifier according to the present example may be performed in different phases, a first phase 500 followed by a succession of phases 501.1 to 501.S. The classifier may be provided with an additional memory GAAM 513 that may be used for training in the phases 501.1 to 501.S.
In the first phase 500, the classifier may be first trained using a first training dataset 1 comprising data samples of a set of base classes 1. The first training may comprise pre-training and/or meta-learning. For example, the first training dataset
may be provided with input data samples xn1 e.g., an image, and corresponding ground-truth labels yn1, yn1 ∈ 1. An initial set
of prototype vectors may be provided by executing the “initial pass” stage, i.e., by providing the input data samples x_n^1 to the FE 510. After pre-training the classifier using the initial set of prototype vectors as target, a set of updated prototype vectors Pp1 may be determined (e.g., in the first session 501.1 or at the pre-training phase) and stored in the explicit memory 512. The set of prototype vectors
may be obtained using the output vectors of the pre-trained FCL 511. For example, for each base class i, the prototype vector pi may be determined as the average of the output vectors of the FCL 511 for input data samples of the base class i. The output vectors may be obtained after the pre-training of the classifier e.g., for each input data sample xn1 of a given base class i the FCL 511 may provide an output vector kni that indicates or represents the base class i. Those output vectors kni that belong to the same class i may be averaged to obtain the prototype vector pi. Thus, the pre-training in the first phase 500 may result in a pre-trained FE 510 and pre-trained FCL 511. In addition, the EM 512 may store the set of prototype vectors Pp1.
In the following phases 501.1-S which may be referred to as sessions, further training datasets may be received in order to retrain the pre-trained classifier. However, before said further training datasets are processed, the first training dataset 1 may be (re)used in the first session 501.1 to retrain the classifier. Before that, a set of activation vectors
may also be determined by averaging per class the feature vectors which are output by the pre-trained FE 510 in response to receiving input samples of the first training dataset D^1. The GAAM 513 may be filled with the averaged activation vectors A^1 of the base classes C^1. For example, the activation vectors may be provided as follows. The pre-trained FE 510 may receive the input data samples x_n^1 and provide corresponding extracted feature vectors f_θ1(x_n^1), which may be averaged per class to obtain the activation vectors a_i.
In the first session 501.1, the retraining occurs in two stages 503.1 and 505.1. In the first stage 503.1, the FCL 511 is retrained with the samples of the first training dataset 1. The retraining of the FCL 511 may be performed as follows. The FCL 511 may receive as input all stored activation vectors ai associated with the base classes 1 in order to be retrained using as target the prototype vectors K*. The retraining of the FCL 511 may be performed such that a distance between output vectors of the FCL 511 and corresponding prototype vectors in the EM 512 is minimized. For that, the minimization may be performed using the following equation over a number T of iterations:
where F = −Σ_{i=1..c} sh(k_i*, g_θ2(a_i)), the sum running over the c classes received so far, and sh denotes a soft hamming similarity. In the second stage 505.1, the EM 512 may be updated by passing the activation vectors A^1 in the GAAM 513 through the retrained FCL 511 to obtain the prototype vectors p_i^1 = g_θ2(a_i), which replace the current content of the EM 512.
After the first session 501.1 is completed, in each further sth session 501.s, the classifier may be retrained with a newly received training dataset
In each further session, the retraining is done in two stages 503.s and 505.s in a similar way as described with the first session 501.1. For example, in the second session i.e., s=2, the EM 512 may be initialized with the set of prototype vectors
and the GAAM 513 may be appended with activation vectors
This may be performed by providing as input the samples x_n^2 of the training dataset D^2 to the frozen FE 510; the retrained FCL 511 may then use the activation vectors A^2 as inputs and provide output vectors which may be averaged per class to obtain the set of prototype vectors
The current content P02 of the EM 512 may comprise
Prototype vectors K* may be obtained using the sign function, e.g., K* = sign(P_0^2). The FCL 511 may be retrained using the training datasets D^1 and D^2, using as targets the current content of the EM 512, namely the set of prototype vectors K*. The FCL 511 may be retrained with the activation vectors in the GAAM 513, which represent all processed classes, namely C^1 and C^2. After the FCL 511 is retrained, the EM 512 may be updated again as described above for the first session to obtain the set of prototype vectors P^2. These two stages of the retraining may be repeated for each received training dataset. It is to be noted that the classes of the different training datasets may be mutually exclusive across different training datasets, i.e., C^i ∩ C^j = ∅ for all i ≠ j, where C^1 may be the set of base classes and, for j ≠ 1, C^j may be a set of novel classes.
The classifier comprises a feature extractor (FE) 610 which may be the nonlinear layers of a CNN, a final fully connected layer (FCL) 611 of the CNN and an explicit memory (EM) 612. The FE 610 may have trainable parameters θ1. The FCL 611 may have the trainable parameters θ2. The EM 612 may be configured to store prototype vectors representing classes. The training of the classifier according to the present example may be performed in different phases, a first phase 600 followed by a succession of phases 601.1 to 601.S. The classifier may be provided with an additional memory GAAM 613 that may be used for training in the phases 601.1 to 601.S.
In the first phase 600, the classifier may be pre-trained or meta learned using a first training dataset 1 comprising data samples of a set of base classes 1. For example, the first training dataset
may be provided with input data samples xn1 e.g., an image, and corresponding ground-truth labels yn1, yn1 ∈ 1. An initial set
of prototype vectors may be provided by executing the “initial pass” stage, i.e., by providing the input data samples x_n^1 to the FE 610. After pre-training the classifier using the initial set of prototype vectors as target, a set of prototype vectors Pp1 may be determined and stored in the explicit memory 612 (e.g., in the first session 601.1 or at the pre-training phase). The set of prototype vectors
may be obtained using the output vectors of the pre-trained FCL 611. For example, for each base class i, the prototype vector pi may be determined as the average of the output vectors of the pre-trained FCL 611 for input data samples of the base class i. The output vectors may be obtained after the pre-training of the classifier e.g., for each input data sample xn1 of a given base class i the FCL 611 may provide an output vector kni that indicates or represents the base class i. Those output vectors kni that belong to the same class i may be averaged to obtain the prototype vector pi. Thus, the pre-training in the first phase 600 may result in a pre-trained FE 610 and pre-trained FCL 611. In addition, the EM 612 may store the set of prototype vectors Pp1.
In the following phases 601.1-S which may be referred to as sessions, further training datasets may be received in order to retrain the pre-trained classifier. However, before said further training datasets are processed, the first training dataset 1 may be (re)used in the first session 601.1 to retrain the classifier. Before that, a set of activation vectors
may also be determined by averaging per class the feature vectors which are output by the pre-trained FE 610 in response to receiving input samples of the first training dataset D^1. The GAAM 613 may be filled with the averaged activation vectors A^1 of the base classes C^1. For example, the activation vectors may be provided as follows. The pre-trained FE 610 may receive the input data samples x_n^1 and provide corresponding extracted feature vectors f_θ1(x_n^1), which may be averaged per class to obtain the activation vectors a_i.
In the first session 601.1, the retraining occurs in three stages 602.1, 603.1 and 605.1. In the first stage 602.1, the FE 610 and the FCL 611 are frozen such that the content of the EM 612 is updated using the first training dataset 1 in order to obtain quasi-orthogonal prototype vectors as the new content K* of the EM 612. The quasi-orthogonal prototype vectors may be referred to as nudged prototype vectors. To obtain the nudged prototype vectors from the current prototype vectors Pp1 stored in the explicit memory 612, the initial nudged prototype vectors may be initialized to the current prototype vectors as follows: K*(0)=Pp1. The nudged prototype vectors are then updated U times in a training loop to find an optimal set of nudged prototype vectors unique to the given activation vectors available in the GAAM 613. The updates to the nudged prototype vectors may be based on two distinct loss functions. In particular, the nudged prototype vectors may be updated using backpropagation on the standard gradient descent algorithm given as:
where O = Σ_{i,j=1..c; i≠j} exp(sh(k_i*(u), k_j*(u))) and M = −Σ_{i=1..c} sh(k_i*(u), p_i), where k_i*(u) is the nudged (quasi-orthogonal) prototype vector obtained in iteration u for the ith class, p_i is the prototype vector stored in the explicit memory in association with the ith class, and sh denotes a soft hamming similarity. The final nudged prototype vectors K* := K*(U) may be stored in the EM 612, which may be frozen for the next stage 603.1.
In the second stage 603.1, the FCL 611 is not frozen as it may be retrained with the samples of the first training dataset 1. The retraining of the FCL 611 may be performed as follows. The FCL 611 may receive as input all stored activation vectors ai associated with the base classes 1 in order to be retrained using as target the prototype vectors K*. The retraining of the FCL 611 may be performed such that a distance between output vectors of the FCL 611 and corresponding prototype vectors in the EM 612 is minimized. For that, the minimization may be performed using the following equation over a number T of iterations:
where F = −Σ_{i=1..c} sh(k_i*, g_θ2(a_i)), the sum running over the c classes received so far, and sh denotes a soft hamming similarity.
In the third stage 605.1, the retrained FCL 611 is again frozen, and the EM 612 may be updated using the retrained classifier. The final prototype vectors P^1 are determined by passing the activation vectors A^1 in the GAAM 613 through the retrained FCL 611 one last time, in order to obtain the prototype vectors as p_i^1 = g_θ2(a_i) for each base class i.
After the first session 601.1 is completed, in each further sth session 601.s, the classifier may be retrained with a newly received training dataset
In each further session, the retraining is done in three stages 602.s, 603.s, and 605.s in a similar way as described with the first session 601.1. For example, in the second session, i.e., s=2, the EM 612 may be initialized with the set of prototype vectors
and the GAAM 613 may be appended with activation vectors
This may be performed by providing as input the samples xn2 of the training dataset 2 to the frozen FE 610, and the retrained FCL 611 may use the activation vectors
as inputs and provide output vectors which may be averaged per class to obtain the set of prototype vectors
The current content of the EM 612 may comprise
The current content of the EM 612 may be updated in stage 602.2 with nudged prototype vectors which are obtained using both the current dataset 2 and the previous dataset 1. The GAAM 613 may hold the activation vectors for all datasets received so far, namely 1 and 2. The new nudged prototype vectors that represent classes 1 and 2 may be obtained as described with reference to stage 602.1 of the first session 601.1. Furthermore, the FCL 611 may be retrained in stage 603.2 using the training datasets 1 and 2 and using as targets the nudged prototype vectors currently stored in the EM 612. The FCL 611 may be retrained with the activation vectors in the GAAM 613 which represent all processed classes, namely 1 and 2. After the FCL 611 is retrained, the EM 612 may be updated again in stage 605.2 as described above for the first session. These three stages of the retraining may be repeated for each received training dataset, for example as sketched below. It is to be noted that the class sets of the different training datasets may be mutually exclusive, i.e., ∀i≠j, i ∩ j = ∅, where set 1 may be the set of base classes and, for every j≠1, set j may be a set of novel classes.
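For illustration only, one complete retraining session may, for example, be sketched as follows, reusing the functions average_per_class, nudge_prototypes, retrain_fcl and recompute_prototypes from the sketches above; the name run_session is an illustrative assumption, and, for simplicity, the provisional prototypes of the new classes are computed directly from the per-class averaged activation vectors:

import torch

def run_session(fe, fcl, EM, A_all, X_new, y_new):
    # fe: frozen feature extractor; fcl: the FCL; EM: (C, d) tensor of prototype vectors;
    # A_all: (C, d_a) activation vectors of all classes seen so far (the GAAM content);
    # X_new, y_new: samples and labels of the newly received training dataset.
    with torch.no_grad():
        A_new = average_per_class(fe(X_new), y_new)   # averaged activation vectors of the new classes
        P_new = fcl(A_new)                            # provisional prototypes of the new classes
    A_all = torch.cat([A_all, A_new])                 # append the GAAM
    EM = torch.cat([EM, P_new])                       # append the EM
    K_star = nudge_prototypes(EM)                     # stage 602.s: nudge the EM content
    fcl = retrain_fcl(fcl, A_all, K_star)             # stage 603.s: retrain the FCL
    EM = recompute_prototypes(fcl, A_all)             # stage 605.s: update the EM
    return fcl, EM, A_all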
The in-memory core may, for example, comprise a crossbar array structure 700 comprising row lines (or wordlines) and column lines (or bitlines), and resistive memory elements coupled between the row lines and the column lines at junctions formed by the row and column lines. The columns of the crossbar array structure 700 are programmed using a progressive crystallization scheme such that prototype vectors (e.g., with dimension d=256), each corresponding to a few training examples, are accumulated in situ. Output vectors may be written directly to the crossbar using the progressive crystallization scheme, exploiting the fact that crystallization acts as a summation function. In the end, the prototype vectors, which are averaged (or summed) versions of the output vectors per class, are prepared internally, i.e., there is no need to externally compute the average and write it. Each column of the crossbar array structure 700 may be associated with a respective class. In this example, results are obtained after the classifier is meta-learned on the base classes and evaluated through a series of sessions, the first involving the base classes and the later sessions involving novel classes. Each novel-class session includes 5 novel classes with 5 shots per novel class, i.e., each training dataset i, for i>1, provides 5 classes. The data samples of the training datasets may be images from the CIFAR100 database.
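For illustration only, the accumulation and read-out behaviour of such a crossbar may, for example, be modelled functionally as follows; CrossbarEM, accumulate and mvm are illustrative names, and the sketch models only the behaviour, not the device physics:

import numpy as np

class CrossbarEM:
    # Functional software stand-in for the crossbar array structure 700.
    def __init__(self, d=256, num_classes=100):
        self.G = np.zeros((d, num_classes))       # one column of conductances per class

    def accumulate(self, class_idx, output_vector):
        # In-situ accumulation: each write adds to the class column, so the column
        # ends up holding the sum of the written output vectors and no external
        # averaging step is needed.
        self.G[:, class_idx] += output_vector

    def mvm(self, query):
        # In-memory matrix-vector multiplication: dot product of the query vector
        # with every stored prototype column, yielding one score per class.
        return query @ self.G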
After the crossbar array structure 700 is programmed with the prototype vectors of the classes, a similarity search between a query vector and the prototype vectors may be performed using an in-memory matrix-vector multiplication (MVM).
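For illustration only, the query-side similarity search may, for example, be sketched as follows, using the CrossbarEM stand-in from the sketch above (classify is an illustrative name):

import numpy as np

def classify(query_vector, em):
    # em: a CrossbarEM instance. A single MVM yields one similarity score per class
    # column; the class of the most similar prototype is returned.
    scores = em.mvm(query_vector)
    return int(np.argmax(scores))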
Computing environment 800 contains an example of an environment for the execution of at least some of the computer code involved in performing the inventive methods, such as continual classifier's learning code 900. In addition to continual classifier's learning code 900, computing environment 800 includes, for example, computer 801, wide area network (WAN) 802, end user device (EUD) 803, remote server 804, public cloud 805, and private cloud 806. In this embodiment, computer 801 includes processor set 810 (including processing circuitry 820 and cache 821), communication fabric 811, volatile memory 812, persistent storage 813 (including operating system 822 and continual classifier's learning code 900, as identified above), peripheral device set 814 (including user interface (UI) device set 823, storage 824, and Internet of Things (IoT) sensor set 825), and network module 815. Remote server 804 includes remote database 830. Public cloud 805 includes gateway 840, cloud orchestration module 841, host physical machine set 842, virtual machine set 843, and container set 844.
COMPUTER 801 may take the form of a desktop computer, laptop computer, tablet computer, smart phone, smart watch or other wearable computer, mainframe computer, quantum computer or any other form of computer or mobile device now known or to be developed in the future that is capable of running a program, accessing a network or querying a database, such as remote database 830. As is well understood in the art of computer technology, and depending upon the technology, performance of a computer-implemented method may be distributed among multiple computers and/or between multiple locations. On the other hand, in this presentation of computing environment 800, detailed discussion is focused on a single computer, specifically computer 801, to keep the presentation as simple as possible. Computer 801 may be located in a cloud, even though it is not shown in a cloud in the figure.
PROCESSOR SET 810 includes one, or more, computer processors of any type now known or to be developed in the future. Processing circuitry 820 may be distributed over multiple packages, for example, multiple, coordinated integrated circuit chips. Processing circuitry 820 may implement multiple processor threads and/or multiple processor cores. Cache 821 is memory that is located in the processor chip package(s) and is typically used for data or code that should be available for rapid access by the threads or cores running on processor set 810. Cache memories are typically organized into multiple levels depending upon relative proximity to the processing circuitry. Alternatively, some, or all, of the cache for the processor set may be located “off chip.” In some computing environments, processor set 810 may be designed for working with qubits and performing quantum computing.
Computer readable program instructions are typically loaded onto computer 801 to cause a series of operational stages to be performed by processor set 810 of computer 801 and thereby effect a computer-implemented method, such that the instructions thus executed will instantiate the methods specified in flowcharts and/or narrative descriptions of computer-implemented methods included in this document (collectively referred to as “the inventive methods”). These computer readable program instructions are stored in various types of computer readable storage media, such as cache 821 and the other storage media discussed below. The program instructions, and associated data, are accessed by processor set 810 to control and direct performance of the inventive methods. In computing environment 800, at least some of the instructions for performing the inventive methods may be included in continual classifier's learning code 900 in persistent storage 813.
COMMUNICATION FABRIC 811 is the signal conduction paths that allow the various components of computer 801 to communicate with each other. Typically, this fabric is made of switches and electrically conductive paths, such as the switches and electrically conductive paths that make up busses, bridges, physical input/output ports and the like. Other types of signal communication paths may be used, such as fiber optic communication paths and/or wireless communication paths.
VOLATILE MEMORY 812 is any type of volatile memory now known or to be developed in the future. Examples include dynamic type random access memory (RAM) or static type RAM. Typically, the volatile memory is characterized by random access, but this is not required unless affirmatively indicated. In computer 801, the volatile memory 812 is located in a single package and is internal to computer 801, but, alternatively or additionally, the volatile memory may be distributed over multiple packages and/or located externally with respect to computer 801.
PERSISTENT STORAGE 813 is any form of non-volatile storage for computers that is now known or to be developed in the future. The non-volatility of this storage means that the stored data is maintained regardless of whether power is being supplied to computer 801 and/or directly to persistent storage 813. Persistent storage 813 may be a read only memory (ROM), but typically at least a portion of the persistent storage allows writing of data, deletion of data and re-writing of data. Some familiar forms of persistent storage include magnetic disks and solid state storage devices. Operating system 822 may take several forms, such as various known proprietary operating systems or open source Portable Operating System Interface type operating systems that employ a kernel. The code included in continual classifier's learning code 900 typically includes at least some of the computer code involved in performing the inventive methods.
PERIPHERAL DEVICE SET 814 includes the set of peripheral devices of computer 801. Data communication connections between the peripheral devices and the other components of computer 801 may be implemented in various ways, such as Bluetooth connections, Near-Field Communication (NFC) connections, connections made by cables (such as universal serial bus (USB) type cables), insertion type connections (for example, secure digital (SD) card), connections made through local area communication networks and even connections made through wide area networks such as the internet. In various embodiments, UI device set 823 may include components such as a display screen, speaker, microphone, wearable devices (such as goggles and smart watches), keyboard, mouse, printer, touchpad, game controllers, and haptic devices. Storage 824 is external storage, such as an external hard drive, or insertable storage, such as an SD card. Storage 824 may be persistent and/or volatile. In some embodiments, storage 824 may take the form of a quantum computing storage device for storing data in the form of qubits. In embodiments where computer 801 is required to have a large amount of storage (for example, where computer 801 locally stores and manages a large database) then this storage may be provided by peripheral storage devices designed for storing very large amounts of data, such as a storage area network (SAN) that is shared by multiple, geographically distributed computers. IoT sensor set 825 is made up of sensors that can be used in Internet of Things applications. For example, one sensor may be a thermometer and another sensor may be a motion detector.
NETWORK MODULE 815 is the collection of computer software, hardware, and firmware that allows computer 801 to communicate with other computers through WAN 802. Network module 815 may include hardware, such as modems or Wi-Fi signal transceivers, software for packetizing and/or de-packetizing data for communication network transmission, and/or web browser software for communicating data over the internet. In some embodiments, network control functions and network forwarding functions of network module 815 are performed on the same physical hardware device. In other embodiments (for example, embodiments that utilize software-defined networking (SDN)), the control functions and the forwarding functions of network module 815 are performed on physically separate devices, such that the control functions manage several different network hardware devices. Computer readable program instructions for performing the inventive methods can typically be downloaded to computer 801 from an external computer or external storage device through a network adapter card or network interface included in network module 815.
WAN 802 is any wide area network (for example, the internet) capable of communicating computer data over non-local distances by any technology for communicating computer data, now known or to be developed in the future. In some embodiments, the WAN may be replaced and/or supplemented by local area networks (LANs) designed to communicate data between devices located in a local area, such as a Wi-Fi network. The WAN and/or LANs typically include computer hardware such as copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and edge servers.
END USER DEVICE (EUD) 803 is any computer system that is used and controlled by an end user (for example, a customer of an enterprise that operates computer 801), and may take any of the forms discussed above in connection with computer 801. EUD 803 typically receives helpful and useful data from the operations of computer 801. For example, in a hypothetical case where computer 801 is designed to provide a recommendation to an end user, this recommendation would typically be communicated from network module 815 of computer 801 through WAN 802 to EUD 803. In this way, EUD 803 can display, or otherwise present, the recommendation to an end user. In some embodiments, EUD 803 may be a client device, such as thin client, heavy client, mainframe computer, desktop computer and so on.
REMOTE SERVER 804 is any computer system that serves at least some data and/or functionality to computer 801. Remote server 804 may be controlled and used by the same entity that operates computer 801. Remote server 804 represents the machine(s) that collect and store helpful and useful data for use by other computers, such as computer 801. For example, in a hypothetical case where computer 801 is designed and programmed to provide a recommendation based on historical data, then this historical data may be provided to computer 801 from remote database 830 of remote server 804.
PUBLIC CLOUD 805 is any computer system available for use by multiple entities that provides on-demand availability of computer system resources and/or other computer capabilities, especially data storage (cloud storage) and computing power, without direct active management by the user. Cloud computing typically leverages sharing of resources to achieve coherence and economies of scale. The direct and active management of the computing resources of public cloud 805 is performed by the computer hardware and/or software of cloud orchestration module 841. The computing resources provided by public cloud 805 are typically implemented by virtual computing environments that run on various computers making up the computers of host physical machine set 842, which is the universe of physical computers in and/or available to public cloud 805. The virtual computing environments (VCEs) typically take the form of virtual machines from virtual machine set 843 and/or containers from container set 844. It is understood that these VCEs may be stored as images and may be transferred among and between the various physical machine hosts, either as images or after instantiation of the VCE. Cloud orchestration module 841 manages the transfer and storage of images, deploys new instantiations of VCEs and manages active instantiations of VCE deployments. Gateway 840 is the collection of computer software, hardware, and firmware that allows public cloud 805 to communicate through WAN 802.
Some further explanation of virtualized computing environments (VCEs) will now be provided. VCEs can be stored as “images.” A new active instance of the VCE can be instantiated from the image. Two familiar types of VCEs are virtual machines and containers. A container is a VCE that uses operating-system-level virtualization. This refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances, called containers. These isolated user-space instances typically behave as real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can utilize all resources of that computer, such as connected devices, files and folders, network shares, CPU power, and quantifiable hardware capabilities. However, programs running inside a container can only use the contents of the container and devices assigned to the container, a feature which is known as containerization.
PRIVATE CLOUD 806 is similar to public cloud 805, except that the computing resources are only available for use by a single enterprise. While private cloud 806 is depicted as being in communication with WAN 802, in other embodiments a private cloud may be disconnected from the internet entirely and only accessible through a local/private network. A hybrid cloud is a composition of multiple clouds of different types (for example, private, community or public cloud types), often respectively implemented by different vendors. Each of the multiple clouds remains a separate and discrete entity, but the larger hybrid cloud architecture is bound together by standardized or proprietary technology that enables orchestration, management, and/or data/application portability between the multiple constituent clouds. In this embodiment, public cloud 805 and private cloud 806 are both part of a larger hybrid cloud.
Various aspects of the present disclosure are described by narrative text, flowcharts, block diagrams of computer systems and/or block diagrams of the machine logic included in computer program product (CPP) embodiments. With respect to any flowcharts, depending upon the technology involved, the operations can be performed in a different order than what is shown in a given flowchart. For example, again depending upon the technology involved, two operations shown in successive flowchart blocks may be performed in reverse order, as a single integrated stage, concurrently, or in a manner at least partially overlapping in time.
A computer program product embodiment (“CPP embodiment” or “CPP”) is a term used in the present disclosure to describe any set of one, or more, storage media (also called “mediums”) collectively included in a set of one, or more, storage devices that collectively include machine readable code corresponding to instructions and/or data for performing computer operations specified in a given CPP claim. A “storage device” is any tangible device that can retain and store instructions for use by a computer processor. Without limitation, the computer readable storage medium may be an electronic storage medium, a magnetic storage medium, an optical storage medium, an electromagnetic storage medium, a semiconductor storage medium, a mechanical storage medium, or any suitable combination of the foregoing. Some known types of storage devices that include these mediums include: diskette, hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or Flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disk (DVD), memory stick, floppy disk, mechanically encoded device (such as punch cards or pits/lands formed in a major surface of a disc) or any suitable combination of the foregoing. A computer readable storage medium, as that term is used in the present disclosure, is not to be construed as storage in the form of transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide, light pulses passing through a fiber optic cable, electrical signals communicated through a wire, and/or other transmission media. As will be understood by those of skill in the art, data is typically moved at some occasional points in time during normal operations of a storage device, such as during access, defragmentation or garbage collection, but this does not render the storage device as transitory because the data is not transitory while it is stored.