POST-PROCESSING OUTPUT DATA OF A CLASSIFIER

Information

  • Patent Application
  • Publication Number
    20210397900
  • Date Filed
    June 15, 2021
  • Date Published
    December 23, 2021
Abstract
Provided is a computer-implemented method for post-processing output data of a classifier, including the steps: a. providing a validation data set with a plurality of labelled sample pairs, wherein each labelled sample pair comprises a model input and a corresponding model output; b. providing a plurality of perturbation levels; c. generating at least one perturbated sample pair for each labelled sample pair of the plurality of labelled sample pairs using a perturbation method based on the respective labelled sample pair and at least one perturbation level of the plurality of perturbation levels; d. determining a post-processing model based on the plurality of perturbated sample pairs; e. applying the determined post-processing model on testing data to post-process the output data of the classifier; and f. providing the post-processed output data of the classifier. Also provided is a corresponding technical unit and computer program product.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to European Application No. 20181134.6, having a filing date of Jun. 19, 2020, the entire contents of which are hereby incorporated by reference.


FIELD OF TECHNOLOGY

The following relates to a computer-implemented method for post-processing output data of a classifier. Further, the following relates to a corresponding technical unit and a computer program product.


BACKGROUND

Artificial intelligence (“AI”) systems for decision making in dynamically changing environments are known from the conventional art. Such AI systems require not only high predictive power, but also uncertainty awareness. A meaningful and trustworthy predictive uncertainty is particularly important for real-world applications where the distribution of input samples can drift away from the training distribution. Continuously monitoring model performance and reliability under such domain drift scenarios is facilitated by well calibrated confidence scores: if model accuracy decreases due to shifts in the input distribution, the confidence scores change in a coordinated fashion, reflecting the true correctness likelihood of a prediction.


Previous attempts to obtain well-calibrated estimates of predictive uncertainties have focused on training intrinsically uncertainty-aware probabilistic neural networks or post-processing unnormalized logits to achieve in-domain calibration. However, the known approaches cannot provide consistently well-calibrated predictions under dataset shifts.


SUMMARY

An aspect relates to a computer-implemented method for post-processing output data of a classifier in an efficient and reliable manner.


This problem is solved by a computer-implemented method for post-processing output data of a classifier, comprising the steps:

  • a. Providing a validation data set with a plurality of labelled sample pairs, wherein each labelled sample pair comprises a model input and a corresponding model output;
  • b. Providing a plurality of perturbation levels;
  • c. Generating at least one perturbated sample pair for each labelled sample pair of the plurality of labelled sample pairs using a perturbation method based on the respective labelled sample pair and at least one perturbation level of the plurality of perturbation levels;
  • d. Determining a post-processing model based on the plurality of perturbated sample pairs;
  • e. Applying the determined post-processing model on testing data to post-process the output data of the classifier; and
  • f. Providing the post-processed output data of the classifier.


Accordingly, embodiments of the invention are directed to a method for post-processing output data of a classifier in the context of machine learning. In other words, the method is directed to a post-processing algorithm. In an embodiment, the output logits of the classifier are post-processed.
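For orientation only, the overall flow of steps a. to f. can be sketched in Python as follows; the function and variable names (perturb, fit, classifier, and so on) are illustrative placeholders and not part of the disclosure:

    def postprocess_classifier_outputs(validation_set, perturbation_levels,
                                       perturb, fit, classifier, test_inputs):
        # Steps a. and b.: the validation data set and perturbation levels are given.
        # Step c.: generate perturbated sample pairs at varying perturbation levels.
        perturbed = [(perturb(x, y, eps), y)
                     for (x, y) in validation_set
                     for eps in perturbation_levels]
        # Step d.: determine (train) the post-processing model on the perturbed pairs.
        postprocessor = fit(perturbed)
        # Steps e. and f.: post-process and provide the classifier's output data.
        return [postprocessor(classifier(x)) for x in test_inputs]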


The classifier is a trained machine learning model, in particular an AI (“Artificial Intelligence”) model. Exemplary classifiers, including neural networks, are listed further below.


In the first steps a. and b., the input data sets, namely the validation data set and the perturbation levels, are provided or received as input for step c.


The validation data set comprises a set of labelled sample pairs, also referred to as samples. The validation set comes from the same distribution as the training set. In the context of machine learning, each sample is a pair comprising an input object, in particular a vector or matrix, and a desired output value or label (also called the supervisory signal). Accordingly, the model input can equally be referred to as the input object, and the model output as the output value or label.


The perturbation levels can be interpreted as the perturbation strength. A perturbation level quantifies how far away from the training distribution a perturbed sample lies. The perturbation levels are chosen to span the entire spectrum of domain shift, from in-domain to truly out-of-domain (OOD; for OOD samples a model has random accuracy). The perturbation levels can be denoted as values epsilon, e.g. between 0 and 1, and can be randomly sampled or selected via alternative methods.


The input data sets can be received via one or more interfaces and/or can be stored in a storage unit for data storage, with data transmission from the storage unit to a computing unit via respective interfaces. The transmission includes receiving and sending data in both directions.


Next, in step c., the perturbation method is applied to the received input data sets, resulting in perturbated sample pairs. In other words, perturbated sample pairs of varying perturbation strength are generated based on the validation data set, in particular using the Fast Gradient Sign Method (FGSM).
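As a minimal illustrative sketch of such a perturbation step, assuming a differentiable PyTorch classifier (the names and signature are assumptions, not the disclosed implementation):

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon):
        """Generate an FGSM-perturbated input at perturbation level epsilon."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Shift each input dimension by epsilon in the direction that increases the loss.
        return (x + epsilon * x.grad.sign()).detach()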


In the next steps d. to f., the post-processing of the output data of the classifier is performed. The post-processing can be parametric or non-parametric. Accordingly, a monotonic function, such as piecewise temperature scaling, can be used to transform the unnormalized logits of the classifier into post-processed logits.


In more detail, a post-processing model is determined based on the plurality of perturbated sample pairs; in other words, the post-processing model is trained. Thereby, an optimizer such as Nelder-Mead can be applied together with a calibration metric such as the log likelihood, the Brier score or the expected calibration error (ECE).
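For illustration, fitting a single temperature by minimizing the negative log likelihood with SciPy's Nelder-Mead optimizer could look as follows; the arrays adv_logits and adv_labels are synthetic placeholders standing in for the perturbated validation set:

    import numpy as np
    from scipy.optimize import minimize

    def negative_log_likelihood(T, logits, labels):
        # Log-softmax of the temperature-scaled logits, evaluated at the true labels.
        z = logits / T[0]
        log_q = z - np.logaddexp.reduce(z, axis=1, keepdims=True)
        return -log_q[np.arange(len(labels)), labels].mean()

    rng = np.random.default_rng(0)
    adv_logits = rng.normal(size=(100, 3))     # placeholder perturbed-set logits
    adv_labels = rng.integers(0, 3, size=100)  # placeholder labels
    result = minimize(negative_log_likelihood, x0=np.array([1.0]),
                      args=(adv_logits, adv_labels), method="Nelder-Mead")
    fitted_temperature = result.x[0]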


Then, the determined post-processing model is applied to testing data to post-process the output data of the classifier. This step can be repeated throughout the life-cycle of the model, in particular whenever a new classification is made. In other words, the trained post-processing model can be applied any time a prediction is made. Hence, once the post-processing model is trained, no more perturbed sample pairs are needed.


In the last step, the post-processed output data of the classifier is provided.


The advantage of the method according to embodiments of the invention is that trained classifiers can be post-processed in an efficient and reliable manner without the need for retraining.


Another advantage is that the method has no negative effect on the accuracy. The method ensures that the classifier is well calibrated not only for in-domain predictions but also yields well calibrated predictions under domain drift.


In one aspect the classifier is a trained machine learning model selected from the group comprising: SVM, xgboost, random forest and neural network. Accordingly, the classifier or trained machine learning model can be selected in a flexible manner according to the specific application case, underlying technical system and user requirements.


In another aspect the perturbation method is a noise function selected from the group comprising: Fast gradient sign method (FGSM) and Gaussian function. The FGSM has proven to be particularly advantageous: since not only the direction but also the strength of the domain drift that may occur after model deployment remains unknown, the adversarials can be generated at a variety of noise levels covering the entire spectrum from in-domain to truly out-of-domain.
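A Gaussian noise function as the gradient-free alternative could be sketched as follows (an illustration, not the disclosed parameterization):

    import numpy as np

    def gaussian_perturb(x, epsilon, rng=None):
        # Add isotropic Gaussian noise scaled by the perturbation level epsilon.
        rng = rng or np.random.default_rng()
        return x + epsilon * rng.standard_normal(x.shape)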


A further aspect of embodiments of the invention is a technical unit for performing the aforementioned method.


The technical unit may be realized as any device, or any means, for computing, in particular for executing a software, an app, or an algorithm. For example, the unit may comprise a central processing unit (CPU) and a memory operatively connected to the CPU.


The unit may also comprise an array of CPUs, an array of graphical processing units (GPUs), at least one application-specific integrated circuit (ASIC), at least one field-programmable gate array, or any combination of the foregoing. The unit may comprise at least one module which in turn may comprise software and/or hardware. Some, or even all, modules of the unit may be implemented by a cloud computing platform.


A further aspect of embodiments of the invention is a computer program product (non-transitory computer readable storage medium having instructions, which when executed by a processor, perform actions) directly loadable into an internal memory of a computer, comprising software code portions for performing the steps according to the aforementioned method when the computer program product is running on a computer.





BRIEF DESCRIPTION

Some of the embodiments will be described in detail, with references to the following FIGURES, wherein like designations denote like members, wherein:



FIG. 1 illustrates a flowchart of the method according to embodiments of the invention.





DETAILED DESCRIPTION


FIG. 1 illustrates a flowchart of the method according to embodiments of the invention. The method is directed to a post-processing approach for classifiers, such as deep neural networks and gradient boosted decision trees (xgboost), that ensures that output scores are well calibrated under any gradual domain shift, covering the entire spectrum from in-domain to truly out-of-domain samples.


The method can be split into three distinct stages, as listed in the following:

  • Stage 1: Generating perturbed samples according to method steps S1 to S3
  • Stage 2: Fitting the post-processing model, e.g. using Nelder-Mead to determine the parameters of generalized temperature scaling, according to method step S4
  • Stage 3: Applying the post-processing model to the outputs of the classifier according to method step S5.


Stages 1 and 2 are performed only once. Stage 3 can be performed repeatedly.


Generation of the Perturbated Sample Pairs S1-S3

A set of samples is generated which covers the entire spectrum from in-domain samples to truly out-of-domain samples in a continuous and representative manner. To this end, the fast gradient sign method (FGSM) is applied to the validation data set of sample pairs to generate perturbated sample pairs (S3) of varying perturbation strength. More specifically, for each sample pair in the validation data set, the derivative of the loss is determined with respect to each input dimension and the sign of this gradient is recorded. If the gradient cannot be determined analytically (e.g., for decision trees), one can resort to a 0th-order approximation and determine the gradient using finite differences. Then, noise ε is added to each input dimension in the direction of its gradient. For each sample pair, a noise level can be selected at random, such that the adversarial validation set comprises representative samples from the entire spectrum of domain drift, as shown in the pseudo code of Algorithm 1 and its explanation.
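A sketch of such a 0th-order approximation via central finite differences, assuming a scalar-valued loss_fn (names are illustrative):

    import numpy as np

    def gradient_sign_fd(loss_fn, x, h=1e-4):
        """Estimate the sign of the loss gradient without analytic derivatives."""
        sign = np.zeros(x.shape)
        for j in range(x.size):
            e = np.zeros(x.shape)
            e.flat[j] = h
            # Compare the loss slightly above and below x along dimension j.
            sign.flat[j] = np.sign(loss_fn(x + e) - loss_fn(x - e))
        return sign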


  \begin{algorithm}[H]
   \caption{PORTAL with trained neural network $f(x)$, a set of perturbation levels
   $\mathcal{E}=\{0.001, 0.002, 0.004, 0.008, 0.016, 0.032, 0.064, 0.128, 0.256, 0.512\}$, complexity
   parameter $\zeta=1$, validation set $(X, Y)$, and empty perturbed validation set
   $(X_\mathcal{E}, Y_\mathcal{E}, Z_\mathcal{E}, Z^r_\mathcal{E})$.}\label{alg1}
   \begin{algorithmic}[1]
   \For{$(x, y)$ in $(X, Y)$}
    \For{$\epsilon\;\mathrm{in}\;\mathcal{E}$}
     \State Generate adversarial sample $x_\epsilon$ using $\epsilon_\zeta=\epsilon/\zeta$
     \State Use neural network $f(x_\epsilon)$ to compute unnormalized logits $\bm{z_\epsilon}$ and logit range $z_\epsilon^r$
     \State Add $(x_\epsilon, y, \bm{z_\epsilon}, z_\epsilon^r)$ to $(X_\mathcal{E}, Y_\mathcal{E}, Z_\mathcal{E}, Z^r_\mathcal{E})$
    \EndFor
   \EndFor
   \State Initialize $\bm{\theta}$
   \State Optimize $\bm{\theta}$ using the Nelder-Mead optimizer for the log-likelihood of the perturbed validation set $\mathcal{L}(\bm{\theta}) = -\sum_{i=1}^{N_\mathcal{E}} y_i \log \hat{Q}_i(\bm{\theta}) = -\sum_{i=1}^{N_\mathcal{E}} y_i \log \sigma_{SM}(\mathbf{z}_i/T(z^r_i;\bm{\theta}))$
   \end{algorithmic}
  \end{algorithm}


Algorithm 1 Generation of adversarial validation set Vadv based on validation set V, consisting of a collection of labelled samples {(x, y)}, with x being the model inputs and y the model outputs. N denotes the number of samples in V, and ε = {0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45} the set of perturbation levels.

Require: Validation set V and empty adversarial set Vadv
1: for i in 1:N do
2:   Read sample pair (xi, yi) from V
3:   Randomly sample ϵi from ε
4:   Generate adversarial sample pair (xadv, y) using the FGSM method based on ϵi
5:   Add (xadv, y) to Vadv
6: end for

xadv denotes an adversarial input generated from x using the FGSM method.
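Read as Python, the loop of Algorithm 1 could be sketched as follows; perturb stands for any FGSM-style perturbation callable (an assumption for illustration):

    import random

    EPSILONS = [0.0, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45]

    def build_adversarial_set(V, perturb):
        """One randomly drawn perturbation level per sample pair (Algorithm 1)."""
        V_adv = []
        for x, y in V:
            eps = random.choice(EPSILONS)          # line 3 of Algorithm 1
            V_adv.append((perturb(x, y, eps), y))  # lines 4 and 5 of Algorithm 1
        return V_adv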









According to an alternative embodiment, the formulation of Algorithm 1 differs in that not only one adversarial sample is generated per sample pair; instead, FGSM is applied for all available epsilons. Thereby the size of the adversarial validation set is increased by a factor equal to the size of the set of epsilons. In other words, different perturbation strategies can be used, e.g., based on image perturbation. The advantage is that the method according to embodiments of the invention can also be applied to black-box models where it is not possible to compute the gradient.
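The alternative formulation, sketched with the same placeholder names as above, applies every level to every pair:

    def build_adversarial_set_all_levels(V, perturb):
        # Every perturbation level is applied to every sample pair, so the
        # adversarial set grows by a factor of len(EPSILONS).
        return [(perturb(x, y, eps), y) for x, y in V for eps in EPSILONS]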


Generation of the Post-Processed Logits S4-S6

This stage covers the generation of the post-processing model. According to this, a strictly monotonic parameterized function is used to transform the unnormalized logits of the classifier. For example, Platt scaling, temperature scaling, other parameterizations of a monotonic function, or non-parametric alternatives can be used. In an embodiment according to the following equation, a novel parameterization is used which adds additional flexibility to known functions by introducing range-adaptive temperature scaling. While in classical temperature scaling a single temperature is used to transform logits across the entire spectrum of outputs, here a range-specific temperature is used for different value ranges.


The following is a formula of an embodiment:

$$T(z^r; \boldsymbol{\theta}) = \mathrm{exp\_id}\!\left(\theta_1\,(z^r + \theta_2)^{\theta_3} + \theta_0\right) \qquad (5)$$
with $\boldsymbol{\theta}=[\theta_0, \ldots, \theta_3]$ parameterizing the temperature $T(z^r; \boldsymbol{\theta})$ and $z^r=\max(\mathbf{z})-\min(\mathbf{z})$ being the range of an unnormalized logits tuple $\mathbf{z}$. $\theta_0$ can be interpreted as an asymptotic dependency on $z^r$. The function $\mathrm{exp\_id}(x) = x + 1$ for $x > 0$, and $\exp(x)$ otherwise, can be used to ensure a positive output. This parameterized temperature is then used to obtain calibrated confidence scores $\hat{Q}_i$ for sample $i$ based on unnormalized logits:
$$\hat{Q}_i = \max\,\sigma_{SM}\!\left(\mathbf{z}_i\,/\,T(z_i^r; \boldsymbol{\theta})\right)$$

For piecewise temperature scaling with cut points $c = (C_1, \ldots, C_{H-1})$ and temperatures $T = (T_1, \ldots, T_H)$, the logit transformation can be written as

$$T_{c,T}: x \mapsto \begin{cases} \dfrac{x}{T_1} & \text{if } x < C_1 \\[1ex] \dfrac{x - C_{h-1}}{T_h} + \displaystyle\sum_{l=1}^{h-1} \dfrac{C_l - C_{l-1}}{T_l} & \text{if } C_{h-1} \le x < C_h,\; h = 2, \ldots, H \end{cases} \qquad (6)$$

with $H := \dim(T)$, $C_0 := 0$, and $C_H := \infty$.
$\sigma_{SM}$ denotes the softmax function. The parameters $\boldsymbol{\theta}$ of the function are then determined by optimizing a calibration metric based on the adversarial validation set. Calibration metrics can be the log likelihood, the Brier score or the expected calibration error; see also Algorithm 2.
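The range-adaptive and piecewise transformations of equations (5) and (6) could be sketched as follows; since the equations above are reconstructed from the original filing, the parameter names and details are assumptions:

    import numpy as np

    def exp_id(x):
        # Ensures a positive output: x + 1 for x > 0, exp(x) otherwise.
        return x + 1.0 if x > 0 else np.exp(x)

    def temperature(z_range, theta):
        """Range-adaptive temperature T(z^r; theta) of equation (5)."""
        th0, th1, th2, th3 = theta
        return exp_id(th1 * (z_range + th2) ** th3 + th0)

    def confidence(z, theta):
        """Calibrated confidence score: max softmax of the rescaled logits."""
        z_range = z.max() - z.min()
        scaled = z / temperature(z_range, theta)
        q = np.exp(scaled - scaled.max())
        return (q / q.sum()).max()

    def piecewise_scale(x, temps, cuts):
        """Continuous piecewise temperature scaling as in equation (6).
        temps = [T_1, ..., T_H], cuts = [C_1, ..., C_{H-1}]; C_0 = 0, C_H = inf."""
        bounds = [0.0] + list(cuts) + [np.inf]
        acc = 0.0
        for h, T in enumerate(temps):
            if x < bounds[h + 1]:
                return acc + (x - bounds[h]) / T
            acc += (bounds[h + 1] - bounds[h]) / T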


Algorithm 2 Fit parameterized post-processing model u = f(z, T), where f is a strictly monotonic function parameterized by parameters T and maps the unnormalized logits z = C(x) of a classifier C to transformed (still unnormalized) logits u. Let g denote a calibration metric that is used to compute a scalar calibration measure w based on a set of logits along with ground truth labels.

Require: Adversarial set Vadv (from Algorithm 1), function f with initial parameters T, calibration metric g.
1: repeat
2:   Read sample pairs {(xadv, y)} from Vadv. Let Y be the set of all labels.
3:   Compute post-processed logits u = f(z, T) for all z = C(xadv), comprising set U.
4:   Perform optimization step and update T to optimize g(U, Y)
5: until optimization converged
6: return optimized T

In an alternative embodiment with a black-box classifier where logits are not available, Algorithm 2 can be adapted such that unnormalized logits are generated by computing z = log(C(x)). Optimizers can advantageously be selected according to the form of the metric (e.g., Nelder-Mead for piecewise temperature scaling) in a flexible manner.
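As an illustrative sketch of Algorithm 2 with a Nelder-Mead optimization step (transform and metric are placeholder callables, not the disclosed implementation):

    import numpy as np
    from scipy.optimize import minimize

    def fit_postprocessing(adv_logits, adv_labels, transform, theta0, metric):
        """Tune the parameters of a monotonic logit transform by minimizing a
        scalar calibration metric g on the adversarial validation set."""
        def objective(theta):
            # Post-processed (still unnormalized) logits U = f(z, T) for all z.
            U = np.array([[transform(v, theta) for v in row] for row in adv_logits])
            return metric(U, adv_labels)
        return minimize(objective, theta0, method="Nelder-Mead").x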









REFERENCE SIGNS



  • S1 to S6 Method steps 1 to 6


Claims
  • 1. A computer-implemented method for post-processing output data of a classifier, comprising: a. providing a validation data set with a plurality of labelled sample pairs, wherein each labelled sample pair comprises a model input and a corresponding model output; b. providing a plurality of perturbation levels; c. generating at least one perturbated sample pair for each labelled sample pair of the plurality of labelled sample pairs using a perturbation method based on the respective labelled sample pair and at least one perturbation level of the plurality of perturbation levels; d. determining a post-processing model based on the plurality of perturbated sample pairs; e. applying the determined post-processing model on testing data to post-process the output data of the classifier; and f. providing the post-processed output data of the classifier.
  • 2. The computer-implemented method according to claim 1, wherein the classifier is a trained machine learning model selected from the group comprising: SVM, xgboost, random forest and neural network.
  • 3. The computer-implemented method according to claim 1, wherein the perturbation method is a noise function selected from the group comprising: Fast gradient sign method (FGSM) and Gaussian function.
  • 4. A technical unit for performing the method steps according to claim 1.
  • 5. A computer program product, comprising a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement a method directly loadable into an internal memory of a computer, comprising software code portions for performing the steps according to claim 1 when the computer program product is running on a computer.
Priority Claims (1)
Number        Date           Country   Kind
20181134.6    Jun. 19, 2020  EP        regional