The present invention relates to the field of Artificial Intelligence (AI). More particularly, the present invention relates to a system and a method for robustness (fairness) assessment of Artificial Intelligence (AI)-based Machine Learning (ML) models.
Systems and applications based on Artificial Intelligence (AI) and Machine Learning (ML) are widely used. It is common for data scientists to induce many ML models while attempting to provide solutions for different Artificial Intelligence (AI) tasks. Evaluating the fairness and robustness of any AI-based ML model of such systems and applications requires several steps: first, various properties of ethical bias must be detected and measured from different ethical points of view; then, those different ethical perspectives must be aggregated into one final fairness score. This final fairness score provides the data scientists with a quantitative estimation, i.e., an assessment of the fairness of the examined ML model, and can assist them in evaluating and comparing different models.
Nowadays, data scientists are mainly focused on improving the performance of ML methods. Several conventional performance measurements are used for evaluating ML models; the most popular are accuracy, precision, recall, etc. However, these performance measures evaluate the performance of AI systems and applications with no consideration of possible non-ethical consequences. Non-ethical consequences relate to sensitive information about the entities (usually user-related data), which might trigger discrimination towards one or more data distribution groups. Therefore, it is required to define performance measurements for evaluating possible ethical discrimination by AI systems and applications that are based on ML models.
Bias in machine learning (ML) models is the presence of non-ethical discrimination towards any of the data distribution groups. For example, bias may exist if male and female customers with the same attributes are treated differently. Fairness is defined as the absence of any favoritism toward an individual or a group, based on their inherent or acquired characteristics. An unfair (biased) ML model is a model whose predictions are skewed toward a specific data group [1]. Fairness and bias are considered opposite concepts: when an ML model is completely fair, it has no underlying bias (and vice versa).
A protected feature is a feature whose values may be subject to unwanted discrimination. For example, gender or race is a possible protected feature. A privileged value is a value of a protected feature whose distribution sub-group historically had a systematic advantage [2]. For example, “man” is a privileged value of the protected feature “gender”.
Underlying bias may originate from various sources. Examining the fairness of various AI-based models requires examining what the ML model has learned. Generally, ML algorithms rely on the existence of high-quality training data. Obtaining high-quality labeled data is a time-consuming task, which usually requires human effort and expertise. Obtaining sufficient data for a representative dataset, which covers all the properties of the domain in which the AI system or application is implemented, is not an easy task. Therefore, ML models are trained on a subsample of the entire population, under the assumption that patterns learned from this small subsample generalize to the entire population. When data instances are chosen non-randomly, or without matching them to the nature of the instances used for prediction, the predictions of the ML models become biased toward the dominating group in the training population [1]. An additional source of bias may be the training dataset itself [1], from which bias is inherited; this implies that the data itself contains protected features with a historically privileged value.
Nowadays, various statistical measurements can be used to examine the fairness of an ML model. These statistical measurements provide either a binary result for the existence of bias or a non-scaled bias estimation. For example, the demographic parity measure [4] returns whether the probabilities of a favorable outcome for the protected feature groups are equal, i.e., a binary result for the existence or nonexistence of bias. Several measurements provide a non-scaled bias estimation, such as normalized difference [5] and mutual information [6]. There are over twenty-five fairness measurements in the literature, each of which examines the ML model from a different ethical point of view [3].
It is therefore an object of the present invention to provide a system and method for detecting an underlying bias and the fairness level of an ML model, which can be integrated into Continuous Integration/Continuous Delivery processes.
Other objects and advantages of the invention will become apparent as the description proceeds.
A system for the assessment of robustness and fairness of AI-based ML models, comprising:
The system may be a plugin that is integrated into Continuous Integration/Continuous Delivery (CI/CD) processes.
For a given ML model, the system may be adapted to:
The properties of the model and the data may be one or more of the following:
The structural properties may be one or more of the following:
Each test in the test execution environment may output a different result in the form of a binary score representing whether underlying bias was detected, or a numeric unscaled score for the level of bias in the examined ML model.
All the test results of one protected feature may be combined by the final fairness score aggregation module, according to the minimal test score of that protected feature.
The final fairness score may be the minimal final score among the protected features.
The above and other characteristics and advantages of the invention will be better understood through the following illustrative and non-limitative detailed description of preferred embodiments thereof, with reference to the appended drawings, wherein:
The present invention provides a system for robustness (fairness) assessment according to underlying bias (discrimination) of Artificial Intelligence (AI)-based machine learning (ML) models. The system (in the form of a plugin, for example) can be integrated into a larger system, for examining the fairness and robustness of ML models that try to fulfill various AI-based tasks. The system detects underlying bias (if it exists) by providing an assessment for the AI system/application, or for the induced ML model on which the system or application is based. The proposed system (plugin) evaluates the ML model's tendency for bias in its predictions.
The present invention provides a generic fairness (robustness to bias and discrimination) testing environment, in the form of a system (plugin), which can be integrated into Continuous Integration (CI)/Continuous Delivery (CD) processes. The proposed system (plugin) is designed to serve data scientists during their continuous work of developing ML models. The system performs different tests to examine the ML model's fairness levels. Each test is an examination of a different fairness measurement and an estimation of bias, according to the test results. For a given ML model, the system first chooses the suitable bias tests, according to the model and data properties. Second, the system performs each test for each protected feature of the provided ML model and quantifies several bias scores. Then, the system generates a fairness score for each protected feature, using the corresponding bias scores. Finally, the system aggregates the fairness scores of all the protected features into a single fairness score, using a pre-defined aggregation function.
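By way of a non-limiting illustration, the flow described above (per-feature test execution, per-feature score generation and final aggregation) may be sketched in Python as follows, assuming that the suitable tests have already been selected and are provided as callables that each return a scaled fairness score; all names and signatures are illustrative assumptions rather than the actual implementation.

```python
from typing import Callable, Dict, Iterable, List

# Assumption: a test is a callable that receives the model, the dataset and a
# protected feature name, and returns a scaled fairness score in [0, 1].
FairnessTest = Callable[[object, object, str], float]

def assess_fairness(model: object,
                    dataset: object,
                    protected_features: Iterable[str],
                    selected_tests: List[FairnessTest],
                    aggregate: Callable[[Iterable[float]], float] = min) -> float:
    """Combine per-test fairness scores into a single final fairness score."""
    per_feature_scores: Dict[str, float] = {}
    for feature in protected_features:
        # Execute every recommended test for the current protected feature.
        scores = [test(model, dataset, feature) for test in selected_tests]
        # Combine the test scores of one protected feature
        # (the embodiment described herein uses the minimal test score).
        per_feature_scores[feature] = aggregate(scores)
    # Aggregate the per-feature scores into a single final fairness score
    # (here, the minimal final score among the protected features).
    return aggregate(per_feature_scores.values())
```

Passing `min` as the aggregation function corresponds to the worst-case combination used as an example later in the description; any other pre-defined aggregation function could be supplied instead.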
The Data/Model Profiler
The data/model profiler 101 creates an evaluation profile, based on the dataset and the model's properties. The profiler 101 allows the test recommendation engine 102 to recommend the most appropriate tests. The properties of the model and the data are derived from the requirements of the various tests. If one of the test requirements is not provided, the test recommendation engine will recommend only those tests which can be performed without the missing requirement.
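As a non-limiting illustration, this requirement-based selection may be sketched in Python as follows, where each repository test declares the profile properties it requires and is recommended only when all of them are available; the property names in the usage example are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class TestSpec:
    """A repository test together with the profile properties it requires (illustrative)."""
    name: str
    required_properties: Set[str] = field(default_factory=set)

def recommend_tests(repository: List[TestSpec], profile: Dict[str, bool]) -> List[TestSpec]:
    """Recommend only the tests whose requirements are satisfied by the data/model profile."""
    available = {prop for prop, present in profile.items() if present}
    return [test for test in repository if test.required_properties <= available]

# Usage example (profile property names are hypothetical):
repository = [
    TestSpec("statistical parity difference", {"predictions"}),
    TestSpec("equalized odds", {"predictions", "true labels"}),
    TestSpec("calibration", {"prediction scores", "true labels"}),
]
profile = {"predictions": True, "true labels": True, "prediction scores": False}
print([t.name for t in recommend_tests(repository, profile)])
# -> ['statistical parity difference', 'equalized odds']
```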
The properties of the model and the data are:
Additional properties that are gathered by the data/model profiler are the provided data structural properties. The data structural properties guide the test execution environment during the test execution. Such properties are:
Test Recommendation Engine
The test recommendation engine 102 receives the data and model profiles from the data/model profiler 101 and recommends the relevant tests to be selected from the tests repository 103. The tests repository 103 contains all the tests that can be executed. Currently, the tests repository contains 25 different tests that were gathered from the literature, and it is updated constantly. The currently existing tests in the tests repository 103 are specified below. Each test determines whether underlying bias exists in the ML model. The following example explains how each of the current 25 tests is used in order to detect the existence of bias (discrimination):
Consider an AI task of classifying individuals as engineers, given their properties, such as gender, education and other background features. In this example, “Gender” (male/female) is considered the protected feature.
Test Execution Environment
The test execution environment 104 gathers all the tests that were recommended by the test recommendation engine. Each test outputs a different result, in the form of a binary score representing whether underlying bias was detected, or a numeric unscaled score for the level of bias in the model. Therefore, following the execution, the test execution environment 104 transforms each test's output into a scaled numeric fairness score. The output transformation is performed according to the type of the test result:
In table 1 below, each test (from the 25 tests which are currently used by the proposed system) is categorized according to its corresponding process.
In addition, in the case of non-binary protected features, the proposed system performs the test for each protected feature value in a one-vs.-all manner. Consider, for example, the feature “disability”, which contains the values “no disability”, “minor disability” and “major disability”. The system will execute the test three times, considering the classes “no disability” vs. not “no disability”, “minor disability” vs. not “minor disability”, and “major disability” vs. not “major disability”. In order to consider the worst discrimination, the test output will be the minimum test result out of the three.
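A minimal Python sketch of this one-vs.-all procedure, assuming the underlying test is a callable that receives the predictions and a binary group-membership indicator (illustrative names):

```python
from typing import Callable, Sequence

def one_vs_all_test(test: Callable[[Sequence[int], Sequence[bool]], float],
                    predictions: Sequence[int],
                    feature_values: Sequence[str]) -> float:
    """Execute a binary fairness test once per protected-feature value (one vs. all)
    and return the minimum (worst-case) test result."""
    results = []
    for value in set(feature_values):
        # Binarize the protected feature: the current value vs. all other values.
        is_value = [v == value for v in feature_values]
        results.append(test(predictions, is_value))
    return min(results)
```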
In the next parts of the description, the specific process for each test's evaluation and use is elaborated, using the following notation:
Statistical Parity Difference—this test originally produces a binary score, therefore processed by binary score process. Statistical parity measurement yields the statistical parity difference that states:
Statistical Parity Difference=SPD=P(y=ci|fp≠vf)−P(y=ci|fp=vf)
The Statistical Parity Difference test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
result=1−MAX(|SPD|)
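By way of illustration, this calculation may be sketched in Python as follows, assuming numpy arrays of predicted labels and protected-feature values (the function name and signature are illustrative):

```python
import numpy as np

def statistical_parity_difference_score(y_pred: np.ndarray,
                                        protected: np.ndarray,
                                        favorable_class: int = 1) -> float:
    """result = 1 - MAX(|SPD|), where SPD = P(y=c | fp != vf) - P(y=c | fp = vf)
    is computed for each protected-feature value vf (one vs. all)."""
    spd_values = []
    for vf in np.unique(protected):
        in_group = protected == vf
        p_group = np.mean(y_pred[in_group] == favorable_class)   # P(y=c | fp = vf)
        p_rest = np.mean(y_pred[~in_group] == favorable_class)   # P(y=c | fp != vf)
        spd_values.append(p_rest - p_group)
    return 1.0 - max(abs(s) for s in spd_values)
```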
Disparate Impact—this test originally produces an unscaled score, therefore processed by unscaled score process. Disparate impact states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
Sensitivity (TP rate) —this test originally produces a binary score, therefore processed by binary score process. Sensitivity (TP rate) states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
Specificity (TN rate) —this test originally produces a binary score, therefore processed by binary score process. Specificity (TN rate) states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
Likelihood ratio positive (LR+) —this test originally produces a binary score, therefore processed by binary score process. Likelihood ratio positive (LR+) states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
Balance Error Rate (BER)—this test originally produces a binary score, therefore processed by binary score process. Balance Error Rate (BER) states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
Calibration—this test originally produces a binary score, therefore processed by binary score process. Calibration states:
P(y=1|s(x),fp=vf)=P(y=1|s(x),fp≠vf)
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
CLvar for ∀s∈S=variance(P(y=1|S=s,fp=vf))
result=1−MIN(CLvar)
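A possible Python sketch of this computation is given below, under the assumptions that the prediction scores take a finite set of values (continuous scores would first be binned) and that the variance is taken across the protected-feature groups; these assumptions, and the names used, are illustrative only.

```python
import numpy as np

def calibration_score(y_true: np.ndarray,
                      scores: np.ndarray,
                      protected: np.ndarray) -> float:
    """For every observed score value s, compute P(y=1 | S=s, fp=vf) per
    protected-feature group and take the variance across the groups (CLvar);
    return 1 - MIN(CLvar), following the formulas stated above."""
    cl_var = []
    for s in np.unique(scores):
        rates = []
        for vf in np.unique(protected):
            mask = (scores == s) & (protected == vf)
            if mask.any():
                rates.append(np.mean(y_true[mask] == 1))   # P(y=1 | S=s, fp=vf)
        if len(rates) > 1:
            cl_var.append(np.var(rates))                    # variance across groups
    return 1.0 - min(cl_var)
```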
Prediction Parity—this test originally produces a binary score, therefore processed by binary score process. Prediction Parity states:
P(y=1|S>sHR,fp=vf)=P(y=1|S>sHR,fp≠vf)
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
result=variance(P(y=1|S>sHR,fp=vf))
Error rate balance with score (ERBS) —this test originally produces a binary score, therefore processed by binary score process. Error rate balance with score (ERBS) states:
P(S>sHR|ŷ=0,fp=vf)=P(S>sHR|ŷ=0,fp≠vf)
and
P(S≤sHR|ŷ=1,fp=vf)=P(S≤sHR|ŷ=1,fp≠vf)
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
result=MIN(variance(P(S>sHR|ŷ=0,fp=vf)),variance(P(S≤sHR|ŷ=1,fp=vf)))
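Under the assumption that each probability is computed per protected-feature value and the variance is taken across those values (an interpretation of the formula above), this test may be sketched in Python as follows (illustrative names):

```python
import numpy as np

def erbs_score(y_hat: np.ndarray,
               scores: np.ndarray,
               protected: np.ndarray,
               s_hr: float) -> float:
    """result = MIN(var(P(S > sHR | y_hat=0, fp=vf)), var(P(S <= sHR | y_hat=1, fp=vf))),
    with one probability per protected-feature value and the variance taken
    across those values."""
    high_given_neg, low_given_pos = [], []
    for vf in np.unique(protected):
        neg = (y_hat == 0) & (protected == vf)
        pos = (y_hat == 1) & (protected == vf)
        if neg.any():
            high_given_neg.append(np.mean(scores[neg] > s_hr))   # P(S > sHR | y_hat=0, fp=vf)
        if pos.any():
            low_given_pos.append(np.mean(scores[pos] <= s_hr))   # P(S <= sHR | y_hat=1, fp=vf)
    return float(min(np.var(high_given_neg), np.var(low_given_pos)))
```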
Equalized odds—this test originally produces a binary score, therefore processed by binary score process. Equalized odds states:
P(y=1|fp=vf,yt=ci)=P(y=1|fp≠vf,yt=ci)
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
EOvar for ∀ci∈C=variance(P(y=1|fp=vf,yt=ci))
result=1−MIN(EOvar)
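An illustrative Python sketch of the stated formulas, assuming the variance is taken across the protected-feature groups for each true class (names are illustrative):

```python
import numpy as np

def equalized_odds_score(y_pred: np.ndarray,
                         y_true: np.ndarray,
                         protected: np.ndarray) -> float:
    """EOvar for every true class ci = variance of P(y=1 | fp=vf, yt=ci) across
    the protected-feature groups; result = 1 - MIN(EOvar)."""
    eo_var = []
    for ci in np.unique(y_true):
        rates = []
        for vf in np.unique(protected):
            mask = (y_true == ci) & (protected == vf)
            if mask.any():
                rates.append(np.mean(y_pred[mask] == 1))   # P(y=1 | fp=vf, yt=ci)
        eo_var.append(np.var(rates))                        # variance across groups
    return 1.0 - min(eo_var)
```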
Equal opportunity—this test originally produces a binary score, therefore processed by binary score process. Equal opportunity states:
P(y=1|fp=vf,yt=1)=P(y=1|fp≠vf,yt=1)
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
result=variance(P(y=1|fp=vf,yt=1))
Treatment equality—this test originally produces a binary score, therefore processed by binary score process. Treatment equality states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
Conditional statistical parity—this test originally produces a binary score, therefore processed by binary score process. Conditional statistical parity states:
P(y=1|fp=vf,L)=P(y=1|fp≠vf,L)
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
CSP=|P(y=1|fp=vf,L)−P(y=1|fp≠vf,L)|
result=1−MAX(CSP)
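A minimal Python sketch, assuming the legitimate attributes L are provided as a single categorical stratum label per instance (an illustrative simplification of the formula above):

```python
import numpy as np

def conditional_statistical_parity_score(y_pred: np.ndarray,
                                         protected: np.ndarray,
                                         legitimate: np.ndarray) -> float:
    """CSP = |P(y=1 | fp=vf, L) - P(y=1 | fp!=vf, L)| for each protected value vf
    and each legitimate stratum L; result = 1 - MAX(CSP)."""
    csp_values = []
    for stratum_value in np.unique(legitimate):
        stratum = legitimate == stratum_value
        for vf in np.unique(protected):
            group = stratum & (protected == vf)
            rest = stratum & (protected != vf)
            if group.any() and rest.any():
                p_group = np.mean(y_pred[group] == 1)   # P(y=1 | fp=vf, L)
                p_rest = np.mean(y_pred[rest] == 1)     # P(y=1 | fp!=vf, L)
                csp_values.append(abs(p_group - p_rest))
    return 1.0 - max(csp_values)
```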
Positive prediction value (precision) —this test originally produces a binary score, therefore processed by binary score process. Positive prediction value (precision) states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
Negative prediction value—this test originally produces a binary score, therefore processed by binary score process. Negative prediction value states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
False positive rate—this test originally produces a binary score, therefore processed by binary score process. False positive rate states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
False negative rate—this test originally produces a binary score, therefore processed by binary score process. False negative rate states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
Accuracy—this test originally produces a binary score, therefore processed by binary score process. Accuracy states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
Error rate balance (ERB) —this test originally produces a binary score, therefore processed by binary score process. Error rate balance (ERB) states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
result=MIN(FPR,FNR)
Normalized difference—this test originally produces an unscaled score, therefore processed by unscaled score process. Normalized difference states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
result=1−MAX(|ND|)
Elift ratio—this test originally produces an unscaled score, therefore processed by unscaled score process. Elift ratio states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
Odds Ratio—this test originally produces an unscaled score, therefore processed by unscaled score process. Odds Ratio states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
Mutual Information—this test originally produces an unscaled score, therefore processed by unscaled score process. Mutual Information states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
result=1−max(MI)
Balance residuals—this test originally produces an unscaled score, therefore processed by unscaled score process. Balance residuals states:
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
result=1−max(BR)
Conditional use accuracy equality—this test originally produces a binary score, therefore processed by binary score process. Conditional use accuracy equality states:
TPfp=vf=TPfp≠vf
and
TNfp=vf=TNfp≠vf
The test performs the following calculation for the protected feature values, in order to produce a single scaled fairness score result:
CUAE=MAX(|TPfp=vf−TPfp≠vf|,|TNfp=vf−TNfp≠vf|)
result=1−MAX(CUAE)
Final Fairness Score Aggregation
The final fairness score aggregation module (component) 105 aggregates the executed test results into a final fairness score for the examined model and dataset. The aggregation component 105 first computes a final score for each protected feature, and then aggregates these scores into a single overall fairness score.
In order to combine all the test results of one protected feature, many different mathematical functions can be used; for example, the system may consider the protected feature's minimal test score. In order to combine the final scores of all the examined protected features, the system may consider the minimal final score among the protected features as the final fairness score.
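By way of illustration, this example aggregation may be sketched in Python as follows (the input structure and names are illustrative):

```python
from typing import Dict, List

def aggregate_fairness_scores(test_scores_per_feature: Dict[str, List[float]]) -> float:
    """Aggregate the executed test results into a final fairness score by taking
    the minimum at both levels, as in the example described above."""
    # Per-feature final score: the protected feature's minimal test score.
    per_feature = {feature: min(scores)
                   for feature, scores in test_scores_per_feature.items()}
    # Final fairness score: the minimal final score among the protected features.
    return min(per_feature.values())

# Usage example with illustrative numbers:
print(aggregate_fairness_scores({"gender": [0.92, 0.85, 0.97], "race": [0.88, 0.90]}))  # -> 0.85
```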
The above examples and description have of course been provided only for the purpose of illustration, and are not intended to limit the invention in any way. As will be appreciated by the skilled person, the invention can be carried out in a great variety of ways, employing more than one technique from those described above, all without exceeding the scope of the invention.
Number | Date | Country
---|---|---
63075304 | Sep 2020 | US