MACHINE LEARNING MODEL FAIRNESS AND EXPLAINABILITY

Information

  • Patent Application
  • Publication Number
    20230105547
  • Date Filed
    September 12, 2022
  • Date Published
    April 06, 2023
Abstract
Methods, systems, and apparatus, including computer programs encoded on computer-storage media, for machine learning model fairness and explainability. In some implementations, a method includes obtaining data relating to a plurality of potential borrowers; providing the data to a trained machine learning model; obtaining, through the trained machine learning model’s processing of the provided data, one or more outputs of the trained machine learning model; automatically generating a report that explains the one or more outputs of the trained machine learning model with respect to one or more fairness metrics and one or more accuracy metrics; and providing the automatically generated report for display on a user device.
Description
FIELD

This specification generally relates to enhancing the accuracy and fairness of decisions informed by machine learning models, and to automated methods for explaining the outputs of those models and assessing their suitability for use in decision-making.


BACKGROUND

Machine learning is widely used to solve complex problems. Neural network models, for example, generally include one or more layers that incrementally process input data to generate output data, with connections between layers being adjustable through incremental training to achieve target results. In this and other ways, machine learning models may be adapted to solve difficult problems arising across disciplines, including image recognition, language processing and translation, navigation, and the like.


SUMMARY

Although machine learning models may be effective at processing large amounts of data and providing outputs that can be used toward the solution of complex problems, the models are widely understood to lack transparency, and to suffer from the potential for algorithmic bias. In that regard, a trained machine learning model is widely understood to resemble a “black box,” in the sense that it produces outputs from inputs, but that the actual processing involved in producing those outputs is neither observable nor knowable to the user. For example, although model configuration and parameters may be known, the complexity of machine learning models prevents direct understanding of how a given model may have produced one output value as opposed to another when executed. This problem, which fundamentally arises from the nature of machine learning models themselves, is particularly concerning when the outputs of the models are used to inform difficult decisions impacting human lives. Among other reasons, this is because machine learning models may harbor hidden and unintended biases, and may also suffer from inaccuracies that are difficult to diagnose due to their lack of transparency.


The disclosure that follows generally relates to enhancing the capabilities of computer systems that train and/or execute machine learning models, by enabling those systems to automatically identify and explain potential solutions to the problems of bias and inaccuracy that frequently arise in the machine learning context, in terms that human users can understand and act upon.


Model developers often focus solely on predictive accuracy and, likewise, many machine learning algorithms are designed to minimize predictive loss. The result is models that make accurate predictions but that may also lead to unfair outcomes. One way to achieve fairer outcomes is to randomly assign an outcome instead of making a model-based prediction. However, such outcomes would be far less accurate than outcomes resulting from an accurate model, which can lead to undesired results. For example, in the lending business, this approach would amount to randomly approving or denying loans. While such a practice would result in perfect fairness, it would lead to undesirable business outcomes, harm to consumers who have been offered loans they cannot repay, and the denial of loans to consumers who can repay. What is needed is a way to more systematically explore the model design space in consideration of both objectives: predictive accuracy and fairness of outcomes. The methods disclosed herein provide this ability for any kind of model.


For example, statistical analyses on the inputs and outputs of machine learning models may be employed for purposes of training a de-biased model, and for purposes of providing explanations of model outputs, both with respect to the training process and with respect to execution during production. For instance, statistical analyses may be applied using an adversarial model operating on the output of a primary model trained to generate predictions. The adversarial model may be trained to predict an undesirable outcome from the predictions of the primary model (e.g., a regression model, a classification model, among others), and may be used during training operations, such as backpropagation or gradient boosting, to update the weights and parameters of the primary model based on the predictions of the adversary, reducing undesirable outcomes while also maintaining accuracy. Such undesirable outcomes could include discrimination or disparate impact based on race and ethnicity, gender, age, military status, and other protected bases, for example: lower approval rates for women applicants than male applicants, or more erroneous property value predictions for majority Black and Latinx neighborhoods than for white, non-Hispanic neighborhoods.
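

As a rough illustration of this kind of statistical analysis (and not the claimed training procedure itself), the following Python sketch fits a simple adversary to predict a protected attribute from a primary model's scores. The synthetic arrays and the logistic-regression adversary are assumptions for illustration; an adversary AUC well above 0.5 would suggest that the scores carry information about the protected attribute.

    # Hypothetical sketch: probe a primary model's scores for protected-attribute leakage.
    # The synthetic arrays below stand in for real model outputs and attribute labels.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    primary_scores = rng.uniform(size=1000)          # stand-in for the primary model's predictions
    protected_attr = rng.integers(0, 2, size=1000)   # stand-in 0/1 sensitive attribute

    X = primary_scores.reshape(-1, 1)
    X_tr, X_te, z_tr, z_te = train_test_split(X, protected_attr, test_size=0.3, random_state=0)

    adversary = LogisticRegression().fit(X_tr, z_tr)   # the adversary sees only the score
    adv_auc = roc_auc_score(z_te, adversary.predict_proba(X_te)[:, 1])
    print(f"adversary AUC: {adv_auc:.3f}  (values near 0.5 suggest little leakage)")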


The described analytics and automated reporting provide transparency to modern machine learning models leveraged in regulated industries. For instance, predictive machine learning models employed for purposes of loan underwriting may cause inequities by rejecting (or recommending the rejection of) loans for qualified candidates. However, and as described in more detail below, machine learning models in lending and other contexts can be adversarially trained to reduce inequitable outcomes when processing data, and can be used in conjunction with techniques for automatically identifying and explaining factors involved in adverse actions. It is particularly important that lending decisions be fair with respect to race and ethnicity, gender, age, and other protected attributes, and these techniques can be employed to mitigate bias and ensure fairness in credit decisions.


Underwriting decisions are typically made using probability-of-default models, commonly formulated as a binary classification problem. That is, the dependent variable modeled, or outcome to be predicted, is a binary outcome (e.g., will the loan default or not?). But this issue extends to other modeled aspects of the lending process and to other modeling problems, including regression problems, where the dependent variable modeled is a continuous outcome or predicted value (e.g., what is the most likely price of the asset used as collateral in the loan?). In regression problems, it is undesirable if the model makes more accurate predictions for unprotected or majority populations and less accurate predictions for protected-status or minority populations, as the latter can lead to two unwanted results: suboptimal business outcomes and discrimination against certain groups. It is especially undesirable if the regression prediction errors for protected groups yield larger or smaller predictions on average than for their unprotected counterparts, which can lead to, for example, houses in majority-minority neighborhoods receiving systematically higher or lower estimated values, or lower-than-average credit lines being assigned to minority-owned businesses or protected-status consumer applicants.


Lenders may employ the analytics described herein in support of a detailed fair lending review of any model, and to de-bias models employed in underwriting loans, marketing financial products to consumers, estimating the probability of default for an already-booked loan, estimating a house price, estimating the value of a business, assigning credit line amounts for small business and consumer loans, and the like. The methods may be employed, for example, to assess disparate treatment and disparate impact on protected class borrowers, and to offer potential mitigation strategies that satisfy business objectives. For example, computerized statistical analyses may be employed to ensure that disparate treatment does not occur, to quantify the degree to which disparate impact does occur, and/or to identify and produce less discriminatory alternative models. The systems may also automatically generate and provide reports that explain the outputs of predictive machine learning models with respect to fairness and accuracy metrics.


These methods may be used, for example, with respect to classification models that provide information relating to predictions as to whether a potential borrower might default on a loan, and can be used to de-bias decision-making related to whether to offer the loan to the borrower. Similarly, these methods may be used with respect to regression models that provide information relating to an amount of credit that should be extended to a potential borrower, and can be used to de-bias decision-making related to the amount of credit to extend to the borrower.


The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features and advantages of the invention will become apparent from the description, the drawings, and the claims.





BRIEF DESCRIPTION OF THE DRAWINGS


FIGS. 1A-1C are schematic representations of systems, in accordance with embodiments.



FIGS. 2A-2B are schematic representations of methods, in accordance with embodiments.



FIGS. 3-7 are representations of models, in accordance with embodiments.



FIG. 8 shows a schematic depicting an example of a simultaneous model training process.





DETAILED DESCRIPTION

The following description of the preferred embodiments is not intended to limit the disclosure to these preferred embodiments, but rather to enable any person skilled in the art to make and use such embodiments.


1 Overview

In determining whether to deploy a model (e.g., a predictive model) in a real-world scenario that impacts people’s lives, fairness in how the model impacts those lives can be a consideration in deciding whether to deploy the model or continue model development. For example, whether a model favors a certain class of people (e.g., a class based on race, ethnicity, age, sex, national origin, sexual orientation, demographics, military status, etc.) over other classes of people may be a consideration in determining whether to deploy the model.


Fairness can also be a concern in deciding whether to deploy models that do not directly impact people’s lives, or that do not affect people’s lives at all. For example, for a predictive model that predicts a value related to efficacy of various drugs, it might be desired to train the model such that it does not favor drugs produced by a specific manufacturer.


Fairness shows up in many modeling problems, in subtly different ways. In classification problems, where a model is used to predict the probability that an observation belongs to a binary class (e.g., good loan/bad loan), it may be desirable to ensure the model achieves similar prediction accuracy between protected and unprotected demographic groups, e.g., groups defined by a sensitive attribute (women/men). It may also be desirable to ensure the distributions of probabilities assigned by the model are similar when comparing protected and unprotected groups, as measured, e.g., by a secondary model’s prediction loss (such as binary cross-entropy) or by a suitable distribution comparison metric such as the maximum Kolmogorov-Smirnov distance (max K-S), the Wasserstein distance, and the like. Likewise, it is often desirable for a model-based outcome, such as the rate of approved loan applications, to be as equal as practical between protected and unprotected groups. The methods described herein enable lenders to explore models that achieve these outcomes.
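

The group comparisons above can be computed with standard statistical tooling. The snippet below is a hypothetical sketch (the scores and group arrays are illustrative, not from this disclosure) of the distribution-comparison metrics mentioned, the maximum Kolmogorov-Smirnov distance and the Wasserstein distance, together with an approval-rate comparison at a fixed threshold.

    # Hypothetical sketch of the group comparisons described above.
    # scores: model probabilities; group: 1 for the protected group, 0 otherwise.
    import numpy as np
    from scipy.stats import ks_2samp, wasserstein_distance

    rng = np.random.default_rng(1)
    scores = rng.uniform(size=2000)
    group = rng.integers(0, 2, size=2000)

    prot, unprot = scores[group == 1], scores[group == 0]

    max_ks = ks_2samp(prot, unprot).statistic       # maximum Kolmogorov-Smirnov distance
    w_dist = wasserstein_distance(prot, unprot)     # Wasserstein distance between score distributions

    threshold = 0.6                                 # fixed risk threshold for the outcome comparison
    approval_ratio = (prot >= threshold).mean() / (unprot >= threshold).mean()

    print(f"max K-S: {max_ks:.3f}  Wasserstein: {w_dist:.3f}  approval-rate ratio: {approval_ratio:.3f}")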


In regression problems, where a model is used to predict a continuous value such as house price, income, value of a business, value of a car, optimal credit line, interest rate, and the like, fairness may be measured differently. One approach is to consider the error rates achieved by the model disaggregated by segment. In some implementations, a measure of fairness includes determining the ratio of mean squared errors between a protected group and its unprotected counterpart (e.g., the ratio of mean squared error for women to mean squared error for men). Error ratios close to 1 mean the regressor is just as good at predicting values for protected groups as for their unprotected counterparts, which is the desired outcome. In some implementations, a ratio of signed (non-squared) errors is used to preserve the direction of the error (under-estimate vs. over-estimate). In other variations, other suitable regression fairness metrics are employed. Several other examples exist in which it is useful to train a model to be fair with respect to a certain class of data sets.
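

A minimal sketch of these regression fairness measures follows, assuming hypothetical arrays y_true, y_pred, and a 0/1 protected-group indicator group; the control/protected direction of the MSE ratio here follows the report format shown later in this disclosure.

    # Hypothetical sketch of regression fairness metrics disaggregated by group.
    import numpy as np

    def mse_ratio(y_true, y_pred, group):
        """Ratio of control-group MSE to protected-group MSE; values near 1 are desired."""
        sq_err = (y_true - y_pred) ** 2
        return sq_err[group == 0].mean() / sq_err[group == 1].mean()

    def signed_error_ratio(y_true, y_pred, group):
        """Ratio of mean signed errors, preserving under- vs. over-estimation."""
        err = y_pred - y_true
        return err[group == 1].mean() / err[group == 0].mean()

    # Example with stand-in data:
    rng = np.random.default_rng(2)
    y_true = rng.normal(size=500)
    y_pred = y_true + rng.normal(scale=0.5, size=500)
    group = rng.integers(0, 2, size=500)
    print(mse_ratio(y_true, y_pred, group), signed_error_ratio(y_true, y_pred, group))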


Similarly, in many cases it is desirable to train a model such that it is invariant (at least within a degree) to changes in one or more selected features (e.g., sensitive attributes). For classification problems, the degree of the model’s invariance to a sensitive attribute is measured based on a ratio of outcome metrics, such as approval rates at a fixed risk threshold. For regression problems, the degree of the model’s invariance to a sensitive attribute is measured based on a ratio of error metrics, such as a ratio of mean squared errors.


Embodiments herein address the foregoing by providing new and useful systems and methods of training a model.


In some variations, the system (e.g., 100) includes at least one of a model training system (e.g., 110), a model (e.g., 111), and an adversarial classifier (e.g., 112).


In some variations the model (e.g., 111) can be a classification model. In other embodiments the model (e.g., 111) can be a regression model.


In some variations, the method (e.g., 200) includes at least one of: generating a model (e.g., S210); pre-training the model (e.g., S220); selecting suspect features (e.g., S230); pre-training an adversarial classifier (e.g., S240); evaluating the model by using the adversarial classifier (e.g., S250); generating a new model by using the adversarial classifier (e.g., S260); comparing the new model to a pre-existing model (e.g., S270); and providing a user interface (e.g., S280). In some variations, evaluating the model at S250 includes predicting a value of one or more sensitive attributes associated with an input data set used by the model 111 (or 111a-d) to generate an output (e.g., a score, a prediction, etc.), by using the output generated by the model.


In some examples, the adversarial classifier can be trained to predict values of sensitive attributes, and the model can be trained to minimize the accuracy of sensitive attribute value predictions determined by the adversarial classifier 112.


In some variations, the method can be used to train any suitable type of model. In some variations, the method can be used to train a more fair model that is combined with the original model in an ensemble to produce a more fair outcome.


In some variations, the method can be used to train models that satisfy business constraints (e.g., credit decisioning business constraints, hiring business constraints, etc.)


Embodiments herein provide a practical application of adversarial training techniques that includes practical systems and methods for: performing adversarial training for various types of predictive models; adversarial training of the predictive model in accordance with model constraint parameters that specify one or more features whose presence in the model and/or values should not be changed during adversarial training; selection of an adversarial-trained model that better satisfies accuracy and fairness metrics; automatic generation of reports justifying selection of an adversarial-trained model for use in production; and an operator device user interface that displays fairness metrics, accuracy metrics, and economic projections for adversarial-trained models and that receives user input for parameters used during adversarial training. In some variations, the system (e.g., 110) provides a Software as a Service (SaaS) offering (e.g., via an application server 114) that allows an operator to perform adversarial training on a model provided by the operator, and provides the operator with an adversarial-trained model that meets specified fairness and accuracy constraints. In some variations, the system (e.g., 110) generates reports describing the original model, the adversarial training process, the resulting model, and model analysis and comparisons such as: showing the difference in fairness and accuracy between the original model and the more fair alternative, the importance of each input variable in the original model and the more fair model, and other comparisons and analyses related to the original model and fair alternatives.


The adversarial training techniques described herein can be applied to models used to make predictions in which fairness is a factor for deciding whether to permit the model for use in production. The embodiments herein can be applied to predictive models for use in decisions related to: credit lending, residential leasing, insurance applications, hiring, employment, fraud detection, admissions (e.g., school admissions), scholarship awards, advertising, home sales, drug testing, scientific research, medical results analysis, and the like.


2 Benefits

Variations of this technology can afford several benefits and/or advantages.


First, by performing adversarial-training for various types of models, fairness of predictive models of various types can be improved.


Second, by virtue of providing a user interface (as described herein) for adversarial training of a model, usability can be improved in generation of more fair models.


Third, by virtue of automatic generation of model selection reports that include fairness and accuracy metrics, and economic projections (as described herein), decisions to deploy a model into production can be more easily justified.


Fourth, by performing adversarial training of logistic regression models (as described herein), existing logistic regression models can be retrained for improved fairness, without the need to migrate to neural network models.


Fifth, by performing adversarial training of tree models (as described herein), existing tree models can be retrained for improved fairness, without the need to migrate to neural network models.


Sixth, by performing adversarial training in accordance with specified model constraints, computational complexity and performance of adversarial training can be improved. Moreover, adversarial training can be constrained to training of models that satisfy constraints for deploying the model into a production environment.


3 System

Various systems are disclosed herein. In some variations, the system can be any suitable type of system that uses one or more of artificial intelligence (AI), machine learning, predictive models, and the like. Example systems include credit systems, drug evaluation systems, college admissions systems, human resources systems, applicant screening systems, surveillance systems, law enforcement systems, military systems, military targeting systems, advertising systems, customer support systems, call center systems, payment systems, procurement systems, and the like. In some variations, the system functions to train one or more models. In some variations, the system functions to use one or more models to generate an output that can be used to make a decision, populate a report, trigger an action, and the like.


The system can be a local (e.g., on-premises) system, a cloud-based system, or any combination of local and cloud-based systems. The system can be a single-tenant system, a multi-tenant system, or a combination of single-tenant and multi-tenant components.


In some variations, the system (e.g., 100) (or a component of the system, e.g., the model training system 110) can be an on-premises modeling system, a cloud-based modeling system, or any combination of on-premises and cloud-based components. In some embodiments, the modeling system includes model development and model execution systems. In some embodiments, the model development system provides a graphical user interface (e.g., 115) which allows an operator (e.g., via 120) to access a programming environment and tools such as R or python, and contains libraries and tools which allow the operator to prepare, build, explain, verify, publish, and monitor machine learning models. In some embodiments, the model development system provides a graphical user interface (e.g., 115) which allows an operator (e.g., via 120) to access a model development workflow that guides a business user through the process of creating and analyzing a predictive model. In some embodiments, the model execution system provides tools and services that allow machine learning models to be published, verified, executed and monitored. In some embodiments, the modeling system includes tools that utilize a semantic layer that stores and provides data about variables, features, models and the modeling process. In some embodiments, the semantic layer is a knowledge graph stored in a repository. In some embodiments, the repository is a storage system. In some embodiments, the repository is included in a storage medium. In some embodiments, the storage system is a database or file system and the storage medium is a hard drive.


In some variations, the system is a model training system.


In other variations, the system includes a model training system.


In some variations, the system functions to train a model to reduce impact of one or more identified model attributes (inputs) on output values generated by the model (and optionally use the trained model). In some variations, the system can re-train the model based on information computed by using an adversarial classifier (e.g., adversarial classifier prediction loss information).


In some variations, the system (e.g., 100) includes one or more of: a model training system (e.g., 110), a model (e.g., 111), an adversarial classifier (e.g., 112), a data storage device (e.g., 113), an application server (e.g., 114), a user interface (e.g., 115), and an operator device (e.g., 120). In some variations, the components of the system can be arranged in any suitable fashion.


In some variations, the system includes an adversarial network. In some variations, the model (e.g., 111) and the adversarial classifier (e.g., 112) form an adversarial network in which the model 111 generates a prediction based on features included in an input data set, and the adversarial classifier 112 predicts a value for a sensitive attribute (associated with the input data set) based on the prediction. In some implementations, the sensitive attribute is not available to the model 111. In some implementations, an alternative model is generated in which the alternative model’s training objective is to increase the error rate of the adversarial classifier. In some implementations, adversarial training data (e.g., a data set that identifies historical model predictions and corresponding values for one or more sensitive attributes) serves as the initial training data for the adversarial classifier (e.g., at S240), and training the adversarial classifier includes presenting the adversarial classifier with samples from the adversarial training data, until the adversarial classifier achieves acceptable accuracy. In some variations, the model (e.g., 111) is pre-trained with an initial training data set (e.g., at S220). After pre-training, both the model and the adversarial classifier can be adversarially trained (e.g., at S262 and S261, respectively). In some variations, one or more of the model and the adversarial classifier can be trained (e.g., at S262 and S261, respectively) by iteratively calculating a gradient of an objective function and reducing a value of at least one parameter of the model by an amount proportional to the calculated gradient (e.g., by performing a gradient descent process, or any other suitable process). In some variations, gradients computed during adversarial training (e.g., gradients for a gradient descent process) (e.g., at S261, S262) are computed by performing a process disclosed in U.S. Application No. 16/688,789 (e.g., a generalized integrated gradients method, in some embodiments, using a Radon-Nikodym derivative and a Lebesgue measure). However, any suitable training process can be performed.



FIGS. 1A-1C show exemplary systems 100 in accordance with variations.


The model training system 110 functions to train the model (e.g., 111) (e.g., pre-training, adversarial training, etc.). In some variations, the model training system 110 functions to train the model (e.g., 111) by using an adversarial classifier model (e.g., 112). The model training system can be any suitable type of model training system that uses data from the adversarial classifier 112 (e.g., data identifying a prediction loss of the adversarial classifier) to train a model. Example model training systems can include Python modelling systems, R modelling systems, and the like. In some variations, the model training system includes a training set selector that selects training data based on data received from the adversarial classifier 112. In some implementations, the training set selector removes attributes from the training set based on information received from the adversarial classifier (e.g., information identifying a prediction accuracy of the adversarial classifier for the removed attribute). In some implementations, the training set selector includes one or more rules that are used by the training selector to remove attributes from the training data sets based on the information received from the adversarial classifier. For example, if the adversarial classifier can accurately predict a value of a sensitive attribute from an output generated by the model, the model training system can remove the sensitive attribute from the training data used to train the model (e.g., 111).
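

As one rough illustration of the selector rule just described (not a definitive implementation), the sketch below drops any training attribute whose value the adversarial classifier predicts with accuracy above a chosen threshold; the pandas DataFrame, the accuracy dictionary, and the threshold value are assumptions.

    # Hypothetical sketch of a training set selector driven by adversary accuracy.
    # training_df: pandas DataFrame of training attributes.
    # adversary_accuracy: maps attribute name -> the adversarial classifier's accuracy
    # when predicting that attribute from the model's output.
    def select_training_columns(training_df, adversary_accuracy, accuracy_threshold=0.6):
        """Drop attributes that the adversarial classifier predicts too accurately."""
        to_drop = [attr for attr, acc in adversary_accuracy.items()
                   if attr in training_df.columns and acc > accuracy_threshold]
        return training_df.drop(columns=to_drop)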


In some variations, the adversarial classifier 112 functions to predict a value of one or more sensitive attributes associated with an input data set used by the model 111 (or 111a-d) to generate an output (e.g., a score, a prediction, etc.), by using the output generated by the model. For example, the adversarial classifier can be trained to determine whether a credit prediction output by the model 111 relates to a female credit applicant (in this example the sensitive attribute is “sex”, and the value is one of “male” or “female”).


In some variations, the input to the adversarial classifier is a model output and a sensitive attribute. In some variations, the sensitive attribute is a feature associated with each row in the training data for the pre-existing model (M) but not contained in the training data for the pre-existing model (M).


In some variations, the adversarial classifier is a machine learning model. However, the adversarial classifier can be any suitable type of system that can predict sensitive attribute values.


In some variations, the adversarial classifier 112 is a model that predicts sensitive attributes based on a model score of a model being evaluated (e.g., 111, 111a-d).


A sensitive attribute can be a feature that identifies a class of individuals (e.g., a class based on race, ethnicity, age, sex, national origin, sexual orientation, demographics, military status, etc.), a manufacturer of a product, or any type of information that should not affect output (e.g., a prediction) generated by the model. In some variations, a sensitive attribute need not be an input feature to the model (as is required, for example, in fair lending applications wherein the applicant’s protected class membership status is prohibited from being included as a model input variable). In some variations, the method disclosed herein provides a way to make the model more fair under the conditions required by ECOA and other fair lending regulations.


In some embodiments, a fair alternative model is trained simultaneously with the adversarial classifier. In some embodiments, the fair alternative model is trained on a subset of training rows, and then invoked to produce a score for each row, each score being evaluated by one or many adversarial classifiers each designed to predict a protected attribute or combination of attributes based on the fair alternative model score. In some embodiments the adversarial classifiers predict a protected attribute based on the model score. In some embodiments, the adversarial classifiers are trained simultaneously with the fair alternative model; the adversarial classifier is trained based on the fair alternative model score and a known protected attribute, each corresponding to the same row used to generate the score from the fair alternative model. In some embodiments, the error rate of the adversarial classifier is combined in the objective function of the fair alternative model after the initial training epoch, prior to updating the fair alternative model (through back propagation or other means), and the process continues by selecting successive samples of training data, training the fair alternative model, and training the adversarial classifier as described, until the training data is exhausted. In some embodiments, the objective function in the fair alternative model is a linear combination of the output of the model’s original objective function and the error rate of the adversarial classifier(s). In some embodiments there is an adversarial classifier for each protected attribute. In other embodiments, there is one adversarial classifier predicting a binary flag representing all the protected attributes. In some variations the adversarial classifier is a neural network. In some variations the more fair alternative model is a neural network. In some variations, a series of more fair alternative models are produced by adjusting the linear combination of the fair alternative model’s loss function and the adversarial model’s accuracy. In some embodiments, the number of fair alternative models and the linear combination parameters are selected by a user operating a graphical user interface. In some embodiments each fair alternative model is analyzed and reports are generated to help a user determine whether each model produces stable results over time, produces the desired business results, and is otherwise suitable for use in production. In some embodiments, a best alternative model is selected using pre-defined selection criteria based on attributes of the model and the business problem.
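

A minimal sketch of this simultaneous training loop appears below, assuming (as some variations above describe) that both the fair alternative model and the adversarial classifier are small neural networks, and that the fair model's objective is a linear combination of its original loss and a term that rewards adversary error. The architecture sizes, the weight lam, and the synthetic tensors are illustrative assumptions, not the reference implementation.

    # Minimal sketch of simultaneous adversarial training (assumptions noted above).
    import torch
    import torch.nn as nn

    fair_model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 1))
    adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
    opt_f = torch.optim.Adam(fair_model.parameters(), lr=1e-3)
    opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    lam = 0.5  # weight of the fairness term; varied to produce a series of alternatives

    X = torch.randn(256, 20)                   # stand-in features
    y = torch.randint(0, 2, (256, 1)).float()  # stand-in outcome (e.g., good loan / bad loan)
    z = torch.randint(0, 2, (256, 1)).float()  # stand-in protected attribute

    for step in range(100):
        # 1) Update the adversary to predict z from the fair model's score.
        score = fair_model(X).detach()
        adv_loss = bce(adversary(score), z)
        opt_a.zero_grad()
        adv_loss.backward()
        opt_a.step()

        # 2) Update the fair model: original objective minus the adversary's loss,
        #    preserving accuracy while making the adversary less accurate.
        score = fair_model(X)
        combined = bce(score, y) - lam * bce(adversary(score), z)
        opt_f.zero_grad()
        combined.backward()
        opt_f.step()

Sweeping lam over a range of values corresponds to producing the series of more fair alternative models described above.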


In some variations, one or more of the components of the system are implemented as a hardware device that includes one or more of a processor (e.g., a CPU (central processing unit), GPU (graphics processing unit), NPU (neural processing unit), etc.), a display device, a memory, a storage device, an audible output device, an input device, an output device, and a communication interface. In some variations, one or more components included in hardware device are communicatively coupled via a bus. In some variations, one or more components included in the hardware system are communicatively coupled to an external system (e.g., an operator device 120) via the communication interface.


The communication interface functions to communicate data between the hardware system and another device (e.g., the operator device 120, a model execution system, etc.) via a network (e.g., a private network, a public network, the Internet, and the like).


In some variations, the storage device includes the machine-executable instructions of one or more of a model 111, an adversarial classifier 112, a user interface 115, an application server 114, and a training module that functions to perform at least a portion of the method 200 described herein.


In some variations, the storage device includes data 113. In some variations, the data 113 includes one or more of training data, outputs of the model 111, outputs of the adversarial classifier 112, accuracy metrics (as described herein), fairness metrics (as described herein), economic projections (as described herein) and the like.


The input device functions to receive user input. In some variations, the input device includes at least one of buttons and a touch screen input device (e.g., a capacitive touch input device).


4 Method

In some variations, the method functions to train at least one model (e.g., 111). In some variations, the method functions to train at least one model such that it is invariant (at least within a degree) to changes in one or more selected features (attributes).


In some variations, the method (e.g., 200) includes at least one of: generating a model (e.g., S210); pre-training the model (e.g., S220); selecting suspect features (e.g., S230); pre-training an adversarial classifier (e.g., S240); evaluating the model by using the adversarial classifier (e.g., S250); generating a new model by using the adversarial classifier (e.g., S260); comparing the new model to a pre-existing model (e.g., S270); and providing a user interface (e.g., S280). In some variations, S260 includes one or more of: re-training (e.g., adversarial training) the adversarial classifier (e.g., S261); modifying the model (e.g., adversarial training) (e.g., S262) (e.g., generated at S210); evaluating the modified model (e.g., S263); determining whether the modified model satisfies constraints (e.g., S264); and providing a new model (e.g., S265). In some variations, if constraints are not satisfied at S264, processing returns to S261, and another training iteration is performed. In some variations, if constraints are satisfied at S264, processing proceeds to S265. In some variations, evaluating the model at S263 includes predicting a value of one or more sensitive attributes associated with an input data set used by the model 111 (or 111a-d) to generate an output (e.g., a score, a prediction, etc.), by using the output generated by the model.


In some variations, at least one component of the system (e.g., 100) performs at least a portion of the method (e.g., 200).



FIGS. 2A-2B are representations of a method 200, according to variations.


In some variations, S210 functions to generate an initial model (e.g., 111a shown in FIG. 1B) to be evaluated at S250 by using an adversarial classifier (e.g., 112). In some variations, the initial model generated at S210 is modified (e.g., model parameters are modified to generate one or more new models 111b-d) at S260 (e.g., based on the evaluation at S250). In some implementations, generating the initial model includes defining the model (e.g., by using Python, R, a model development system, a text editor, a workflow tool, a web application, etc.).


The initial model (e.g., 111a) generated at S210 can be any suitable type of model (e.g., as shown in FIGS. 3-7, or any other suitable type of model). The initial model has an initial set of model parameters (e.g., weights, support vectors, coefficients, etc.) that can be adjusted (e.g., during adversarial training) at S260.


In some variations, the initial model 111a can include one or more of: a tree model, a logistic regression model, a perceptron, a feed-forward neural network, an autoencoder, a probabilistic network, a convolutional neural network, a radial basis function network, a multilayer perceptron, a deep neural network, or a recurrent neural network, including: Boltzmann machines, echo state networks, long short-term memory (LSTM) networks, hierarchical neural networks, stochastic neural networks, and other types of differentiable neural networks, or any suitable type of differentiable or non-differentiable model. In some variations, the initial model is a supervised learning model. In some variations, the initial model is a discriminative model. In some variations, the initial model is a predictive model. In some variations, the initial model is a model that functions to predict a label given an example of input variables.


In some variations, the initial model 111a is a single model.


In some variations, the initial model 111a is an ensemble model (e.g., heterogeneous, homogeneous) that performs ensembling by any one of a linear combination, a neural network, bagging, boosting, and stacking. In some variations, the initial model 111a is an ensemble that includes differentiable and non-differentiable models. However, the initial model can be any suitable type of ensemble.


The initial model 111a can be a credit model. However, in some variations, the initial model 111a can be used for any suitable purpose, such as credit lending, residential leasing, insurance applications, hiring, employment, fraud detection, admissions (e.g., school admissions), scholarship awards, advertising, home sales, drug testing, scientific research, medical results analysis, and the like.


In some variations, the initial model 111a is a pre-existing model (M) (e.g., a credit model currently used in a production environment to generate credit scores).


In some variations, the initial model 111a is an alternative model (F) (e.g., a fair alternative) (e.g., an alternative model to a currently used credit model, etc.) that is an alternative to a pre-existing model (M), and F is trained based on input variables (x).


The alternative model (F) can be any suitable type of model. In some variations, the alternative model (F) includes one or more of a linear model, neural network, or any other differentiable model. In some variations, M and F are both differentiable models. In other variations, M and F are piecewise constant. In other variations M and F are piecewise differentiable. In other variations, M and F are ensembles of differentiable and non-differentiable models.


In some variations, the initial model 111a is an ensemble (E) that is trained based on input variables (x). The ensemble (E) can be any suitable type of ensemble. In some variations, the ensemble (E) includes one or more of a tree model and a linear model. In some variations, the ensemble (E) includes one or more of a tree model, a linear model, and a neural network model (either alone or in any suitable combination).


In an example, in the case of a pre-existing model (M) that does not have a gradient operator, or a model for which adversarial training is not directly applicable, an alternative model (F) is generated from the pre-existing model (M), such that the alternative model (F) is trainable by using the adversarial classifier 112. In a first example, a neural network model is generated from a logistic regression model, wherein the neural network is trained on the logistic regression model inputs x to predict the logistic regression model’s score. In a second example, a neural network model is generated from a tree-based model, wherein the neural network is trained on the tree-based model inputs x to predict the tree-based model’s score. Because these proxy models are differentiable it is possible to compute a gradient (e.g., of an objective function, etc.), which can be used during adversarial training (e.g., at S260) to adjust parameters of the proxy models.
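

The second example above, distilling a tree-based model into a differentiable proxy, might look roughly like the following sketch; the particular estimators and the synthetic data are assumptions chosen for illustration.

    # Hypothetical sketch: build a differentiable proxy for a non-differentiable tree model
    # by training a small neural network on the tree model's inputs to reproduce its score.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    X = rng.normal(size=(2000, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    tree_model = GradientBoostingClassifier().fit(X, y)    # stand-in pre-existing model (M)
    teacher_scores = tree_model.predict_proba(X)[:, 1]     # scores the proxy must reproduce

    proxy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
    proxy.fit(X, teacher_scores)                            # differentiable alternative (F)
    print("proxy fit R^2:", proxy.score(X, teacher_scores))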


In some variations, the initial model is piecewise differentiable. In such cases, a gradient used during adversarial training (e.g., at S260) (to adjust parameters of the initial model) is computed using a generalized integrated gradients process (as disclosed herein). The gradient can be based on a Radon-Nikodym derivative and a Lebesgue measure. In some variations, the initial model is an ensemble of a continuous model and a piecewise constant model. In some variations, the initial model is an ensemble of a continuous model and a discontinuous model. In some variations, the ensemble function is a continuous function such as a neural network.


S210 can include accessing a pre-existing model (M), using the pre-existing model (M) to generate model output values for at least one set of the training data set, and training the alternative model (F) (e.g., 111a) to predict the model output values generated by the pre-existing model (M) based on the data sets (and features) used to train the pre-existing model (M). In this manner, the alternative model (F) is generated such that it predicts model outputs that would have been generated by the pre-existing model (M).


S210 can include: selecting a model type of the alternative model (F), wherein training the alternative model (F) includes training the alternative model (F) as a model of the selected model type. In some variations, the model type of the alternative model (F) is the same as the pre-existing model (M). In some variations, the model type is automatically selected. In some implementations, the model type is automatically selected based on a list of candidate model types and a set of optimization and selection criteria, in some variations, provided by an operator. In some variations, the optimization criteria includes a fairness metric and an accuracy metric (or a fairness and an economic metric), and selection criteria include thresholds set based on the optimization criteria applied to the original model, or any computable function on optimization criteria, such as the ratio of the fairness metric improvement and the decrease in profitability, economic criteria, demographic criteria, etc. In some variations, economic criteria includes one or more of: expected net profit from a loan, a value at risk calculation, a predefined economic scenario, etc. In some variations, demographic criteria includes one or more of: race, ethnicity, gender, age, military status, disabled status, marriage status, sexual orientation, geographic criteria, membership in a group, religion, political party, and any other suitable criteria. In some variations, the model type is selected based on received user input (e.g., via the operator device 120, the user interface 115, etc.). In some variations, the alternative model (F) is a neural network. In some variations, the alternative model (F) is an ensemble model. In some variations, the alternative model (F) is an XGBoost model. In some variations, the alternative model (F) is then evaluated at S250.
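

One of the computable selection criteria named above, the ratio of the fairness-metric improvement to the decrease in profitability, could be expressed along the lines of the sketch below; the dictionary keys and the tradeoff threshold are hypothetical.

    # Hypothetical sketch of a selection criterion: fairness gained per unit of profit lost,
    # measured relative to the original model.
    def select_best_candidate(original, candidates, min_tradeoff=1.0):
        """Return the candidate model whose fairness-gain-to-profit-loss ratio is best."""
        best, best_ratio = None, min_tradeoff
        for candidate in candidates:
            fairness_gain = candidate["fairness"] - original["fairness"]
            profit_loss = max(original["profit"] - candidate["profit"], 1e-9)
            ratio = fairness_gain / profit_loss
            if ratio > best_ratio:
                best, best_ratio = candidate, ratio
        return best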


In embodiments, S210 includes training a pre-existing regression model (M). In other embodiments, S210 includes training at least one alternative regression model (F). In some embodiments, M and F are neural network regressors. In other embodiments, M and F are gradient boosting models such as XGBoost and LightGBM regressors. In variations, the fairness metric is a ratio of mean squared errors (MSE) between protected and control groups. With such a metric, a value of 1 implies that the model is equally good at predicting outcomes for control and protected populations, which is the desired outcome. Values less than 1 imply that the model is better at predicting outcomes for the control group relative to the protected group. Therefore, it is advantageous to increase this fairness metric as near as possible to 1, while also maximizing the accuracy metric (e.g., minimizing the overall MSE). In embodiments, the system can generate reports including the accuracy and fairness metrics for regression models, for example:

  • Accuracy:
    • train MSE: 3.267
    • test MSE: 4.077
  • Fairness:
    • Gender, train (MSE_control/MSE_protected): 0.385
    • Gender, test (MSE_control/MSE_protected): 0.377
    • Race, train (MSE_control/MSE_protected): 0.359
    • Race, test (MSE_control/MSE_protected): 0.355


S210 can include recording a model type of the initial model.


In some variations, the method optionally includes S220, which functions to perform non-adversarial pre-training for an initial model (e.g., generated at S210, accessed from a storage device, etc.) on a training data set (e.g., a full training data set) for the model, to generate the historical model output values used to train (e.g., pre-train at S240) the adversarial classifier.


S220 can include accessing the initial model (e.g., 111a) from a storage device (e.g., 113) of the system 100. Alternatively, S220 can include accessing the initial model (e.g., 111) from a system external to the system 100.


In some variations, the method optionally includes S230, which functions to select suspect features. In some variations, S230 functions to select features that are known to be (or suspected to be) sensitive attributes for training the adversarial classifier 112. In some variations, the sensitive attributes are predetermined. In some variations, the sensitive attributes are dynamically selected during training. In some variations, information identifying sensitive attributes is received via a user input device (e.g., of the operator device 120). In some variations, information identifying the sensitive attributes is generated by using a model (e.g., a machine learning model, etc.). In some variations, the model (e.g., initial model) implements the BISG method as described in the Consumer Financial Protection Bureau publication, “Using publicly available information to proxy for unidentified race and ethnicity”. In some variations, the sensitive attributes include at least one of: race, ethnicity, gender, age, military status, and a demographic attribute.


In variants, for each identified sensitive attribute, corresponding ground truth labels (Zlabel) for the sensitive attribute are accessed for each training data row. In some implementations, a data object that represents the ground truth labels for each training data row prediction (e.g., a Ylabel object class) includes the ground truth labels (Zlabel) for the sensitive attributes. For example, the Ylabel object class can be extended to include additional pieces of information, such as the ground truth labels (Zlabel) for each sensitive attribute. In this manner, the ground truth labels (Zlabel) for each identified sensitive attribute can be passed to the training as components of the object used as the training data parameter. By virtue of the foregoing, an existing training function can be extended to implement fairness-based training without changing the interface used by the training function. This simple yet effective design allows existing gradient-boosted tree implementations (e.g., AdaBoost, XGBoost, CatBoost, LightGBM, etc.) to be extended to implement the method as described herein. It will be appreciated by data science practitioners that the method disclosed herein is easily implemented as an elegant extension to existing machine learning algorithm implementations, and does not require a full re-coding or re-implementation of the base algorithm.
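

A loose sketch of the extended label object described here follows; the class name and attribute layout are assumptions, and a production version would also implement __array_finalize__ so that the sensitive-attribute labels survive slicing and other array operations.

    # Hypothetical sketch: outcome labels that also carry Zlabel for each sensitive attribute,
    # so an existing training interface that expects a plain label array is unchanged.
    import numpy as np

    class FairLabels(np.ndarray):
        """Ndarray of outcome labels (Ylabel) that also carries Zlabel per sensitive attribute."""
        def __new__(cls, y, zlabels):
            obj = np.asarray(y, dtype=float).view(cls)
            obj.zlabels = dict(zlabels)  # e.g., {"gender": np.array([...]), "race": np.array([...])}
            return obj

    # Existing code that expects a plain label array keeps working:
    y = FairLabels([1, 0, 1, 1], {"gender": np.array([0, 1, 1, 0])})
    print(y.mean(), y.zlabels["gender"])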


S240 functions to perform non-adversarial pre-training for an adversarial classifier (e.g., 112) that is used to evaluate the initial model (e.g., 111a). In some variations, S240 includes pre-training an adversarial classifier that predicts sensitive attribute values (for sensitive attributes) from model output generated by the pre-trained model (e.g., trained at S220, or otherwise pre-trained) (e.g., the initial model). In some variations, S240 includes pre-training an adversarial classifier that predicts values for each suspect feature selected at S230, from model output generated by the pre-trained model (e.g., trained at S220, or otherwise pre-trained) (e.g., the initial model).


In some variations, the adversarial classifier 112 is trained by using historical model output values generated by the initial model, and corresponding historical sensitive attribute values (for each historical model output value). In some variations, the historical model output values and corresponding sensitive attribute values are stored (e.g., stored in storage device 113).


In some variations, sensitive attribute values used to train the adversarial classifier 112 are included in (or associated with) the training data set used to pre-train the initial model. In some variations, sensitive attribute values used to train the adversarial classifier 112 (e.g., at S240) are not included in the training data set used to pre-train the initial model. In some variations, the sensitive attribute values used to train the adversarial classifier 112 are based on the training data set used to pre-train the initial model. In some variations, the sensitive attribute values used to train the adversarial classifier 112 are generated based on a machine learning model and the training data set used to pre-train the initial model.


In some variations, the sensitive attribute values that correspond to historical model output values are determined by performing the BISG method as described in the Consumer Financial Protection Bureau publication, “Using publicly available information to proxy for unidentified race and ethnicity”. In some variations, the sensitive attribute values are provided by the direct report of the applicant or user. In some variations, the sensitive attribute values are computed using a predictive model. In some variations, the sensitive attribute values are boolean values (e.g., African American=TRUE). In some variations, boolean sensitive attributes are determined based on a likelihood estimated by a model and a configured threshold value (e.g., P(African American) > 0.8 implies African American=TRUE). In other variations, the sensitive attribute values are the probabilities generated by a model (e.g., 85% likely African American; 5% likely White, Non-Hispanic). In some variations, the sensitive attribute values are retrieved from a database or web service. In some variations, the models and analysis are run based on each protected attribute identification method, and based on various confidence thresholds of the protected attribute identification method outputs, considering each data set, in combination, to provide a thorough evaluation of all the options.


In some variations, S250 functions to evaluate the initial model by using the adversarial classifier trained at S240. In some variations, S250 includes determining whether the initial model satisfies one or more constraints. In some variations, the constraints include fairness constraints. In some implementations, fairness constraints include prediction accuracy thresholds for one or more sensitive attributes whose values are predicted by the adversarial classifier based on outputs from the initial model, and the initial model satisfies the fairness constraints if the prediction accuracy of the adversarial classifier 112 for the initial model is below one or more of the thresholds. However, the initial model can otherwise be evaluated by using the adversarial classifier 112.


In some variations, responsive to a determination at S250 that the initial model satisfies the constraints, the initial model is used in a production environment (e.g., provided to one or more of an operator device and a model execution system).


In some variations, responsive to a determination at S250 that the initial model does not satisfy one or more constraints, a new model is generated (e.g., at S260).


In some variations, S260 functions to generate at least one new model (e.g., 111b-d) by using the adversarial classifier 112. In some implementations, the model training system 110 generates at least one new model. The new model can be a version of the initial model with new model parameters, a new model constructed by combining the initial model with one or more additional models in an ensemble, a new model constructed by adding one or more transformations to an output of the initial model, a new model having a different model type from the initial model, or any other suitable new model having a new construction and/or model parameters (examples shown in FIGS. 3-7).


In a first variation, at S260, the model training system 110 generates the new model by re-training the initial model (e.g., 111a) by performing an adversarial training process (e.g., at S262). In some implementations, re-training the initial model includes selecting a new set of model parameters for the new model.


In a second variation, at S260, the model training system 110 generates the new model based on the initial model (e.g., as shown in FIGS. 3-7), and initially trains the new model (or one or more sub-models of the new model) by using training data (e.g., by using the training data set used to pre-train the initial model at S220, another training data set, etc.). In some variations, after initially training the new model (that is based on the initial model), the new model is re-trained by performing an adversarial training process (e.g., at 262) (e.g., by selecting new model parameters for the new model). FIGS. 3-7 show examples of models that can be generated based on the initial model.


In a first example, the new model (e.g., shown in FIG. 4) includes a transformation (e.g., a smoothed approximate empirical cumulative distribution function (ECDF)) that transforms the distribution of output values of the initial model.


In a second example, the new model (e.g., shown in FIG. 5) is a compound model in which the outputs of the initial model and one or more submodels are ensembled together (e.g., using a simple linear stacking function).


In a third example, the new model (e.g., shown in FIG. 6) is a compound model in which the outputs of the initial model and one or more submodels are ensembled together (e.g., using a simple linear stacking function), and the distribution of output values of the ensemble is transformed (e.g., by a smoothed approximate empirical cumulative distribution function (ECDF)).


In a fourth example, the new model (e.g., shown in FIG. 7) is a compound model in which the outputs of the initial model and one or more submodels (and optionally the input data (base signals)) are ensembled together (e.g., using neural network stacking function), and the distribution of output values of the ensemble is transformed (e.g., by a smoothed approximate empirical cumulative distribution function (ECDF)).


In a fifth example, the new model (e.g., 111e) is a modified version (e.g., 111b-d) of the initial model (e.g., 111a) (e.g., a version of the initial model having different parameters).


In a sixth example, the new model (e.g., 111e) is an ensemble of a pre-existing model (M) and a modified version (F) (e.g., 111b-d) of the initial model (e.g., 111a). In some variations, M and F are both differentiable models. In some variations, M and F are piecewise constant. In some variations, M and F are piecewise differentiable. In some variations, the ensemble includes at least one differentiable model and at least one non-differentiable model. In a first example of the ensemble, the ensemble is a linear combination of F and M model outputs. In a second example of the ensemble, the ensemble is a composition of F and M (e.g., F(M(x), x)). However, any suitable ensemble of F and M can be generated as the new model (e.g., 111e).


In a seventh example, the initial model (e.g., 111a) is an ensemble (E), and generating the new model (e.g., 111e) includes learning a new model F(E(x), x) that maximizes the AUC (area under the curve) of F while minimizing the accuracy of the adversarial classifier 112. In embodiments, 111e includes learning a new model F that minimizes mean-squared error while also minimizing the accuracy of the adversarial classifier 112. (Recall that if the adversarial classifier is accurate, the adversarial classifier is detecting bias in the results, so it is advantageous to minimize the accuracy of the adversarial classifier.) In some implementations, learning the new model includes constructing a model that generates an output based on 1) the input data x, and 2) an output generated by the ensemble from the input data x (e.g., E(x)); the new model F(E(x), x) (or one or more sub-models of F(E(x), x)) is trained by using training data (e.g., by using the training data set used to pre-train the initial model at S220, another training data set, etc.). In some variations, after initially training the new model F(E(x), x), the new model F(E(x), x) is re-trained by performing an adversarial training process (e.g., at S262) by selecting new model parameters for the new model F(E(x), x) (and E(x)) that maximize the AUC of E (or minimize the MSE of E) while minimizing the accuracy of the adversarial classifier 112.


In an eighth example, the initial model (e.g., 111a) is an ensemble (E), and generating the new model (e.g., 111e) includes learning a new model F(x) that includes E as a submodel, and combining F and E within an ensemble (FE) to produce a model score.


In some implementations, learning the new model F(x) includes initially training F(x) by using training data (e.g., by using the training data set used to pre-train the initial model at S220, another training data set, etc.); and after initially training the new model F(x), the new model F(x) is re-trained by selecting new model parameters for the new model F(x) (and E(x)) that maximize the AUC of E (for classification problems) or minimize the MSE of E (for regression problems), while minimizing the accuracy of the adversarial classifier 112.


In some variations, the adversarial classifier is an ensemble of classifiers, including at least one classifier for each protected status or sensitive attribute.


In some variations, the ensemble FE is a linear combination (e.g., w*F(x) + (1-w)*E(x)). In some variations, the coefficients of the linear combination FE are determined based on a machine learning model (e.g., a ridge regression, etc.). In some variations, the ensemble FE is ensembled based on a neural network, including, without limitation: a perceptron, a multilayer perceptron, or a deep neural network, etc.
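

For illustration only, the following Python sketch shows one way such a linear stacking ensemble FE could be computed; the weight w, the function names, and the use of a ridge regression to fit the stacking coefficients are assumptions of the sketch rather than requirements of the method.

import numpy as np
from sklearn.linear_model import Ridge

def ensemble_score(f_scores, e_scores, w):
    # Linear combination FE(x) = w*F(x) + (1 - w)*E(x)
    return w * np.asarray(f_scores) + (1.0 - w) * np.asarray(e_scores)

def fit_stacking_coefficients(f_scores, e_scores, y_true):
    # One variation described above: determine the coefficients of the linear
    # combination FE with a machine learning model (here, a ridge regression).
    X = np.column_stack([f_scores, e_scores])
    return Ridge(alpha=1.0).fit(X, y_true)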


In some implementations, the model is a tree ensemble, and executing the training function (e.g., “xgboost.train()” ) includes performing a tree boosting training process (e.g., a gradient tree boosting training process) that includes iteratively adding tree sub-models to a tree ensemble until output generated by the tree ensemble for each of a plurality of training data rows satisfies training stopping criteria. In some implementations, the training function sequentially adds additional trees to the tree ensemble, i.e., a “warm start”. In some implementations, the training function removes all present trees and rebuilds from scratch, i.e., a “cold start”.


The tree boosting training process can include training the initial tree model to fit the training data, and performing one or more training iterations. Each training iteration can include: training at least one adversarial classifier to predict a sensitive attribute value for an input row based on a prediction (generated by the tree ensemble) for the input row; training a new tree sub-model to predict a combination of a tree ensemble loss function value and one or more adversarial classifier loss function values for a given row; and adding the new tree sub-model to the tree ensemble. If a stopping condition is satisfied, then the training process ends. Otherwise, another training iteration can be performed. In variations, the stopping condition can be related to model fairness and/or model accuracy.


In some implementations, the adversarial classifier loss function can be: −P · [Zlabel · log(Zpred) + (1 − Zlabel) · log(1 − Zpred)], wherein P is a scalar that represents a fairness penalty parameter. A same fairness penalty parameter can be used for all adversarial classifier loss functions included in the evaluation metric. Alternatively, one or more of the adversarial classifier loss functions can have different fairness penalty parameters. In this example, Zpred = σ(α·Ypred + β), where σ represents the sigmoid function and α and β are the model parameters that are learned during training of the adversarial classifier, by using the sensitive attribute ground truth labels (Zlabel). In this implementation, the gradient function for the adversarial classifier loss function is: −P · (Zpred − Zlabel) · α · Ypred · (1 − Ypred), and the hessian function for the adversarial classifier loss function is the second-order derivative of the adversarial classifier loss function.
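

For illustration only, the following Python sketch evaluates the adversarial classifier loss and its gradient exactly as written in the expressions above; the function names are illustrative, and the hessian (the second-order derivative) is omitted here.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adversarial_classifier_loss(y_pred, z_label, alpha, beta, p):
    # Zpred = sigma(alpha*Ypred + beta), per the expression above.
    z_pred = sigmoid(alpha * y_pred + beta)
    # -P * [Zlabel*log(Zpred) + (1 - Zlabel)*log(1 - Zpred)]
    return -p * (z_label * np.log(z_pred) + (1.0 - z_label) * np.log(1.0 - z_pred))

def adversarial_classifier_gradient(y_pred, z_label, alpha, beta, p):
    # -P * (Zpred - Zlabel) * alpha * Ypred * (1 - Ypred), per the expression above.
    z_pred = sigmoid(alpha * y_pred + beta)
    return -p * (z_pred - z_label) * alpha * y_pred * (1.0 - y_pred)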


Computing a gradient value and a hessian value for a row using the custom loss function can include: determining a gradient value (first-order gradient) for the tree ensemble loss function; determining a hessian value (second-order gradient) for the tree ensemble loss function; for each adversarial classifier loss function, determining a gradient value; for each adversarial classifier loss function, determining a hessian value; determining a combined gradient value for the custom loss function; and determining a combined hessian value for the custom loss. In variants, the gradient values for a row can be combined as a linear combination with any suitable selection of coefficients. However, the gradient values for a row can otherwise be combined. In variants, the hessian values for a row can be combined as a linear combination with any suitable selection of coefficients. However, the hessian values for a row can otherwise be combined.


In variants, the components of the loss function, which include the gradients and hessians for the accuracy or fairness objective(s), or various combinations of these objectives, can be weighted according to user-provided sample weights for observations in the model training dataset. In one implementation, the sample weights can be set to control the influence of each data point on the model during the training process, so that the underlying information passed to the model training system can be more reflective of expectations for when the model will be deployed, e.g., matching future expected time-variant behavior. In variants, sample weights can be specific to model classification/regression targets (Ylabel), demographic targets (Zlabel), or both. In variants, these cases are supported by linearly scaling the gradient, hessian, and/or combinations of these components by the sample weights provided such that the risk objective (for accuracy), fairness objective (for sensitive attributes), or combinations thereof, receive selective influence from specific training observations. In variants, all sample weights may be set to unity if they are not required.


In variants, in a case where the hessian value for an adversarial classifier loss function (for a given row) is not greater than or equal to zero, the hessian value can be set to a zero value. Alternatively, the hessian for the adversarial classifier can always be set to a zero value in cases where the hessian value must always be greater than or equal to zero. These and similar modifications ensure that the hessian values are positive semi-definite, which in experimentation has resulted in improved efficiency and robustness of the optimization process.


In variants, generating the new tree model includes: defining a new tree model by using the combined gradient values and the combined hessian values. In variants, the new tree model is defined by using the combined gradient values and the combined hessian values (for the custom loss function) to determine a tree structure for the new tree model. In some implementations, determining a new tree model includes determining which features of the prediction model to split, and determining which feature value to use to define a feature split.


In some variations, fairness penalty parameters (P) of the adversarial classifier loss functions can be adjusted to change the balance between improving accuracy and improving fairness of the tree ensemble, or other objectives. In some implementations, the model can serve members with multiple sensitive attributes (e.g., African American, Hispanic, female, elderly, etc.), and the evaluation metric can include a loss metric for an adversarial classifier for each sensitive attribute.


In some variations, the new model (e.g., 111e) has a model type that is different from a model type of a pre-existing model (M).


In some variations, the new model (e.g., 111e) has a model type that is the same as a model type of a pre-existing model (M). In some variations, the initial model has a model type that is different from a model type of a pre-existing model (M), and generating the new model includes generating a new model that corresponds to an adversarial-trained model (e.g., 111b-d) (trained by using the adversarial classifier 112) and has the same model type as the pre-existing model (M). By virtue of generating a new model that has a same model type as the pre-existing model (M), existing systems and processes that depend on the model having the model type of the pre-existing model (M) can operate with the new model, which has improved fairness in at least one aspect as compared to the pre-existing model (M).


In some variations, S260 includes performing adversarial training by iteratively training the adversarial classifier 112 and the model (e.g., 111a-d). In some variations, iteratively training the adversarial classifier and the model includes: during each of a plurality of iterations, first training the adversarial classifier 112 (e.g., by adjusting model parameters of the adversarial classifier to minimize a loss function for the adversarial classifier) for a single neural network training epoch while keeping the model (e.g., 111a) fixed (e.g., at S261), then training the model (e.g., 111b) (e.g., by adjusting model parameters of the model to minimize a loss function for the model) on one or more samples of the training data set (e.g., used to pre-train the model, e.g., 111a) while keeping the adversarial classifier fixed (e.g., at S262).


In some variations, iteratively training the adversarial classifier and the model includes: during each of a plurality of iterations, first training the adversarial classifier 112 (e.g., by adjusting model parameters of the adversarial classifier to minimize a loss function for the adversarial classifier) for a single boosting round while keeping the model (e.g., 111a) fixed (e.g., at S261), then training the model (e.g., 111b) (e.g., by adjusting model parameters of the model to minimize a loss function for the model) on one or more samples of the training data set (e.g., data used to pre-train the model, such as training performed in operation 111a) while keeping the adversarial classifier fixed (e.g., at S262).
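

For illustration only, a minimal Python sketch of this alternating schedule is shown below; the two callables stand in for a single training epoch or boosting round of each component and are assumptions of the sketch, not part of the disclosed method.

def adversarial_training(model, adversary, X, y, z, n_rounds,
                         train_adversary_one_round, train_model_one_round):
    # Alternate S261 and S262 for n_rounds iterations.
    for _ in range(n_rounds):
        scores = model.predict(X)  # model held fixed while the adversary updates (S261)
        adversary = train_adversary_one_round(adversary, scores, z)
        # adversary held fixed while the model updates on training samples (S262)
        model = train_model_one_round(model, X, y, adversary)
    return model, adversary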


In some variations, adversarial training (e.g., S261, S262) includes iteratively adjusting parameters of the adversarial classifier and parameters of the model. In a first variation, adversarial training of a model (e.g., the model 111 or the adversarial classifier 112) includes increasing a value of one or more model parameters. In a second variation, adversarial training includes decreasing a value of one or more model parameters. In a third variation, adversarial training includes decreasing a value of one or more model parameters, and increasing a value of one or more model parameters.


In some variations, at least one model parameter is adjusted based on an objective function (e.g., a cost function, a loss function). In some variations, at least one model parameter is adjusted to decrease a value of the objective function (e.g., by decreasing the parameter based on a multiple of a value of the objective function).


In some variations, adjusting at least one model parameter includes determining at least one of a gradient and a derivative of the objective function. In some variations, at least one model parameter is adjusted to decrease a gradient of the objective function (e.g., by decreasing the parameter based on a multiple of a value of the gradient of the objective function). In some variations, at least one model parameter is adjusted to decrease a derivative of the objective function (e.g., by decreasing the parameter based on a multiple of a value of the derivative of the objective function).


In some variations, for the model (e.g., 111) the objective function is computed based on the current model parameters for the model (e.g., Wf) and the current parameters for the adversarial classifier (e.g., Wa).


In some variations, for the model (e.g., 111) the objective function is a difference between a model prediction loss metric for the model (e.g., 111) and an adversarial classifier prediction loss metric for the adversarial classifier 112 (e.g., Lossy − Lossz). In some variations, for the model (e.g., 111) the objective function is a difference between a model prediction loss metric for the model (e.g., 111) and a multiple (e.g., L) of the adversarial classifier prediction loss metric for the adversarial classifier 112 (e.g., Lossy − L·Lossz). In some implementations, L is a parameter that specifies the tradeoff between fairness and accuracy, as a higher value of L steers the model towards more fair predictions, while sacrificing prediction accuracy. In some variations, the prediction loss metric for the model is a difference between an actual target value and a target value predicted by the model using a sample of training data and the current model parameters for the model (e.g., Wf). In some variations, the prediction loss metric for the adversarial classifier is a difference between an actual sensitive attribute value and a sensitive attribute value predicted by the adversarial classifier using an output of the model (e.g., 111) and the current model parameters for the adversarial classifier (e.g., Wa).


In some variations, for the adversarial classifier (e.g., 112) the objective function is the adversarial classifier’s prediction loss metric for the adversarial classifier 112 (e.g., Lossz). In some variations, the prediction loss metric for the adversarial classifier is computed based on the current model parameters for the adversarial classifier (e.g., Wa). In some variations, the prediction loss metric for the adversarial classifier is a difference between an actual sensitive attribute value and a sensitive attribute value predicted by the adversarial classifier using an output of the model (e.g., 111) and the current model parameters for the adversarial classifier (e.g., Wa).


In some variations, adjusting at least one model parameter includes determining at least one of a gradient and a derivative of one or more of the model and the adversarial classifier.


In some variations, adjusting model parameters W (e.g., Wf, Wa) of the model or the adversarial classifier includes: calculating the gradients G of the objective function J(W); updating the parameters W by an amount proportional to the gradients G (e.g., W = W - nG, wherein n is the learning rate); and repeating until a stopping condition is met (e.g., the value of the objective function J(W) stops reducing).
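

For illustration only, the update rule described above can be sketched in Python as follows; the stopping test on the change in W is a stand-in for the condition that the value of the objective J(W) stops reducing, and the function names are illustrative.

import numpy as np

def gradient_descent(objective_grad, w_init, learning_rate=0.01, tol=1e-6, max_iter=1000):
    # Repeat W = W - n*G until a stopping condition is met.
    w = np.asarray(w_init, dtype=float)
    for _ in range(max_iter):
        g = objective_grad(w)              # gradients G of the objective J(W)
        w_next = w - learning_rate * g     # W = W - n*G
        if np.linalg.norm(w_next - w) < tol:
            break
        w = w_next
    return w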


In some variations, the gradient G of the objective function J(W) is computed by using one or more of a gradient operator of the model and a gradient operator of the adversarial classifier. In some variations, the gradient G of the objective function J(W) is computed by performing a process disclosed in U.S. Application No. 16/688,789 (e.g., a generalized integrated gradients method, which in some embodiments uses a Radon-Nikodym derivative and a Lebesgue measure). In some variations, the gradient G of the objective function J(W) is computed by: identifying a reference input data set; identifying a path between the reference input data set and an evaluation input data set (e.g., a current sample of the training data set, a current output of the model 111, etc.); identifying boundary points of the objective function J(W) (e.g., points at which discontinuities of the objective function occur) along the path by using model access information obtained for at least one of the model and the adversarial classifier; identifying a plurality of path segments by segmenting the path at each identified boundary point; for each segment, determining a segment contribution value for each feature of the sample by determining an integral of a gradient for the objective function along the segment; for each boundary point, determining a boundary point contribution value for the boundary point, and assigning the boundary point contribution value to at least one of the features of the input space; for each endpoint of the path between the reference input data set and the sample, assigning a contribution of each feature at the endpoint; and for each feature, combining the feature’s segment contribution values and any boundary point and endpoint contribution values assigned to the feature to generate the feature contribution value for the feature with respect to at least two data points, wherein the gradient G of the objective function J(W) is the set of feature contribution values. In some implementations, a Radon-Nikodym derivative and a Lebesgue measure are used to determine the integral of a gradient for the objective function along each segment. In some variations, the gradient is the gradient of a neural network model.


In some variations, when computing the gradient (for the model 111) for each sample of the training data set, a zero vector is selected as the reference input data set for the first sample; for each subsequent sample, a previous sample is used as the reference input data set. In some variations, the reference data set is a previously used sample. In some variations, the reference data set is a randomly selected sample of the training data set. In some variations, the reference data set is a data set with one or more randomly generated values. However, the reference input data set for the model 111 can be identified by using any suitable process.


In some variations, when computing the gradient (for the adversarial classifier 112) for each model output, a zero vector is selected as the reference input data set for the first model output; for each subsequent model output, a previous model output is used as the reference input data set. In some variations, the reference data set is a randomly selected sample of the model outputs. In some variations, the reference data set is a data set with one or more randomly generated values. However, the reference input data set for the adversarial classifier can be identified by using any suitable process.


In some variations, any suitable type of process can be used to determine the integral of the gradient for the objective function.


In some variations, the adversarial classifier and the model are iteratively trained for each selected sensitive attribute (e.g., selected at S230). In some variations, sensitive attributes are combined into a single feature that characterizes any subset feature (such as, for example, by computing the logical OR of a set of protected class membership statuses).


In some variations, at least one sensitive attribute is computed based on a second model. In some embodiments, this secondary model is a machine learning model; in other embodiments, it is the Bayesian Improved Surname Geocoding (BISG) method, for example, as described in the Consumer Financial Protection Bureau publication, “Using publicly available information to proxy for unidentified race and ethnicity”. In one variation, a sensitive attribute is represented by a probability. In other embodiments, a sensitive attribute is represented by a Boolean flag.


Various approaches to using BISG for race and ethnicity estimation can be used. For example, each applicant can be assigned to a single race and ethnicity category with the highest BISG-assigned probability, provided that the probability exceeds 80%. In some implementations, protected status probabilities can be used instead of binary flags as it can be advantageous to use the protected class probabilities directly, instead of forcing an applicant into a single race/ethnicity category, especially when a significant number of applicants receive low protected class probabilities and thus must remain uncategorized (race/ethnicity “unknown”).
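

For illustration only, the 80% threshold assignment described above can be sketched as follows; the function name, the dictionary representation of the BISG probabilities, and the example values are assumptions of the sketch.

def assign_race_ethnicity(bisg_probs, threshold=0.80):
    # Assign the single category with the highest BISG probability, provided
    # that probability exceeds the threshold; otherwise the applicant remains
    # uncategorized ("unknown").
    category, prob = max(bisg_probs.items(), key=lambda kv: kv[1])
    return category if prob > threshold else "unknown"

# assign_race_ethnicity({"White": 0.91, "AA": 0.05, "Hispanic": 0.04})  -> "White"
# assign_race_ethnicity({"White": 0.55, "AA": 0.30, "Hispanic": 0.15})  -> "unknown"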


In some implementations, the risk classifier is an XGBoost model, in which model parameters (feature splits) are selected to minimize a loss function. An iterative training process minimizes a quadratic approximation of the true loss function, given an analytic gradient and hessian. In some implementations, model loss functions can consider two objectives by adding a fairness-related component in combination with the original classification loss. Since the XGBoost gradient must be analytically provided, this specification provides it directly. These modifications are described below.


Some implementations use a binary representation of a sensitive attribute or protected class membership status. In this specification, fairness can be measured during the XGBoost risk model estimation process by iteratively comparing the risk score distributions of each protected group with its unprotected counterpart. Risk models that yield similar score distributions for protected and unprotected groups, such that a classifier could not determine the protected status or sensitive attribute of an individual based solely on their risk model’s score, are considered more fair. The loss of a secondary classifier that attempts to separate protected from unprotected applicants based solely on the risk model’s score can therefore be used as a measure of fairness. The inverse of this secondary “fairness loss” can be used by the risk model during the training process as a component of a custom loss function during the iterative risk model estimation process.


To measure fairness of the risk model scores during the model training process, some implementations build a secondary protected class predictor model in parallel with the XGBoost risk classifier. In some implementations, this secondary model is a logistic regression trained based on the XGBoost model’s risk scores from the previous boosting round for each protected and unprotected class (sensitive/non-sensitive) pair. The loss objective from the logistic regression can represent a measure of fairness. A greater loss indicates fairer risk model score distributions, as indicated by lower predictive accuracy (greater loss) of the logistic model predicting protected class membership status. By introducing a penalty term into the XGBoost model objective in the form of the secondary logistic regression model’s loss (multiplied by a free parameter), the risk model can be tailored to pursue a customized degree of fairness by minimizing the risk loss while maximizing the protected class logistic regression model’s loss.


In some implementations, the adversarial classifier 112 is trained using probabilistic sensitive attributes (e.g., P(race/ethnicity=“African American”) = 85%). Protected class probabilities are incorporated into the less discriminatory alternative (LDA) search through probabilistic sampling, and the adverse impact ratio (AIR) is calculated by applying the protected class probabilities as sample-weight multipliers. The probabilistic approach builds on the binary approach described above but additionally leverages sample weights: binary labels are still used for the protected and unprotected classes, but the protected class probabilities are incorporated into the training process as sample weights. During each gradient boosting round, each row is sampled and assigned a class label based on the probability of that label as assigned by BISG.


Some implementations use the following adversarial training process 262 for each boosting round in an XGBoost model training process or each training epoch for a neural network.


First, the AIR, which represents the ratio between protected and unprotected approval rates (AR), is used as an example to demonstrate how fair lending analytics can directly leverage probabilistic labels through the application of sample weights. Below, consider the simple case with self-reported information. In this example, FemaleAIR = 66.7%, given that FemaleAR = 66.7% and MaleAR = 100%.










Label     Approved Flag
Female    0
Male      1
Female    1
Male      1
Female    1






A similar case with probabilistic labels can be:










Label                                     Approved Flag
Female (0.8), Male (0.2)                  0
Female (0.2), Male (0.8)                  1
Female (0.9), Male (0.1)                  1
Female (0.1), Male (0.8), Other (0.1)     1
Female (0.85), Male (0.15)                1






In this example, FemaleAIR = 79.7%, given that the FemaleAR = 71.9%, from (0.2 + 0.9 + 0.1 + 0.85)/ (0.8 + 0.2 + 0.9 + 0.1 + 0.85), and MaleAR = 90.2%.
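

For illustration only, the probability-weighted approval rates and AIR in this example can be reproduced with the following Python sketch; the function name is illustrative.

import numpy as np

def weighted_approval_rate(group_probs, approved):
    # Each row contributes its protected-class probability as a sample weight.
    group_probs = np.asarray(group_probs, dtype=float)
    approved = np.asarray(approved, dtype=float)
    return (group_probs * approved).sum() / group_probs.sum()

approved = [0, 1, 1, 1, 1]
female_p = [0.8, 0.2, 0.9, 0.1, 0.85]
male_p = [0.2, 0.8, 0.1, 0.8, 0.15]

female_ar = weighted_approval_rate(female_p, approved)  # ~0.719
male_ar = weighted_approval_rate(male_p, approved)      # ~0.902
female_air = female_ar / male_ar                        # ~0.797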


Second, in some implementations, to support probabilistic labels for the LDA search process, the construction of the adversary can be modified such that it (that is, the set of binary classifiers) can support non-binary class labels. Note that the construction of the risk model (and its loss function) and the dual-objective optimization procedure are all identical to the earlier case with binary class labels. Additionally, in both cases, the adversary will continue to predict protected class status on [0,1] from the risk model score. The key difference is that in the earlier case, the adversary is trained with protected class binary labels, while here, it uses probabilistic labels.


To support probabilistic labels, the adversarial loss function is modified to use the continuous rather than discrete (true) labels:







O_penalty(δ) = γ·( y_protected·log(ŷ_lr) + y_unprotected·log(1 − ŷ_lr) )




Compared with the original case, y_class ∈ {0,1} has been replaced by y_protected ∈ [0,1] and y_unprotected ∈ [0,1]. For example, y_class = 1 would represent a female in the original case, while, here, y_protected = 0.8 and y_unprotected = 0.2 would represent a person with an 80% likelihood of being female and a 20% likelihood of being male. Intuitively, this calculation acts as a weighting term on the loss function, in a similar manner as was explained previously for the AIR metric.


In addition, the adversarial predictive models that generate the predicted class probabilities ŷ_lr must support the continuous protected labels. This computation can be performed using a probabilistic sampling procedure, for example, a stochastic or iterative solution procedure.


In some implementations, when protected class information is probabilistic, observations can be assigned a binary class label based on the likelihood assigned by the probabilistic protected class estimation method (e.g., the race/ethnicity probabilities assigned by BISG). At each XGBoost boosting round, for each observation, a stochastic binary protected class assignment can occur, such that the proportion of binary assignments reflects the probabilistic class assignments.


For example, consider applicants with the following probabilistic labels:













Applicant ID    White    AA       Hispanic    Other/Unknown
1               0.998    0.001    0.001       0.0
2               0.33     0.33     0.33        0.0
3               0.5      0.5      0.0         0.0
4               0.25     0.25     0.0         0.5






This approach can be understood as follows. Applicant 1 is most likely White (non-Hispanic), and so during the training process, Applicant 1 should often be labeled as White (non-Hispanic). Applicant 2 is equally likely to be White (non-Hispanic), African American, or Hispanic, and so Applicant 2 will be labeled as White (non-Hispanic), African American, and Hispanic with equal frequency during the adversarial training process. By assigning binary labels with frequencies that reflect the probabilistic labels, probabilistic class assignments can be transformed into binary class assignments.


Working through the rest of the examples, the following boosting rounds could occur:
















Applicant ID    Rd. 1    Rd. 2    Rd. 3       ···    Rd. K-2    Rd. K-1    Rd. K
1               White    White    White       ···    White      White      White
2               AA       White    Hispanic    ···    White      AA         Hispanic
3               AA       White    AA          ···    AA         White      White
4               Other    White    Other       ···    White      Other      AA






The table above demonstrates that the sampling process used in some implementations simply reflects the underlying protected status label assignment probabilities. To compute these values empirically, a uniform random number generator along with cutoffs based on the exact assignment probabilities can be used. This approach can be compared to segmenting parts of the probabilistic intervals into different groups, somewhat analogous to an approach based on a segmented empirical cumulative distribution function (ECDF).


The following example further explains how the sampling process used in variations that consider probabilistic sensitive attributes can be performed. First, the method assigns uniform random numbers in the open interval (0,1) at each boosting round for each applicant:
















Applicant ID    Rd. 1    Rd. 2    Rd. 3    ···    Rd. K-2    Rd. K-1    Rd. K
1               0.151    0.533    0.913    ···    0.494      0.366      0.692
2               0.724    0.422    0.628    ···    0.121      0.828      0.287
3               0.090    0.421    0.726    ···    0.264      0.772      0.987
4               0.113    0.824    0.819    ···    0.944      0.454      0.735






Then, the method transforms the protected status probabilities for each applicant into sub-intervals of (0, 1), such that the length of the interval is equal to the protected status probability:













Applicant ID    White         AA                Hispanic      Other/Unknown
1               (0, 0.998]    (0.998, 0.999]    (0.999, 1)    x
2               (0, 0.33]     (0.33, 0.66]      (0.66, 1)     x
3               (0, 0.5]      (0.5, 1)          x             x
4               (0, 0.25]     (0.25, 0.5]       x             (0.5, 1)






Next, the method assigns a binary label based on whether the random number assigned in the first step is contained within the interval computed in the second step. The result is the table presented below:
















Applicant ID    Rd. 1       Rd. 2      Rd. 3       ···    Rd. K-2    Rd. K-1     Rd. K
1               White       White      White       ···    White      White       White
2               Hispanic    AA         Hispanic    ···    White      Hispanic    White
3               White       White      AA          ···    White      AA          AA
4               White       Unknown    Unknown     ···    Unknown    AA          Unknown






Using this method, the distribution of the resulting samples will reflect the distribution of the population. The sampled binary labels can then be used in the adversarial training process described in U.S. Pat. 10,977,729 (the contents of which are incorporated herein by reference) and U.S. Pat. Application 63/117,696 (the contents of which are incorporated herein by reference), thus enabling the learning algorithm to create a model that is fairer in a probabilistic sense. In other words, labels with high confidence, e.g., 99.9% Hispanic and 0.1% White (non-Hispanic), will strongly influence the fair objective, while labels with lower confidence, e.g., 65% Hispanic and 35% White (non-Hispanic), will have a lessened impact. In the extreme hypothetical case, where the protected labels are random (equal likelihood of any label), the fair objective will not materially contribute to the LDA search process, and the result would be a model that is statistically indistinguishable from the baseline risk model.
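

For illustration only, the stochastic assignment of binary labels from probabilistic labels can be sketched as follows; the function name and the guard against rounding of the final cutoff are assumptions of the sketch, and the cumulative cutoffs mirror the sub-intervals shown in the tables above.

import numpy as np

def sample_binary_labels(prob_table, categories, n_rounds, seed=0):
    # Draw one class label per applicant per boosting round so that label
    # frequencies reflect the probabilistic (e.g., BISG) assignments.
    rng = np.random.default_rng(seed)
    probs = np.asarray(prob_table, dtype=float)
    cutoffs = np.cumsum(probs, axis=1)   # sub-intervals of (0, 1) as described above
    cutoffs[:, -1] = 1.0                 # guard against rounding of the final cutoff
    labels = np.empty((probs.shape[0], n_rounds), dtype=object)
    for r in range(n_rounds):
        u = rng.random(probs.shape[0])               # uniform random number per applicant
        idx = (u[:, None] > cutoffs).sum(axis=1)     # index of the containing sub-interval
        labels[:, r] = [categories[i] for i in idx]
    return labels

# Applicants 1 and 3 from the tables above:
cats = ["White", "AA", "Hispanic", "Other/Unknown"]
rounds = sample_binary_labels([[0.998, 0.001, 0.001, 0.0],
                               [0.5, 0.5, 0.0, 0.0]], cats, n_rounds=5)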


In some implementations, probabilistic sensitive attributes are leveraged and the resulting models are analyzed using a probabilistic AIR metric, where the probability of protected class membership or sensitive attribute is used directly as a sample weight multiplier. Then, instead of calculating the approval rate as the proportion of applicants with binary class membership above the approval threshold, the expected proportion of all applicants above the approval threshold in the protected and unprotected classes can be calculated.


Generating the prediction system can include first training a model to maximize one objective, e.g., predictive accuracy (for example, for classification problems, as measured by the F statistic, AUC, max K-S, Gini coefficient, etc., or, for regression problems, as measured by mean squared error, root mean squared error, mean absolute error, quantile loss, or another suitable regression loss function). Next, a tree-based predictive model can be trained by using a fairness-enabled tree-based boosting module. Such a fairness-enabled tree-based boosting module can include a boosting module (e.g., AdaBoost, XGBoost, Catboost, LightGBM, etc.) that includes a custom loss function module constructed to train tree-based models to incorporate a fairness metric that considers outcomes for at least one value of a sensitive attribute and the predictive performance (loss, as characterized by mean squared error, log-loss, and the like) with respect to known outcomes for the model’s predictive target (e.g., delinquent for 90 days, charge-off, etc.). The custom loss function module can include at least one adversarial model that attempts to predict at least one sensitive attribute value based on a fairness-enabled model output. For classification models, the adversarial model can receive (as input) predictions for input rows corresponding to the model outputs (Ypred), and corresponding ground truth labels containing the actual outcomes (Ylabel). For regression models, the adversarial model can receive (as input) the mean squared error, root mean squared error, or other regression loss metric, for the predictions for input rows corresponding to the model outputs (Ypred), and corresponding ground truth values containing the actual values (Yact). The predictions can be outputs of the prediction model, and the labels or values can be the actual values that would be predicted by a perfectly accurate prediction model. Accuracy of the prediction model can be identified by comparing the prediction for a row with the corresponding label. In some implementations, for classification models, the comparing step can include computing a log loss, binary cross-entropy, or other suitable classification loss function. In some implementations, for regression models, the comparing step can include computing a mean squared error, root mean squared error, or other suitable error metric. For each selected sensitive attribute, the custom loss function module can also receive a sensitive attribute label for each received prediction (Zlabel). Using the received predictions (for classifiers) or received errors (for regressors) and the received sensitive attribute labels for a sensitive attribute, the custom loss function module can define a metric that identifies the accuracy of predicting a sensitive attribute value (for the sensitive attribute) from predictions generated by the prediction model. Using this metric, the custom loss function module can compute information to guide the iterative solution employed by the tree-based boosting solution procedure, which can be based on the computation of a gradient and/or Hessian. In some implementations, the custom loss function module can include a predictive model that is trained simultaneously with the prediction model based on sensitive attribute values and the prediction model outputs.


In some implementations, the LDA search method can re-estimate the risk model using feedback from a secondary classifier (known as the adversary) that is trained simultaneously with the risk model. The adversary can attempt to predict protected class status based solely on the risk model’s score after each boosting round. If the adversary is effective at guessing protected class status (Y/N) from the risk scores (that is, the adversary correctly determines the protected class status more often than a configured threshold), it indicates that the risk scores have some degree of disparity. The stronger the correlation, the stronger the disparity, and the stronger the impact the adversary will have on the overall dual-objective loss function.


In embodiments, for each targeted protected class group, e.g., females, African Americans, Hispanics, the adversary can construct a simple classifier (Logistic regression) to predict protected class status from risk score. One adversarial classifier can be built for each protected/unprotected pair, e.g., a classifier to predict females v. males, African Americans v. White (non-Hispanics), Hispanics v. White (non-Hispanics), and so on. Once the adversary’s parameters (coefficients) are solved, the derivatives with respect to the model scores can be formulated, allowing the fairness loss to be computed.


The mathematical formulation for the risk classifier is shown below:


Let δ be the margin score of the xgboost classifier and









ŷ_xgb(x) = σ(δ)

where

σ(δ) = 1 / (1 + e^(−δ)),   dσ/dδ = σ(1 − σ),   and   d²σ/dδ² = σ(1 − σ)(1 − 2σ)






The logistic loss objective is:






O(δ) = −[ y·log(ŷ_xgb) + (1 − y)·log(1 − ŷ_xgb) ]

O(δ) = −[ y·log(σ(δ)) + (1 − y)·log(1 − σ(δ)) ]






XGBoost can use a second order Taylor expansion of the objective where the custom objective callable returns the gradient and the Hessian:








dO(δ)/dδ = σ(δ) − y   and   d²O(δ)/dδ² = σ(δ)·(1 − σ(δ))






Similarly, the formulations for the adversarial classifiers can take the forms shown below:







O_penalty(δ) = γ·( y_class·log(ŷ_lr) + (1 − y_class)·log(1 − ŷ_lr) )












The gradient and the Hessian are:








dO_penalty(δ)/dδ = −γ·a·(ŷ_lr − y_class)·ŷ_xgb·(1 − ŷ_xgb)

d²O_penalty(δ)/dδ² = −γ·a·[ (ŷ_lr − y_class)·(1 − 2·ŷ_xgb) + a·ŷ_lr·(1 − ŷ_lr)·ŷ_xgb·(1 − ŷ_xgb) ]·ŷ_xgb·(1 − ŷ_xgb)

where a is the fitted coefficient of the adversarial logistic regression on the risk score ŷ_xgb.








The two loss functions can be linearly combined to mathematically define the dual-objective (risk and fairness) task, such that the learning algorithm finds models that satisfy both objectives.


Fairer models will score protected and unprotected applicants more similarly. When that occurs, the adversary predicts protected class status from the model risk score less accurately; in other words, a poorly performing adversarial classifier represents better fairness. Because of this, the method attempts to minimize the risk log loss and maximize the adversarial log loss. This dual-objective and bi-directional optimization problem (max and min) can be simplified into one that is dual-objective and uni-directional (min and min) by negating the sign on the adversarial loss. The adversarial parameter γ is provided so that the weight of the fairness loss component of the dual-objective loss function can be linearly scaled. A zero value for γ cancels out the fairness loss component, returning a model identical to the baseline (without consideration of fairness), while larger values of γ place additional weight on the fairness component, yielding fairer models.
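

For illustration only, the following Python sketch combines the two loss components in an XGBoost custom objective for a single binary protected attribute; the use of a single adversary, the per-round refit with scikit-learn's LogisticRegression, the clamping of negative penalty hessians (per the variation described earlier), and the helper name make_dual_objective are assumptions of the sketch rather than the claimed implementation.

import numpy as np
import xgboost as xgb
from sklearn.linear_model import LogisticRegression

def sigmoid(delta):
    return 1.0 / (1.0 + np.exp(-delta))

def make_dual_objective(y_true, z_class, gamma):
    # y_true: risk labels; z_class: binary protected-class labels; gamma: fairness weight.
    y_true = np.asarray(y_true, dtype=float)
    z_class = np.asarray(z_class, dtype=float)

    def objective(margin, dtrain):
        y_xgb = sigmoid(margin)            # risk probabilities from the raw margin delta
        grad = y_xgb - y_true              # dO/ddelta = sigma(delta) - y
        hess = y_xgb * (1.0 - y_xgb)       # d2O/ddelta2 = sigma(delta)(1 - sigma(delta))
        # Refit the adversary (a logistic regression on the risk score) each round.
        adv = LogisticRegression().fit(y_xgb.reshape(-1, 1), z_class)
        a = adv.coef_[0, 0]
        y_lr = adv.predict_proba(y_xgb.reshape(-1, 1))[:, 1]
        # Fairness penalty gradient and hessian, per the expressions above.
        pen_grad = -gamma * a * (y_lr - z_class) * y_xgb * (1.0 - y_xgb)
        pen_hess = -gamma * a * ((y_lr - z_class) * (1.0 - 2.0 * y_xgb)
                                 + a * y_lr * (1.0 - y_lr) * y_xgb * (1.0 - y_xgb)) \
                   * y_xgb * (1.0 - y_xgb)
        pen_hess = np.maximum(pen_hess, 0.0)   # clamp negative hessians, as described earlier
        return grad + pen_grad, hess + pen_hess

    return objective

# Usage (X, y, z are hypothetical arrays of features, risk labels, and protected labels):
# dtrain = xgb.DMatrix(X, label=y)
# booster = xgb.train({"disable_default_eval_metric": 1}, dtrain,
#                     num_boost_round=100, obj=make_dual_objective(y, z, gamma=1.0))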


In some variations, the adversarial classifier and the model are iteratively trained (e.g., at S260) in accordance with model constraint parameters.


In some variations, the model constraint parameters specify features whose model parameters are to be unchanged during the adversarial training process (e.g., at S260). For example, the search of a new model can be constrained by specifying features whose parameters are not to be changed. In some variations, model constraint parameters specify features whose parameters are to be unchanged during the training process (e.g., at S220, S262). In some variations, model constraint parameters specify features that are to remain in the model during the training process (e.g., features whose parameters should have a parameter value greater than zero). In some variations, model constraint parameters specify features that are not to be added to the model during the training process (e.g., features whose parameter values are to remain zero). In this manner, the search space for a new model can be reduced, thereby reducing computational complexity and achieving the desired business outcome. In some variations, the constraint parameters specify features for which the model score should move monotonically with respect to the feature value. Monotonic constraints enable a model to achieve specific business outcomes, such as preventing a model score from decreasing when a core credit attribute improves. In this manner, model constraints can be set such that the training does not result in a new model that deviates from specified model constraints. Model constraints can limit adversarial training to produce trained models that meet model constraints for models to be used in a production environment, such that models unsuitable for a production environment are not trained during adversarial training.


In some variations, the model training process (e.g., 262) applies sample parameters (e.g., weights, support vectors, coefficients, etc.). In some embodiments, the sample weights are based on a temporal attribute. In other variations, the sample weights correspond one-to-one with each training data row and are provided in a file. In some variations, the sample weights are provided in a user interface. In some variations, the user interface comprises an interactive tool that enables the analysis of outcomes based on user-selected and configurable demographic criteria, model accuracy or economic criteria, sample weights, data files, and analysis and reporting templates. However, the analysis of outcomes can otherwise be performed.


In some variations, S263 includes computing an objective function used (e.g., at S262) to modify parameters for the model 111. In some variations, S263 includes computing an objective function used (e.g., at S261) to modify parameters for the adversarial classifier 112.


In some variations, S263 includes computing at least one accuracy metric (e.g., used by an objective function used to modify parameters for one or more of the model 111 and the adversarial classifier 112) for each training iteration. Accuracy metrics can include one or more of Area-Under-the-Curve (AUC) metrics, gini, KS, F1 score, Mean Squared Error (MSE), Mean Absolute Error (MAE), Quantile Loss, and other accuracy values and statistics relating to the predictive accuracy of model outputs generated by the trained model (e.g., 111) and/or the adversarial classifier 112. In some variations, S263 includes updating each accuracy metric during each new training iteration. In some variations, the accuracy metric is selected by an operator from a user interface providing analysis capabilities to an analyst. In some variations, the accuracy metric is a function provided by an operator, including an economic projection of approved loan profitability based on a credit policy, or other computable function.


In some variations, S263 includes computing at least one fairness metric for each training iteration. Fairness metrics can include a correct prediction percentage for at least one sensitive attribute relating to sensitive attribute predictions generated by the adversarial classifier (e.g., 112). In some variations, S263 includes updating each fairness metric during each new training iteration. In some variations, the fairness metric is the EEOC fairness ratio, given by the percentage of approvals for a protected class divided by the percentage of approvals for the unprotected class. In some variations, the fairness metric can be the fairness metric described in The U.S. Equal Employment Opportunity Commission, FEDERAL REGISTER, VOL. 44, NO. 43, FRIDAY, MARCH 2, 1979 [6570-06-M], the contents of which are hereby incorporated by reference. In other variations, when the model is a regression model, operation S263 can include computing a ratio of errors for populations with protected status or sensitive attributes versus their unprotected/non-sensitive counterparts, with the error given by the mean squared error:






MSE = (1/n)·Σ_{i=1}^{n} (Y_i − Ŷ_i)²









where n is the number of observations or data points, Y are the observed values and Ŷ are the predicted values. In some implementations, other fairness metrics based on other loss functions may be employed, such as: Mean absolute error, Huber loss, Log-Cosh loss, Quantile loss, or other loss function, without limitation. In some implementations, when the model is a regression model, the loss function of the original predictive model can be the mean squared error or MSE, which can be combined with the fairness loss. In some implementations, the fairness loss can be a combination of binary cross-entropy loss functions associated with each protected class binary classifier.


In some implementations, the fairness metric can be based on Boolean sensitive attributes (e.g., African American = TRUE). Approval rate ratios can be calculated by first determining the approval rate for the protected and unprotected applicants. The approval rate can be calculated by dividing the count of the protected approvals by the count of protected applicants (or the count of unprotected approvals by the count of unprotected applicants). The approval rate ratio can then be given by the approval rate for the protected applicants divided by the approval rate for the corresponding unprotected applicants (e.g., the approval rate for women divided by the approval rate for men). In implementations that use probabilistic sensitive attributes, for instance as assigned by a model (e.g., P(African American) = 0.85), the ratios can be calculated based on the protected attribute probabilities. Here, each applicant can contribute partially to each approval rate and approval rate ratio based on the probabilistic attribute assignment. For example, a single approved applicant A with P(African American) = 0.6, P(Hispanic, non-white) = 0.4 would contribute 0.6 of an African American approval and 0.4 of a Hispanic, non-white approval, as well as 0.6 of an African American applicant and 0.4 of a Hispanic, non-white applicant. Approval rates and ratios for probabilistic sensitive attributes can then be calculated as described above for Boolean sensitive attributes.


For regression models, implementations can determine the error ratios similarly, both for Boolean sensitive attributes and for probabilistic ones. For example, if the MSE for a model prediction related to applicant A is 3, then the African American MSE would be incremented by 3*0.6 = 1.8, the count of African Americans would be incremented by 0.6, the non-white Hispanic MSE would be incremented by 3*0.4 = 1.2, and the count of Hispanic, non-whites would be incremented by 0.4, and so on for each applicant, prior to computing the ratio of MSEs (or other suitable regression loss functions) for the populations corresponding to those with each sensitive attribute and their unprotected counterparts.
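

For illustration only, a probability-weighted group MSE of the kind described above can be sketched as follows; the function name is illustrative.

import numpy as np

def groupwise_mse(squared_errors, group_probs):
    # Each applicant contributes its squared error scaled by its protected-class
    # probability; the group count is the sum of those probabilities.
    squared_errors = np.asarray(squared_errors, dtype=float)
    group_probs = np.asarray(group_probs, dtype=float)
    return (squared_errors * group_probs).sum() / group_probs.sum()

# The fairness metric is then the ratio of group MSEs, for example:
# ratio = groupwise_mse(errs, p_protected) / groupwise_mse(errs, p_unprotected)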


S263 can include recording final accuracy and fairness metrics for the adversarial-trained model (e.g., 111) (e.g., the model provided at S265).


S263 can include recording model metadata for the adversarial-trained model (e.g., 111) (e.g., generated at one or more of steps S210, S220, S230, S240, S250, S261 to S265). Model metadata for the adversarial-trained model can include features used by the model, information identifying training data used to train the model, model constraint parameters used during training of the model, model parameters, sample weights, selected metrics, analysis and reporting templates, and the like.


In some variations, S260 is performed for a plurality of fairness-versus-accuracy parameter (L) values (e.g., used in the objective function to modify parameters for the model 111) for at least one sensitive attribute, and final accuracy and fairness metrics are recorded for each iteration of S260, and the final accuracy and fairness metrics (e.g., determined at S263) are stored in association with the respective versions of the adversarial-trained model. In some variations, the plurality of analyses are performed on a compute cluster. In some variations, the plurality of analyses are distributed within a cloud computing environment. In some variations, cloud computing resources are deployed based on a policy, such as an as-fast-as-possible (unlimited budget) policy or a budgeted policy. S260 can include recording model metadata for each adversarial-trained model (e.g., 111), as described herein. In this manner, several adversarial-trained models are generated, each one being generated with a different fairness-versus-accuracy parameter (L) value.


In some variations, an efficient selection of L is determined based on a gradient and a common search algorithm.


S265 can include selecting one of a plurality of adversarial-trained models based on final accuracy and fairness metrics stored in association with the plurality of adversarial-trained models. In some variations, a single model is automatically selected based on received user input (e.g., received via the operator device 120, the user interface 115, etc.). In some variations, a single model is automatically selected based on predetermined model selection criteria and the recorded accuracy and fairness metrics. In some variations, selection criteria includes at least one of an accuracy threshold, and a fairness threshold for at least one sensitive attribute.
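

For illustration only, automatic selection against an accuracy threshold and a fairness threshold for one sensitive attribute can be sketched as follows; the function name and dictionary keys are illustrative.

def select_model(candidates, min_accuracy, min_fairness):
    # candidates: list of dicts with (illustrative) keys "model", "accuracy",
    # and "fairness" recorded for each adversarial-trained model.
    eligible = [c for c in candidates
                if c["accuracy"] >= min_accuracy and c["fairness"] >= min_fairness]
    if not eligible:
        return None  # no candidate satisfies the selection criteria
    return max(eligible, key=lambda c: c["accuracy"])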


In some variations, automatic selection includes automatic generation of a selection report that includes at least the recorded accuracy and fairness metrics for the selected model. In some variations, automatic selection includes automatic generation of a selection report that includes recorded accuracy and fairness metrics for each adversarial-trained selected model. In some variations, the system 100 (e.g., using 110) provides the selection report to the operator device. In this manner, an operator is notified of the selected model, fairness and accuracy metrics for the selected model, and fairness and accuracy metrics for models not selected. In some variations, the report includes an economic analysis including a comparison of profitability metrics such as loan losses and interest collected for a plurality of model variations. By virtue of providing this information, an operator can be notified of information justifying selection of the selected model. In some variations, the selection report also includes fairness-versus-accuracy parameter (L) values for each model. In some variations, the selection report includes model input contributions, quantifying the influence of a model input variable on the model’s decisions overall and for each protected class, for any model in the analysis. In some variations the selection report includes the contribution of two-way or n-way combinations of input variables, for any model in the analysis. In some variations, the selection report includes a histogram of adverse action reason codes or model explanations for each alternative model. In some variations, the selection report includes partial dependence plots, ICE plots, and other charts showing the influence of each model input variable over a range of values, with respect to each model and disaggregated by protected attribute.


In some variations, S265 includes: accessing the adversarial-trained model (e.g., 111a-d), using the adversarial-trained model to generate model output values for a training data set, and training a new model to predict the model output values generated by the adversarial-trained model based on the data sets (and features) used to train the adversarial-trained model. In this manner, a new model is generated that predicts model outputs that would have been generated by the adversarial-trained model. In some variations, the model type of the new model is a model type recorded for the initial model (e.g., 111a) (e.g., at S210). S265 can include: selecting a model type of the new model, wherein training the new model includes training the new model as a model of the selected model type. In some variations, the model type is automatically selected. In some variations, the model type is selected based on received user input. In some variations, the new model is a logistic regression model. In some variations, the new model is a neural network model. In some variations, the new model is a tree model. In some variations, the new model is a non-differentiable model.


S270 functions to compare the new model (e.g., generated at S260) with a pre-existing model (M). In some embodiments, the system 100 compares (e.g., by using the model training system 110) the new model generated at S260 with the pre-existing model (M) based on a model decomposition. Model decomposition is described in U.S. Application Nos. 16/297,099 (“SYSTEMS AND METHODS FOR PROVIDING MACHINE LEARNING MODEL EVALUATION BY USING DECOMPOSITION”), filed 8-MAR-2019, 16/434,731 (“SYSTEMS AND METHODS FOR DECOMPOSITION OF NON-DIFFERENTIABLE AND DIFFERENTIABLE MODELS”), filed 7-JUN-2019, and 16/688,789 (“SYSTEMS AND METHODS FOR DECOMPOSITION OF DIFFERENTIABLE AND NON-DIFFERENTIABLE MODELS”), filed 19-NOV-2019, the contents of each of which are incorporated by reference herein. However, any suitable type of model decomposition can be used. By combining the adversarial training method to produce fair alternative models with model decomposition, the disclosed embodiments provide new and useful reports that explain why the new model, generated during the adversarial training process, is more fair than the pre-existing model. In some variations, the explained model report includes a comparison of the fair model with the original model. In some variations, the comparison includes the contributions of a model input variable to the model score with respect to a baseline population. In some variations, the baseline population is the set of approved loans. In some variations, the baseline population is a random sample. In some variations, the baseline population is selected based on a demographic criteria. In some variations, the baseline population is selected based on a demographic criteria and an economic criteria. In some variations, the baseline population is selected based on attributes provided by an operator in a user interface. In some variations, the feature contributions are reported for each model overall; in other variations, the feature contributions are reported for each sensitive attribute.


In some variations the comparison includes an economic comparison. In some embodiments the economic comparison includes an approval rate comparison, a default rate comparison, and an annual loss projection. In some variations the comparison is disaggregated based on a demographic attribute or a sensitive attribute. In some variations, the comparison is disaggregated based on a user selection of an attribute in a user interface.


S280 functions to provide a user interface (e.g., 115). In some variations, the user interface includes the selection report. In some variations, the user interface is a graphical user interface. In some variations, the user interface is provided by an application server (e.g., 114 shown in FIG. 1C). In some variations, the user interface displays information for each adversarial-trained model trained by the system 110. In some variations, information for an adversarial-trained model includes: model metadata (as described herein), accuracy metrics, fairness metrics, and the like. In some variations, the user interface includes a user-input element for receiving user-selection of at least one of: an adversarial-trained model trained by the system 110; a fairness-versus-accuracy parameter (L); a sensitive attribute; a model to be trained by the adversarial classifier; the model type for the output model; the model type for the model to be trained by the system 110; the model selection criteria; information identifying one or more features that are to be unchanged during adversarial training; model constraint parameters, and any other suitable information.


By virtue of the displayed user interface, an operator of the operator device (e.g., 120) can determine whether an adversarial-trained model satisfies fairness and accuracy requirements, as well as other model constraints and/or business requirements.


In some variations, the user interface displays information for the original model (e.g., model metadata, model type, features used, etc.). In this manner, newly trained models can be compared to the original model. In some variations, the user interface displays features used by the original model and features used by each adversarial-trained model.


In some variations, the system produces reports that document the analysis and model selection process in order to enable compliance with ECOA. In some embodiments the system produces reports that document the analysis and model selection process in order to enable compliance with other regulations including, GDPR, GLBR, FCRA, and other regulations, as required or recommended by the municipal, county, state, regional, national or international levels, without limitation. In some variations, the system produces reports that enable enterprise risk managers, governance bodies, auditors, regulators, judges and juries to assess model risk, the risk of unfair outcomes from the adoption of models, and to audit the process businesses use to measure and mitigate algorithmic bias.


In a first example, the method includes: preparing an initial model F (e.g., at S210), a fair alternative model to the pre-existing model (M), and training F (e.g., S262) based on input variables x and an adversary A (e.g., the adversarial classifier 112), wherein A is a model that predicts the sensitive attribute based on model F’s score, and wherein the alternative model F includes one or more of a linear model, neural network, or any other differentiable model, and wherein the model F is a replacement to the pre-existing model (M).


In a second example, the method includes: preparing an initial model F (e.g., at S210) as a fair alternative to the pre-existing model (M), training F (e.g., at S262) based on input variables x and an adversary A, wherein A is a model that predicts the sensitive attribute based on model F’s score, and wherein the alternative model F includes one or more of a linear model, a neural network, or any other differentiable model; and the new model generated by S260 is an ensemble of F and M, such as a linear combination of the F and M model scores.
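A minimal sketch of this kind of ensemble is shown below, assuming the two models are available as score arrays; the blending weight w is a hypothetical selection parameter.

# Illustrative sketch only: the second example's ensemble, a fixed linear
# blend of the pre-existing model M's score and the fair alternative F's score.
def blended_score(score_m, score_f, w=0.5):
    """Linear combination of the two model scores; w is chosen during selection."""
    return w * score_f + (1.0 - w) * score_m

Choosing w closer to 1 leans on the fair alternative F; choosing w closer to 0 preserves more of the pre-existing model M's behavior.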


In a third example, the method includes: preparing an initial model F (e.g., at S210) as a fair alternative to the pre-existing model (M), training F (e.g., at S262) based on input variables x and an adversary A, wherein A is a model that predicts the sensitive attribute based on model F’s score, wherein the alternative model F includes one or more of a linear model, a neural network, or any other differentiable model, and wherein the new model generated by S260 is an ensemble of F and M, such as a composition of F and M, e.g., F(M(x), x). In some variations, M and F are both differentiable models. In other variations, M and F are piecewise constant. In other variations, M and F are piecewise differentiable. In other variations, M and F are ensembles of differentiable and non-differentiable models. In some variations, the gradient used in adversarial training is computed based on the generalized integrated gradients decomposition method described in U.S. Application No. 16/688,789 (“SYSTEMS AND METHODS FOR DECOMPOSITION OF DIFFERENTIABLE AND NON-DIFFERENTIABLE MODELS”), filed 19-NOV-2019, the contents of which are incorporated by reference herein. In some variations, the gradient used in adversarial training is accessed directly from the model (for example, by accessing the gradient in a neural network).
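The sketch below illustrates one possible shape for such a composition, assuming F is a small PyTorch network (a hypothetical architecture) and M is exposed as a plain callable that may be non-differentiable; it is a sketch of the structure only, not of the training procedure.

# Illustrative sketch only of the third example's composition F(M(x), x):
# F receives the pre-existing model M's score together with the raw inputs.
import torch
import torch.nn as nn

class ComposedModel(nn.Module):
    def __init__(self, pre_existing_model, n_features):
        super().__init__()
        self.m = pre_existing_model   # possibly non-differentiable (e.g., a GBM)
        self.f = nn.Sequential(
            nn.Linear(n_features + 1, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        # M's score is computed outside the autograd graph; gradients through M,
        # if needed, would come from a decomposition method as described above.
        with torch.no_grad():
            m_score = torch.as_tensor(
                self.m(x.detach().cpu().numpy()), dtype=torch.float32
            ).reshape(-1, 1)
        return self.f(torch.cat([m_score, x], dim=1))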


In a fourth example, the method includes: preparing an ensemble model (E) (e.g., at S210), training E (e.g., at S220, S262) based on input variables x, wherein the ensemble model E is at least one of a tree model and a linear model; preparing an adversarial model (A) to be trained to predict the sensitive attribute based on the score generated by E (e.g., at S240); and learning the model F(E(x), x) (e.g., at S260) which maximizes the AUC (Area Under the Curve) of E (or, for regression models, minimizes the MSE (Mean Squared Error) of E) while minimizing the accuracy of A, wherein x are the input features, based on the adversarial training method (e.g., the method 200) described herein.
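One generic way to write the trade-off pursued in this and the following examples, stated here only as an illustrative adversarial-fairness objective rather than the exact loss used at S260, is:

\min_{\theta_F}\;\Big[\,\mathcal{L}_{\mathrm{task}}\big(F(E(x), x;\,\theta_F),\,y\big)\;-\;\lambda\,\mathcal{L}_{\mathrm{adv}}\big(A(F(E(x), x);\,\theta_A),\,z\big)\Big],
\qquad
\min_{\theta_A}\;\mathcal{L}_{\mathrm{adv}}\big(A(F(E(x), x);\,\theta_A),\,z\big)

where y is the prediction target, z is the sensitive attribute, minimizing L_task corresponds to maximizing the AUC (or minimizing the MSE for regression), L_adv is the adversary's loss, and λ plays the role of the fairness-versus-accuracy parameter (L) described above.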


In a fifth example, the method includes: preparing an ensemble model (E) (e.g., at S210), training E (e.g., at S220, S262) based on input variables x, wherein the ensemble model E is at least one of a tree model and a linear model; preparing an adversarial model (A) to be trained to predict the sensitive attribute based on the score generated by E (e.g., at S240); wherein S260 includes (1) learning a third model F(x), wherein F minimizes the accuracy of A and maximizes the AUC of E (or, for regression models, minimizes the MSE (Mean Squared Error) of E), and (2) combining F and E within an ensemble (FE) to produce a model score that is more fair than the pre-existing model (M). In some variations, the ensemble FE is a linear combination (e.g., w*F(x) + (1-w)*E(x)). In some variations, the coefficients of the linear combination FE are determined based on a machine learning model. In some variations, the ensemble FE is computed based on a neural network, including, without limitation: a perceptron, a multilayer perceptron, or a deep neural network. In some embodiments, the gradient required for adversarial training is computed using generalized integrated gradients. In some embodiments, the gradient is retrieved from the model or computed based on the model parameters.
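A minimal sketch of learning the blending coefficients with a ridge regression (one of the options mentioned in the sixth example below) follows; the held-out scores and labels are assumed inputs, and any of the stacking models listed above could be substituted.

# Illustrative sketch only: learning the weights of the ensemble FE from
# held-out scores produced by F and E, rather than fixing them by hand.
import numpy as np
from sklearn.linear_model import Ridge

def fit_stacker(scores_f, scores_e, y_holdout):
    """Learn weights so that FE(x) ~= w1*F(x) + w2*E(x) + intercept."""
    stacked = np.column_stack([scores_f, scores_e])
    return Ridge(alpha=1.0).fit(stacked, y_holdout)

def fe_score(stacker, scores_f, scores_e):
    return stacker.predict(np.column_stack([scores_f, scores_e]))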


In a sixth example, the method includes: preparing an ensemble model (E) (e.g., at S210), training E (e.g., at S220, S262) based on input variables x, wherein the ensemble model E is at least one of a tree model, a neural network model, a linear model, or a combination thereof; preparing an adversarial model (A), a model trained to predict the sensitive attribute based on the score generated by E (e.g., at S240); wherein S260 includes: (1) learning a third model F(x), wherein F minimizes the accuracy of A and maximizes the AUC of E (or, for regression models, minimizes the MSE of E), and (2) combining F and E within an ensemble (FE) to produce a model score that is more fair than the pre-existing model (M). In some variations, the ensemble FE is a linear combination (e.g., w*F(x) + (1-w)*E(x)). In some variations, the coefficients of the linear combination FE are determined based on a machine learning model, including, for example, a ridge regression. In some variations, the ensemble FE is computed based on a neural network, including, without limitation: a perceptron, a multilayer perceptron, or a deep neural network.


In some variations, the initial model (e.g., generated at S210, pre-trained at S220, etc.), or any new model generated at S260, can be any ensemble model which can be separated into two parts: a discontinuous sub-model, d(x), and a continuous model of the form f(x, d(x)) that takes elements of the input space both directly and indirectly through the discontinuous model. In some variations, even if f() is itself continuous and possibly well-behaved, the composition of f() with d() might not be continuous if d() itself is not continuous. Schematics of several such models are shown in FIGS. 3-7. In some embodiments, analysis of the models, including feature importance, is computed by performing a process disclosed in U.S. Application No. 16/688,789 (the generalized integrated gradients method).



FIG. 3 shows a pass-through model in which a collection of base features or “signals” is passed through a gradient boosted tree forest (GBM) and the result of that operation is presented as a score. In some variations, the model f() shown in FIG. 3 can be represented as f(x,y), where f(x,y) is the identity function of y, and y is represented as d(x), which is the gradient boosted tree model. In some variations, f(x,y) itself is well-behaved, as it is just the identity on one variable, but the resulting ensemble model is discontinuous and ill-behaved, at least when considered as a machine learning model.



FIG. 4 shows a pass-through model in which the output of a GBM is subsequently transformed through a “Smoothed approximate ECDF”. An empirical cumulative distribution function (ECDF) is a function which, among other things, transforms the distribution of output values of a function in such a way that the fraction of items with values below a certain level in the ECDF is exactly that level: that is, if E is the ECDF associated with a model function f, then exactly 10% of all inputs will be such that E(f(x)) < 0.1, 20% will be such that E(f(x)) < 0.2, etc. A smoothed approximate ECDF, S, is a function which closely approximates a real ECDF but is continuous and almost everywhere differentiable. That is, almost exactly 10% of all inputs will be such that S(f(x)) < 0.1, 20% will be such that S(f(x)) < 0.2, etc. In some implementations, the ECDFs themselves are not continuous, much less differentiable, but one can build a smooth approximate ECDF which arbitrarily closely approximates the original ECDF by approximating the ECDF with any suitable technique. In some variations, this technique is at least one of: a piecewise linear approximation, a polynomial interpolant, a monotone cubic spline, the output of a general additive model, etc.
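A minimal sketch of one such construction, using a monotone cubic spline (one of the techniques listed above) fit to empirical quantiles, is shown below; the number of knots and the synthetic scores are illustrative assumptions.

# Illustrative sketch only: a smoothed approximate ECDF built from a monotone
# cubic spline (PCHIP) over empirical quantiles of the raw model scores.
import numpy as np
from scipy.interpolate import PchipInterpolator

def smoothed_ecdf(scores, n_knots=50):
    """Return a callable S with S(v) approximately the fraction of scores below v."""
    qs = np.linspace(0.0, 1.0, n_knots)
    knots, idx = np.unique(np.quantile(scores, qs), return_index=True)
    # PCHIP preserves monotonicity, so S behaves like a smooth, almost
    # everywhere differentiable CDF over the observed score range.
    return PchipInterpolator(knots, qs[idx], extrapolate=True)

rng = np.random.default_rng(0)
raw_scores = rng.normal(size=10_000)       # stand-in for GBM outputs d(x)
S = smoothed_ecdf(raw_scores)
print(np.mean(S(raw_scores) < 0.1))        # close to 0.10 by construction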


By composing the output of a GBM with a smoothed ECDF, S, one obtains a model of the form f(x, d(x)) = S(d(x)), which meets the functional requirement for the generalized integrated gradients decomposition method described herein. This modified form is useful because lenders or other underwriters usually wish to approve only a fixed percentage of loans, and such a transformation through a smoothed ECDF makes this possible. The methods described herein, however, are the first methods to correctly provide explanation information for ensemble models of this type.
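As a short illustration of that use, once scores are mapped through a smoothed ECDF S, approving a fixed percentage of applicants reduces to a fixed cutoff on the transformed score (here a hypothetical 30% approval rate, assuming higher scores are better):

# Illustrative sketch only: fixed-percentage approvals via the smoothed ECDF.
target_approval_rate = 0.30   # hypothetical target

def approve(x, d, S):
    # S(d(x)) is (approximately) the percentile of the raw score d(x), so
    # keeping the top 30% means keeping percentiles at or above 0.70.
    return S(d(x)) >= 1.0 - target_approval_rate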



FIG. 5 displays a compound model in which the outputs of three submodels, a GBM, a neural network (NN), and an Extremely Random Forest (ETF), are ensembled together using a simple linear stacking function. Such ensembles provide very powerful machine learning models and are used frequently in practice. Such a model can be presented in the form f(n(x), g(x), e(x)), where f denotes the final linear ensembling function, n denotes the continuous output of the neural network submodel, g denotes the discontinuous output of the GBM, and e denotes the discontinuous output of the ETF. Despite the apparent difference in formalism, such models can be seen to be of the form to which the Generalized Integrated Gradients decomposition method (described in U.S. Application No. 16/688,789, “SYSTEMS AND METHODS FOR DECOMPOSITION OF DIFFERENTIABLE AND NON-DIFFERENTIABLE MODELS”, filed 19-NOV-2019, the contents of which are incorporated by reference herein) applies.
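A minimal sketch of this shape is shown below; nn_model, gbm_model, and etf_model are hypothetical trained submodels exposed as callables, and the stacking weights are placeholders.

# Illustrative sketch only: the FIG. 5 shape f(n(x), g(x), e(x)) with a simple
# linear stacking function over three submodel outputs.
import numpy as np

def stacked_score(x, nn_model, gbm_model, etf_model,
                  weights=(0.4, 0.4, 0.2), bias=0.0):
    """Linear ensembling layer over the three submodel outputs."""
    outputs = np.column_stack([nn_model(x), gbm_model(x), etf_model(x)])
    return outputs @ np.asarray(weights) + bias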



FIG. 6 shows the schematic of a model which combines aspects of the models shown in FIGS. 4 and 5: it contains three submodels, a neural network (NN), a GBM, and an ETF, and a linear ensembling layer, as shown in FIG. 5, but subsequently reprocesses the output of that linear ensembling layer through a Smoothed ECDF. This class of models is useful because it not only achieves the high discriminative power of the model shown in FIG. 5, but also provides the very desirable uniform output properties of a model which produces outputs through a smoothed ECDF, as in the model shown in FIG. 4.



FIG. 7 shows the schematic of a model similar to the model shown in FIG. 6, but replaces the linear stacking layer in the model shown in FIG. 6 with a neural network model. Networks of this form can preserve the representational power and desirable output structure of the model shown in FIG. 6, but can add greater flexibility in their final step. This greater flexibility allows the construction of models which meet specific fairness criteria, provide local confidence estimates, or exhibit combinations of those along with other desirable properties. In some variations, the deep stacking neural network model shown in FIG. 7 can be replaced with any suitable type of continuous machine learning model, such as, for example, a radial basis function layer, a Gaussian mixture, a recurrent neural network, an LSTM, an autoencoder, and the like.



FIG. 8 shows a schematic depicting a simultaneous model training process in which an underwriting model (e.g., “Classifier f”) and an adversary (e.g., “Adversary r”, such as the adversarial classifier 112) are trained together to find fair alternatives. The adversary can have access to protected class information (depicted in the diagram as Z) to perform LDA searches and apply numerical optimization approaches to determine values for one or more (e.g., one, ten, or millions of) parameters of a model (e.g., the adversarial classifier 112).


Input 802, shown as “X”, is training data (e.g., a matrix of rows and columns). Primary model 804 can include a classifier f (e.g., a probability-of-default classifier). The primary model 804 operates according to parameters 806 (e.g., one or more weights affecting one or more layers of a machine learning model of the model 804). The primary model 804 generates model predictions 808 for each instance of input data 802. A loss 810 is generated (e.g., by a computing system) using the parameters 806. An adversarial model 812, e.g., a classifier predicting protected class, obtains the predictions 808 for each instance of input data 802 and can generate a predicted membership for one or more protected classes Z. The model 812 operates according to one or more parameters 814. The model 812 generates one or more predicted likelihoods 816 of protected class membership.


A distribution 818 can be parameterized on the output 816 of the model 812, e.g., the protected class likelihoods. Using the distribution 818, a computing device (e.g., one or more computers that compute the operations shown in FIG. 8) generates likelihoods 822 predicted by the model 812 using the output 808. The likelihoods 822 can include indications of one or more protected classes. The computing device can also generate a loss 820 using the parameters 806 and 814.
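A minimal PyTorch-style sketch of one reading of this loop follows; f and r stand in for the primary model 804 and the adversarial model 812 and are assumed to end in sigmoid outputs whose shapes match the label tensor y and the protected-class tensor z, while lam plays the role of the fairness-versus-accuracy parameter. This is an illustration of the general alternating update, not the exact procedure of FIG. 8.

# Illustrative sketch only: one alternating update of classifier f and
# adversary r. X, y (labels), and z (protected-class indicators) are assumed
# float tensors; opt_f and opt_r are optimizers over f's and r's parameters.
import torch
import torch.nn as nn

def adversarial_step(f, r, opt_f, opt_r, X, y, z, lam=1.0):
    bce = nn.BCELoss()

    # Adversary update: predict protected class from the classifier's score
    # (predictions 808 feeding the adversarial model 812).
    scores = f(X).detach()
    opt_r.zero_grad()
    adv_loss = bce(r(scores), z)            # adversary-side loss (cf. loss 820)
    adv_loss.backward()
    opt_r.step()

    # Classifier update: fit the target while making the adversary's job harder.
    opt_f.zero_grad()
    scores = f(X)
    task_loss = bce(scores, y)              # primary loss (cf. loss 810)
    fairness_penalty = bce(r(scores), z)    # how well the adversary recovers z
    (task_loss - lam * fairness_penalty).backward()
    opt_f.step()
    return task_loss.item(), fairness_penalty.item()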


It will be obvious to one of ordinary skill in the art that there is no limitation on the number or types of the inputs to these models, and that the earlier use of an example function whose domain is a subset of R2 was merely for clarity. It will also be obvious to one of ordinary skill in the art that the presentation of a single layer of discrete machine learning models with outputs being fed into a single ensembling layer is purely for pedagogical clarity; in fact, in some variations of these systems, a complex network of ensemblers can be assembled. Machine learning models of that type are routinely seen performing well in machine learning competitions, and have also been used at Facebook to construct and improve face recognition and identification systems. In some variations, the methods described herein teach the generation of more fair models and the analysis of input feature contributions so that models can be reviewed and used in applications for which fairness outcomes and model drivers must be well understood and scrutinized.


Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein.


As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the disclosed embodiments without departing from the scope of this disclosure defined in the following claims.

Claims
  • 1. A method for explaining one or more first outputs of a trained machine learning model comprising: obtaining data relating to a plurality of potential borrowers; providing the data to the trained machine learning model, the trained machine learning model being trained to predict a credit value for each of the potential borrowers; providing the data to a classifier model, the classifier model being trained to identify one or more sensitive attributes for each of the potential borrowers from the data; obtaining, by the trained machine learning model’s processing of the data, the one or more first outputs from the trained machine learning model, the one or more first outputs indicating the credit value for each of the potential borrowers; obtaining, by the classifier model’s processing of the data, one or more second outputs, the one or more second outputs indicating whether the data indicates any sensitive attributes for each of the potential borrowers; automatically generating a report that explains the one or more first outputs of the trained machine learning model with respect to one or more fairness metrics and one or more accuracy metrics by processing the first output and the second output to determine an impact ratio that indicates a fairness of credit values predicted for a first subset of potential borrowers as compared with a second subset of potential borrowers, wherein borrowers in the first subset of potential borrowers are indicated as having one or more sensitive attributes based on the second output; and providing the automatically generated report for display on a user device.
  • 2. The method of claim 1, wherein the trained machine learning model is a classification model, and wherein the one or more first outputs of the trained machine learning model provide information relating to a prediction as to whether a potential borrower will default on a loan.
  • 3. The method of claim 2, further comprising determining whether to offer the loan to the potential borrower based on the one or more first outputs of the trained machine learning model.
  • 4. The method of claim 1, wherein the trained machine learning model is a regression model, and wherein the one or more first outputs of the trained machine learning model provide information relating to an amount of credit that should be issued to a potential borrower.
  • 5. The method of claim 4, further comprising determining an amount of credit to offer to the potential borrower based on the one or more first outputs of the trained machine learning model.
  • 6. The method of claim 1, wherein the automatically generated report justifies use of the trained machine learning model to inform lending decisions with respect to the one or more fairness metrics and the one or more accuracy metrics.
  • 7. The method of claim 6, wherein the automatically generated report explains an original machine learning model, an adversarial training process, and the trained machine learning model.
  • 8. The method of claim 6, wherein the automatically generated report justifies selection of the trained machine learning model for use in production based on the importance of one or more input variables on one or more first outputs of an original machine learning model and the importance of the one or more input variables to the one or more first outputs of the trained machine learning model.
  • 9. The method of claim 8, wherein the importance of one or more input variables on one or more first outputs of an original machine learning model is determined on the basis of statistical analysis performed with respect to the one or more input variables and the one or more first outputs of the original machine learning model.
  • 10. The method of claim 9, wherein the importance of the one or more input variables to the one or more first outputs of the trained machine learning model is determined on the basis of statistical analysis performed with respect to the one or more input variables and the one or more first outputs of the original machine learning model.
  • 11. The method of claim 10, wherein the automatically generated report comprises: one or more dynamic portions generated on the basis of the one or more first outputs of the trained machine learning model; and one or more static portions.
  • 12. The method of claim 11, wherein the one or more dynamic portions comprise a graphic depicting the one or more first outputs of the original machine learning model, and the one or more first outputs of the trained machine learning model.
  • 13. The method of claim 11, wherein the one or more dynamic portions comprise statistical analysis of the one or more first outputs of the original machine learning model, and the one or more first outputs of the trained machine learning model.
  • 14. The method of claim 13, wherein the one or more fairness metrics relate to algorithmic bias against one or more groups of potential borrowers, and wherein the automatically generated report explains a risk of algorithmic bias with respect to one or more first outputs of the trained machine learning model.
  • 15. The method of claim 14, wherein the automatically generated report explains a risk of algorithmic bias with respect to one or more first outputs of an original machine learning model.
  • 16. The method of claim 15, wherein the automatically generated report explains the risk of algorithmic bias with respect to one or more comparisons between the one or more first outputs of the trained machine learning model and the one or more first outputs of the original machine learning model.
  • 17. The method of claim 16, wherein the one or more comparisons comprise statistical analysis.
  • 18. The method of claim 3, wherein the automatically generated report explains trade-offs with respect to the one or more fairness metrics and the one or more accuracy metrics.
  • 19. The method of claim 18, wherein the trade-offs are explained based on statistical analysis.
  • 20. The method of claim 3, further comprising training an original machine learning model to produce the trained machine learning model, wherein the training comprises: obtaining a training data set indicating one or more sensitive attributes of one or more potential borrowers; providing the training data set to the original machine learning model, wherein the original machine learning model comprises hidden layers and weights indicating connections between the hidden layers; obtaining an output of the original machine learning model based on the original machine learning model’s processing of the training data set; providing the output to an adversarial machine learning model; obtaining a second output of the adversarial machine learning model, wherein the second output indicates a prediction relating to the one or more sensitive attributes; comparing the first output to the second output; and determining, based on comparing the first output to the second output, one or more updated values corresponding to one or more of the weights indicating connections between the hidden layers of the original machine learning model, wherein the trained machine learning model comprises hidden layers and weights with the one or more updated values indicating connections between the hidden layers.
  • 21. The method of claim 20, wherein comparing the first output to the second output comprises generating an error term for a protected population and an error term for another population; and determining a ratio of the error term for the protected population and the error term for the other population.
  • 22. The method of claim 21, wherein the error term for the protected population is determined based on a mean squared error value for the protected population, and wherein the error term for the other population is determined based on a mean squared error value for the other population.
  • 23. The method of claim 1, wherein the report includes a comparison of the trained machine learning model with an original machine learning model from which the trained machine learning model was produced, the comparison indicating variations in accuracy and fairness between the trained machine learning model and the original machine learning model.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 63/242,726, filed Sep. 10, 2021, and U.S. Provisional Application No. 63/248,187, filed Sep. 24, 2021, the contents of which are incorporated by reference herein.

Provisional Applications (2)
Number Date Country
63248187 Sep 2021 US
63242726 Sep 2021 US