The system described herein relates to methods and devices useful in the automation of data mining, statistical analysis, predictive modeling, real-time learning and optimization techniques. More particularly, embodiments of the present system pertain to systems, methods and apparatuses that provide decision-making support by taking incoming raw data as input, applying transformations to the data, building meaningful variables, training predictive models, regularly assessing the performance of the system, creating reports, raising alerts and rebuilding models, and running tests on defined segments and fine-tuning them, with all of this automated to minimize or, in some cases, eliminate human intervention entirely.
The rapidly growing volumes of data pouring in from a wide variety of sources carry valuable information that can be economically extracted. These sources include internet data, data from social networks, mobile data, sensor data and big science data. The data can be converted to actionable insights that contribute to numerous decision-making processes. The size and complexity of data involved in any event that occurs, online or otherwise, has been increasing exponentially. If such an event demands a decision as an output, then to manage the process flow (involving credit or otherwise) with caution and maintain confidence while doing so, the available data needs to be analyzed using efficient, fast, reliable and intelligent predictive models. Business enterprises and technology systems thus require rapid reactions and decision-making processes to keep up with emerging markets. A solution is therefore required that has high volume capability and real-time functionality to pull data from multiple dependable sources.
In addition to the accelerating rate at which data is generated and its increasing complexity, the meaning of this data and its interpretation also change with time. The relationships among data from different, or even the same, sources keep transforming over time. If new types of data are emerging and the meaning of data elements and of the relationships among elements keeps changing, the system must change as well to accommodate these variations. Existing conventional decision support systems are equipped with the fundamental procedure to carry out an event review, fetch the required information and process the request to output a recommendation. However, these systems are not prepared to meet the high demands of a rapidly changing environment. They do not have a provision to pour outcomes into a self-learning system that would adapt and re-write itself according to the most current information available.
It is desirable to transition traditional data mining and storage needs into the realms of real-time computing with the efficiency of modern processing systems. It is also desirable to provide quick, efficient, accurate, up-to-date, smart recommendations to interactive channels by combining analytics, business logic and contact strategies.
A further understanding of the nature and advantages of the present system and method may be realized by reference to the remaining portions of the specification and the drawings wherein reference numerals are used throughout the drawings to refer to similar components.
The adaptive decision system and method may use a heuristic methodology that utilizes predictive modeling techniques such as neural networks, applied using a constant feedback cycle to rapidly learn, classify and adapt to changing patterns, as well as to make decisions fast enough for use in real time. Unlike the standard neural network based approach, this model takes into account the sensitivity of parameters for dealing with uncertainty in decision making. Existing systems do not provide real-time adaptive automated modeling based on behavior with minimal operational upkeep. The real-time automated aspect of this system provides solutions that react instantaneously to events as they occur. The methods used by the system are agnostic to the specific modeling approach/methodology, and as such may be applied to any method wherein models are created from data, including new (yet to be invented) methods for creating predictive models from data. The system and methods described are equally valid with any specific modeling method or combination of modeling methods, including boosting algorithms, Bayesian forests and deep learning.
In accordance with an embodiment of the present system, an apparatus, system, and method are disclosed that provide an adaptive, automated neural network setup for model creation and deployment, which continuously learns from its output and re-creates itself whenever required. This platform amalgamates various processes that can essentially build the decision making system of a process or an entire collection of processes (such as required to operate an entire business), including but not limited to variable generation, model building, optimization and fine-tuning, portfolio performance evaluation, report generation and alert triggering mechanisms, decision logic and strategy design, and deployment in a real time system. This platform can be applied to any type of business that uses data from any event requiring a decision as an output, including (but not limited to) consumer credit, business credit, fraud detection, targeted marketing, transaction monitoring, customer care, pricing optimization, and many others.
According to some embodiments of the present system, the design facilitates examination of millions of events for potential outcomes in order to determine the probability of success and decide the fate of the credit involved (if any) therein based on the current business strategy in real time. This system may take into account variables pertinent to the type of transaction, particularly, variables based on economic environment, employment and income, affordability, and the effect that time has on these and other variables related to the subject matter. In addition, in accordance with an embodiment of the present system, a method is provided for creation of meaningful time dependent variables that can be personalized given the nature of event.
In a further embodiment of the present system, a method is provided that may contribute to the generation of Bayesian adjusted digital filters. For example, each time new information is received by the system, it may be consumed by specific processes (that may be implemented as one or more algorithms) to compute updated values for these variables in real time. These processes may be tailored to achieve particular goals. Thus, the output generated from the processes may be reused as input, in addition to new data, in the next iteration. The current system may employ many transformations to manipulate these variables. These transformations include, but are not limited to, regulating the contribution of data towards certain calculations based on when that data was captured, and controlling the domination of a subset in calculations pertaining to the entire sample space.
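As a minimal sketch of the iterative update pattern described above, the following illustrates how a previously computed variable value can be reused as input and down-weighted by elapsed time before a new observation is folded in. The variable name, the exponential half-life form and the 30-day half-life are assumptions for illustration, not the system's actual update rule.

```python
def decayed_update(prev_value, prev_time, new_obs, new_time, half_life_days=30.0):
    """Fold a new observation into a running, time-decayed variable.

    The previous value is down-weighted by how much time has passed
    (a hypothetical half-life of 30 days), so the output of one
    iteration can be reused as input to the next iteration.
    """
    elapsed = (new_time - prev_time) / 86400.0          # seconds -> days
    decay = 0.5 ** (elapsed / half_life_days)           # older data counts less
    return prev_value * decay + new_obs

# Example: a running, decayed count of applications seen for one entity.
value, last_t = 0.0, 0
for t in (0, 86400 * 10, 86400 * 40):                   # events at day 0, 10, 40
    value = decayed_update(value, last_t, 1.0, t)
    last_t = t
print(round(value, 3))
```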
The system may have a sub-system that may extend the platform to generate post-decision analytics, taking into consideration the decision values and all external data related to the event. This sub-system also enables the management and storage of all the data that flows through the system: the input data, variables created from the raw data, model outputs and computed scores, and decisions. Furthermore, in accordance with an embodiment of the present system, a mechanism may be provided for continuous performance evaluation of the portfolio as a whole in addition to the sub-system processes. This mechanism may monitor distribution shifts, correlation changes and other diagnostic measures, trigger alerts whenever human intervention is necessary, and generate automated reports.
In further embodiments a system is provided for dynamic evaluation, re-building and optimization of the underlying mathematical models, which intelligently iterates based on the current performance and defined thresholds. This sub-system can be further connected to a management module that acts as an alert trigger system, targeting underperforming components and inviting human interaction whenever the automated system goes out of bounds. This sub-system also allows the introduction of new, improved models and modeling techniques via human intervention. In still further embodiments a system is provided for implementation, management and adjustment of test strategies relative to the current business strategy.
In yet further embodiments a system is provided that may automatically take in new inputs (models, variables, rules and challenger specifications) and develop an optimized strategy which leads to increased gains in key business metrics and ensures successful deployment of the formulated rules into the production system.
The system may also include a component that rebuilds the models. If one does not take time into consideration and rebuild the model, the data degrades and so does the model performance. In the past, with other systems, humans had to sift through large volumes of data in order to arrive at educated conclusions, because bad data (corrupted, incorrect or incomplete) had the potential to degrade the performance of the model. These manual methods require intensive human expertise and effort and consume many business resources (including time) which could be spent much more efficiently in other areas.
An embodiment of the present system may utilize various real-time technology components such as workflow systems, knowledge management, data warehousing, data mining and data analysis, and has the ability to make recommendations possible in high volume query environments. The dynamic, (near) optimal, real-time decisions are based on a combination of current and historical data of the subject matter being analyzed for resulting predictive trends. The parameters of those decisions may transition over time, which the model accounts for. Many times, the optimized decision can be made only from predictive values resulting from the trends of analyzed historical data calculations of risk, profit and loss. These decisions are produced from the predictive model, which does not require human interaction.
Traditional systems have only attempted to decrease data latency and have not targeted analysis or action latencies, which have normally been controlled by manual processes. The Adaptive Modeling Platform (AMP) system is designed to maximize the reduction of data, analysis and action latencies. Specifically, this system reduces the time needed to:
Other exemplary and more robust methodologies used by this system provide for threshold detection, triggering of automated actions, alerts resulting from patterns and trends, and feedback into the process.
This system can meet the needs of the modern competitive environment of high consumer expectations and demanding customer relationships, while increasing revenue and maximizing operational efficiencies. Industries requiring technology with the ability to process and analyze increasing volumes of continuously updated data from multiple sources include, but are not limited to, telecommunications, networking, logistics, transportation and government. Areas of application for this system include, but are not limited to:
Reference is now made to
The AMP 100 may comprise a Real Time Decision Engine 101 that may deploy one or more predictive models and decision strategy of the AMP 100. The Real Time Decision Engine 101 may take in raw data, create variables and compute model scores, and based on the current business strategy may provide a real time decision. The variables thus created and the final decisions made may be stored in a Master Data Manager 110, at all times.
In the case of a loan application, the incoming raw data may be the data that the applicant provides, which includes (but is not limited to) his/her identity information, bank account details and debit/credit card details. This raw data may be processed and transformed, creating bin and flag variables, among other mathematical operations and transformations. The data thus generated may be input into the modeling suite, where each model produces a score which might be a predictor of the probability of this application getting approved, a measure of the credit risk involved, or fraud, etc. Data may also be fetched from third party providers and credit bureaus. A real time decision which includes rejection of the loan, further processing or immediate approval may be involved. For example, let the input data consist of the following variables (among other data):
1. first name (say John),
2. last name (say Taylor),
3. monthly income (say $2000) and
4. bank account number.
The data may also be fetched from third parties, which may include (but is not limited to) credit scores, presence on electoral rolls, any records of bankruptcy, etc. This data is subjected to transformations that convert it into processed information for the purpose of extracting maximal predictive power. Some examples of transformations on the income variable may include (but are not limited to):
In the above example, John gets the value of the flag variable on income = 0 and the flag variable on bank account number = 0, and falls in income bin 2. In this example, the lending decision may be based on one of the third party credit scores, the flag variable on bank account number, and profit model scores built by the system. The flag variable on bank account number acts as a fraud alert, triggering when the same bank account number is seen for a different customer. Such a customer may not get rejected, but may be treated with caution. Let the raw and derived variables be given as input to a model that predicts profit. If the combination of the two scores and the flag on bank account number does not pass the predetermined threshold, the customer may be either rejected or evaluated further. Given that the fraud alert is not triggered, suppose John gets a high profit score but a low third party credit score and the combination does not pass the threshold of the decision logic in place; he is then subjected to further analysis. This decision is recorded by the system.
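The following is a minimal sketch of the kind of flag and bin derivations described in this example. The bin edges and flag thresholds are hypothetical, chosen only so that a $2,000 monthly income falls into bin 2 and both flags are 0, as in John's case; they are not the system's actual specifications.

```python
def derive_income_variables(monthly_income, bank_account, seen_accounts):
    """Illustrative flag/bin derivations on a loan application.

    The bin edges and flag rules are hypothetical examples of the
    transformations described above, not the system's actual ones.
    """
    # Flag: 1 if the stated income is missing or implausibly low.
    income_flag = 1 if monthly_income is None or monthly_income < 500 else 0

    # Bin: coarse income band (hypothetical edges at 1000, 3000, 6000).
    edges = [1000, 3000, 6000]
    income_bin = sum(1 for e in edges if monthly_income >= e) + 1 if monthly_income else 0

    # Flag: 1 if this bank account was already seen for another customer.
    account_flag = 1 if bank_account in seen_accounts else 0

    return {"income_flag": income_flag, "income_bin": income_bin,
            "account_flag": account_flag}

# John: $2000/month, previously unseen bank account.
print(derive_income_variables(2000, "GB29-0001", seen_accounts=set()))
# -> {'income_flag': 0, 'income_bin': 2, 'account_flag': 0}
```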
The system also may include a Near Real Time Processor 102 that may capture post-decision data and apply transformations to it. In a lending environment, the post-decision data may include (but is not limited to) data collected from customer surveys and call center conversations. Some lenders may collect additional data after a provisional decision is provided to the customer. This additional data may include (but is not limited to) data captured from bank statements, utility bills, employer data, etc. For example, a user, John, may be asked for his bank statement or pay slips via the customer care center, which may be used to confirm his monthly income and thus establish trust in the customer.
The data from the Near Real Time Processor 102 may be synced to a Master Data Manager 110. This data is used as an input to the models. Since the data is always fetched from the Master Data Manager whenever a new model is built or an existing one is rebuilt, or the performance of a certain business metric has to be evaluated, this data needs to be synced into the Data Manager. The Master Data Manager 110 may be a master repository of all system data. The system also may have an Automated Report Generation Module 103 that takes data from the Master Data Manager 110 and applies trigger alerts and distribution shift alarms, along with reports on the performance of key business metrics. In a lending environment, for example, there is some expected minimum number of loan applications, percentage of approvals, maximum risk cutoffs, fraud thresholds, etc. that should be satisfied. If any of these are not met, an alert is triggered by this system to notify for human intervention.
The AMP 100 also may include a Variable Generation Module 104 that maintains variables by computing them from the transactional data and also handles creation of new variables from the transactional data using specifications provided by human input. The specifications in the case of a lending business may include (but are not limited to) building flags, bins, and mathematical and logical transformations of raw data to extract more valuable information out of the available data, which can be input into the modeling module. As in the above example, flag and bin transformations are performed on the income and bank account number data obtained from a user's loan application. The data created and used here may be synchronized with the Master Data Manager 110. The AMP 100 also may include a Model Building Module 105 that may automatically build predictive models as per the specifications from an external human source. It also re-builds models that have been previously deployed, as and when triggered by a Model Management Module 106. Specifications in the case of a lending business will include (but are not limited to) percentages of data to be considered in the development sample, modeling techniques to be used, and other restrictions/conditions that the data should follow/satisfy. The performance of the existing models may deteriorate with time, and a model may not validate as well on the incoming data as it did on the training data. This system allows for the re-build of models at regular, alert-triggered or human-initiated intervals. For example, when an initial profit model (say M1) from the above example was built, the number of transactions used as input was 10. Now, with the lending business running, the number of transactions has increased to, say, 100, and thus a second version of the profit model (say M2) may be built. When John's application is received, suppose M1 predicts his profit to be $100 and M2 predicts his profit to be $110. The actual profit generated by John is $109. Since M2 produces better estimates of the derived variable, this new improved model would be incorporated into the lending business.
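A minimal sketch of the M1-versus-M2 comparison described above follows. Mean absolute error against realized profit is assumed here as the comparison metric; the platform could equally use a different accuracy or business-weighted measure.

```python
def mean_abs_error(predictions, actuals):
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

def pick_champion(m1_preds, m2_preds, actual_profits):
    """Keep whichever profit model tracks realized profit more closely."""
    m1_err = mean_abs_error(m1_preds, actual_profits)
    m2_err = mean_abs_error(m2_preds, actual_profits)
    return ("M2", m2_err) if m2_err < m1_err else ("M1", m1_err)

# John's case from the text: M1 predicts $100, M2 predicts $110, actual $109.
print(pick_champion([100.0], [110.0], [109.0]))   # -> ('M2', 1.0)
```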
The Model Management Module 106 may constantly monitor the performance levels of the models and business metrics with each new transaction. A (new) transaction or event may be a request for, or an exchange or transfer of, a tangible product or service for an asset/money/payment/promise of payment between one or more parties, for example a loan application from a customer to a lender, loan acceptance by the lender, or a transfer of money. Another example may be a credit/debit card transaction for the purchase of goods, acceptance of customer details by the retailer and delivery of the purchased entity. If the performance of any of the deployed models or business metrics falls below a predefined threshold, the module sends a request to the Model Building Module 105 to rebuild the underperforming model(s), or to a Decision Logic Module 108 to improve the corresponding decision logic. For example, in a lending business, a lending decision may be made by making use of a set of models. The repayment behavior of the borrowers from the latest vintages may be compared with the predictions of the models, and if the performance parameters show a drop from estimated values, then the Model Management Module may trigger a request to the Model Building Module to rebuild the models that are underperforming. The decision logic might depend upon the underperforming models or be affected directly by the new data. The decision logic may be modified so that the thresholds fit the expected values.
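The following sketch illustrates the threshold-based trigger logic just described. The class, the specific thresholds and the queue-based interface to the Model Building Module are illustrative assumptions, not the module's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class ModelMonitor:
    """Hypothetical sketch of the Model Management Module's trigger logic."""
    thresholds: dict                      # model name -> minimum acceptable performance
    rebuild_queue: list = field(default_factory=list)

    def record_performance(self, model_name, performance):
        # Compare observed outcome performance with the floor defined for
        # this model; queue a rebuild request when the floor is breached.
        if performance < self.thresholds.get(model_name, float("-inf")):
            self.rebuild_queue.append(model_name)
            return "rebuild_requested"
        return "ok"

monitor = ModelMonitor(thresholds={"profit_model": 0.60, "risk_model": 0.55})
print(monitor.record_performance("profit_model", 0.52))   # rebuild_requested
print(monitor.record_performance("risk_model", 0.61))     # ok
```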
In the above lending example, where the profit model was rebuilt, the evaluation of the performance of the models and the trigger for the rebuild are given by the Model Management Module. The decision logic may also be revisited to check for any scope of optimization. With a modification of the models and other business rules, the system may generate a completely different score combination and decision logic. In this case, since only one model is considered, for the sake of simplicity let model M2 replace model M1. The decision logic (comprising the profit model score, a third party credit score and a fraud rule) is altered accordingly and the new thresholds are computed. Both the Model Building Module 105 and the Model Management Module 106 use data from the Master Data Manager 110.
The AMP 100 also may have a Champion/Challenger Engine 107 that may evaluate the performance of the various tests that are run on different segments of the portfolio and, along with the Model Management Module 106, forms the core of the system that determines what needs to go into the business decision strategy for the Real Time Decision Engine 101. In a lending business, from time to time, new models may be built, improved strategies and algorithms may be developed, third party services may get updated, and improved versions may have to be put into use for better results and more accurate estimates. In such cases, a test may be performed on a small sample of the incoming applications, and the performance of this new service/data may thus be evaluated. When this is done using the Champion/Challenger Engine 107, everything from optimizing the percentage of traffic that goes into production for the test to determining whether the test is successful is handled by the engine, so that deployment of the new service/data to the entire incoming volume requires minimal or no human interaction. For example, if a third party whose credit score is being used in the decision logic (in the above lending example) releases a new version of its credit data, before incorporating the new version the lender may want to test its performance on a small percentage of his portfolio. Given a few specifications, this is handled automatically by the system. The system may test the new version, compare its performance with the old version and generate an optimized strategy.
The AMP 100 may further include a Score Combination and Decision Logic Module 108 that may combine each of the model scores into decision rules, undertake a thorough profit optimization, and give a final verdict as to whether each particular rule should be deployed into the Real Time Decision Engine 101 or discarded. In a lending environment, the decision of whether an application for a loan should be approved may depend upon multiple factors including (but not limited to) probability of conversion, risk estimates, fraudulent intent and profit analysis. Each of these factors may be computed using various models, and the model scores may be viewed in conjunction to arrive at the final decision of whether an application should be rejected or approved. The Score Combination and Decision Logic Module may put together these tasks of combining scores from various models and decision strategies, and their deployment.
The AMP 100 also may have a Pre-production Testing Engine 109. Thus, before deployment of the variables, models and decision logic, the Pre-production Testing Engine 109 may act as a staging environment to test and identify any issues with the prospective deployment into the real time decision system. After testing is completed within Pre-Production Testing Engine 109, the decision logic specifications obtained as an end result of computation by Score Combination and Decision Logic Module 108 along with the corresponding predictive model, challenger and variable specifications, are deployed into the Real Time Decision Engine 101, completing the feedback cycle. The cycle usually repeats when sufficient new data is available. In some embodiments, the cycle repeats with each new transaction. In the AMP 100 shown in
The Pre-production Testing Engine 109 of the AMP 100 shown in
The Events 202 may be a stream of incoming transactions on which the decision strategies (typically created using predictive models) are applied and a decision is made (usually in real time). Each event may be provided to the system in xml feeds (although each event may be provided in other formats and data formats that are within the scope of the disclosure) and may consist, for example, of application data, third party data, post-decision data or simply data triggered by the passage of time. In an embodiment of the system used for consumer credit, each Event 202 may be an incoming loan application, with each one referred to as a single transaction. The Real Time Decision Engine 101 provides a decision during each transaction to either approve or decline credit. In an embodiment of the system used for fraud detection, Events 202 typically correspond to credit or debit card transactions. The Real Time Decision Engine 101 then performs its processes to classify a given transaction as fraudulent or not.
Each event is captured in the Data Pre-processing module 204. The variables may be created using this information along with data received from the Production Data Mart 200, as per the specifications in the Variable Creation module 206. The Real Time Decision Engine 101 may have a Production Data Mart 200 that may send input to, as well as receive input from, the Master Data Manager 110 (part of the Near Real Time Processor 102). The resulting variables are fed into the Model Execution module 208, and the execution of the decision logic 230 triggers a process that uses the various variables and models computed to decide the fate of any given transaction—e.g., in the case of a lending business, the decision could be whether to approve the loan application or reject it. It also triggers a data process that updates the various profiles and other tables to reflect the processing of the current lead. For the lending example mentioned above, this module may operate by taking in data as input from the transaction (first name, last name, monthly income, bank account number, third party data), cleaning it up and building derived variables on this raw data to extract more value (flag and bin variables on income and bank account data), building models and computing model scores (profit model), and providing the decision as to whether an application is approved or not.
In the Near Real Time Processor 102, offline decision outcomes data 304 and data from other additional external sources 302 enter a Data Pre-Processing module 242, where complex variable computations, which include (but are not limited to) dynamic risk computations, are carried out. External sources may include (but are not limited to) data collected from social networks. The data then may be synced with the information received from the Variable Generation Module 104, which contains dependent variables (DVs) and independent variables (IDVs) in the form of definitions, exclusions, missing value treatment, segmentation, maturity conditions, and sampling parameters. A Data Processing module 240, part of the Real Time Decision Engine 101, updates various profiles which become a part of the input data and provides Post-Approval data which is also fed into the Data Pre-Processing 242. Using the data from these sources, it computes transformations of these data based on the specifications in the Independent Variable specifications (IDV.spec) file. The various profile variables and tables are updated as part of this processing and this data is stored permanently in the Master Data Manager 110.
The Near Real Time Processor 102 may also enable the generation of various types of reports on a continuous basis in a Reports Data Mart 306. The Master Data Manager 110 provides data to the Automated Reports Generation Module 103, which computes various business performance metrics concerning, but not limited to, profit, bad debt, acquisition, model performance and champion/challenger performance. In the system, alarms and alerts are triggered based on comparative analysis against data from a previous day, previous month, previous year, previous market conditions or any such conditions specified. This module constantly measures correlations between different independent variables and between independent variables and dependent variables, applies the Kolmogorov-Smirnov statistical distribution test across all variables to identify significant distributional shifts over various time intervals, and monitors other such diagnostic metrics. If an anomaly is found, this module automatically triggers corrective action, invoking the Model Management Module 106 (which then triggers model rebuilds for affected variables). A case is also flagged for human intervention. The human operator may intervene whenever he/she so desires. This module also sends data to and receives data from the Modeling Data Mart 308.
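A minimal sketch of the Kolmogorov-Smirnov shift check follows, using SciPy's two-sample KS test. The baseline/current window split, the 0.05 significance level and the toy income values are assumptions for illustration.

```python
from scipy.stats import ks_2samp

def shifted(baseline_values, current_values, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov check for a distributional shift.

    Returns True when the current window of a variable differs
    significantly from the baseline window (e.g. the previous month).
    """
    statistic, p_value = ks_2samp(baseline_values, current_values)
    return p_value < alpha

# Toy check: the current income distribution has drifted well above baseline.
baseline = [1500, 1800, 2000, 2200, 2500, 2700, 3000, 3100]
current = [3500, 3800, 4000, 4200, 4500, 4700, 5000, 5100]
if shifted(baseline, current):
    print("distribution shift alert: flag for Model Management Module review")
```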
In an embodiment of the AMP system 100 used for consumer credit, the pre-decision data retrieved by the Real Time Decision Engine 101 usually includes (but is not limited to) the consumer's loan application data covering their income among other details, as well as any external credit bureau data (covering an applicant's customer history). The post-decision data (retrieved from the Near Real Time Processor 102) can include an applicant's reaction to the decision, take-up status of the offer, and more. Outcomes data 304 in this embodiment generally consists of the performance of previous loans in the portfolio. The Automated Reports Generation Module 103 covers portfolio performance metrics like profit, bad debt, conversion and more. All the data is consolidated by the Master Data Manager 110 and made available to other modules.
In an embodiment of the AMP system used for fraud detection, pre-decision data can typically include (but is not limited to) transaction details such as the card transaction amount, geographic details, merchant details and more. Outcomes data 304 in this embodiment typically consists of known fraud/not-fraud states of previous transactions processed by the system. External sources may also have consolidated data on known fraudulent transactions. All the data is consolidated by the Master Data Manager 110 and made available to other modules.
The Variable Generation Module 104 also may be responsible for creating a special class of variables that may be computed by applying Bayesian adjustments to digital filter functions (referred to in this document as Bayesian adjusted digital filters). These variables are usually computed on specific entities, which together represent the data. An entity can be any of (but is not limited to) the following: consumer, applicant, bank, state, city, postal code, IP address; in general, any defined classification of data can be an entity. In an embodiment of the AMP system, the Bayesian adjusted digital filters are computed in the following manner: let Ei (i=1, 2, 3, . . . ) and ti (i=1, 2, 3, . . . ) denote a series of events E, and their corresponding timestamps t, representing an entity present in the data (for example, if the entity is a bank, these events could be historical loan applications involving customers who have accounts with that particular bank). Let DF(E) be a digital filter function applied to the entity.
For events (E1, E2, . . . , En) and timestamps (t1, t2, . . . , tn) where t is the current time:
where f(Ei) is an objective function computed on each event (e.g., f(Ei) could represent the current status of the loan application Ei), and λ is a positive constant representing the "half-life" of the digital filter function. The Bayesian adjusted digital filter is computed, for example, by modifying the digital filter function with Bayesian prior estimates:
k is referred to as the Bayesian constant, and typically takes a positive integer value. The prior estimate is typically computed from other instances of entities in the data (e.g., in the case where the entity is a bank, the prior estimate can be computed from the data comprising the loan applications covering the set of all banks). Other ways of obtaining the prior estimate include (but are not limited to) computing the average from all data and computing the average from a subset of data comprising preselected entities. The prior estimate can also be computed as a digital filter function applied to the entity class (e.g., if the entity is a bank, the prior estimate can be a digital filter function on loan applications covering a subset of banks).
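Since the equations themselves are not reproduced above, the following sketch assumes a common functional form consistent with the surrounding description: an exponentially decayed (half-life λ) weighted average of f(Ei) for the entity, blended with a prior estimate through the Bayesian constant k. The exact blend, the example entity and the numeric values are assumptions for illustration only.

```python
def bayesian_adjusted_digital_filter(events, now, prior, half_life, k):
    """Sketch of a Bayesian adjusted digital filter for one entity.

    `events` is a list of (timestamp, f_value) pairs for the entity.
    A time-decayed weighted average of f is blended with a prior
    estimate using the Bayesian constant k; this particular blend is
    an assumed form, since the original equations are not shown here.
    """
    weights = [0.5 ** ((now - t) / half_life) for t, _ in events]
    weighted_sum = sum(w * f for w, (_, f) in zip(weights, events))
    weight_total = sum(weights)
    # Entities with little (or stale) history are pulled toward the prior.
    return (weighted_sum + k * prior) / (weight_total + k)

# Entity = a bank; f(Ei) = 1 if the historical loan defaulted, else 0.
bank_events = [(10.0, 0.0), (40.0, 1.0), (55.0, 0.0)]    # (day, outcome)
portfolio_prior = 0.08                                    # default rate across all banks
print(round(bayesian_adjusted_digital_filter(bank_events, now=60.0,
                                              prior=portfolio_prior,
                                              half_life=30.0, k=5), 4))
```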
Multiple types of entities can be combined to compute second (and higher) order Bayesian adjusted digital filters. In this case, the prior estimate for each entity can come from its corresponding class in the data (e.g., an entity can be a combination of bank and postal code, where the prior estimate for the bank is computed from loan application data covering a subset of banks and the prior estimate for the postal code is computed from loan application data covering a subset of postal codes).
The objective function used in these variables can also be a function of “outcome” data. These include, but are not restricted to, loan outcomes, fraud, income from loan, profitability, cross sell, conversion, propensity to reactivate, tenure of relationship, sale price (of products), revenue and so on. In these instances, the variables act as (near) real time feedback inputs for the predictive models (and the AMP system in general).
In an embodiment of the AMP system used for consumer credit, the Variable Generation Module 104 maintains the library of variables used as inputs for modeling risk, profitability, conversion, among other metrics. For example, the variables may include (but are not limited to) raw and derived variables from credit history data and identity data of the applicant. In an embodiment of the AMP system used for Fraud detection, the Variable Generation Module 104 maintains the library of variables used as inputs for building fraud detection models, and to simulate business impact due to fraud (before and after improved detection). Examples of variables used for fraud detection may include (but are not limited to) frequency variable on debit/credit card, postcode, bank account number, IP address and device identity variables.
In an embodiment of the AMP system used for consumer credit, the Model Building Module 105 is used to build key predictive models on important dependent variables including (but not limited to) risk, conversion, income, profitability, take-up, fraud and identity, and term, where each of the dependent variables can have multiple instances (usually based on timeline). In an embodiment of the AMP system used for fraud detection, the Model Building Module 105 is used to build predictive models on several variations of the fraud/not-fraud transactional dependent variable. These models, when applied to a lending environment, may predict whether a transaction is likely to go through with approval, whether it will be higher in risk, and whether it will be profitable or fraudulent in nature. In the lending example for evaluating John's loan application, the profit model is built to estimate the profit earned from each transaction.
The Champion-Challenger Engine 107 may also enable the manual specification of challenger segments. These alternate strategies (the challenger segments) are tried on a random subset of the business transactions and their performance is rated against the current champion business strategy. This module automatically adjusts the subset percentages based on the performance of the implemented challenger. This module also may evaluate how the predictive models perform in the different challenger segments, and generates rebuild triggers for the Model Management Module if the performance falls below expected or specified thresholds. After a sufficient number of transactions (which varies by business product and the specific challenger being tested), a successful challenger strategy becomes part of the current business strategy in production; otherwise it is either modified further or flagged for human intervention.
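The sketch below illustrates the traffic split and share adjustment just described. The initial 5% challenger share, the promotion rule and the 100-outcome maturity requirement are illustrative assumptions, not the engine's actual parameters.

```python
import random

class ChampionChallenger:
    """Illustrative traffic split between a champion and a challenger strategy."""

    def __init__(self, challenger_share=0.05, seed=7):
        self.challenger_share = challenger_share
        self.rng = random.Random(seed)
        self.results = {"champion": [], "challenger": []}

    def route(self, transaction):
        # Randomly send a small share of transactions to the challenger.
        return "challenger" if self.rng.random() < self.challenger_share else "champion"

    def record(self, arm, outcome_profit):
        self.results[arm].append(outcome_profit)

    def adjust_share(self, min_obs=100, step=0.05, cap=0.50):
        # Hypothetical rule: grow the challenger's share while it outperforms
        # the champion on average outcome, up to a cap, once enough outcomes
        # have matured on both arms.
        ch, cp = self.results["challenger"], self.results["champion"]
        if len(ch) >= min_obs and len(cp) >= min_obs:
            if sum(ch) / len(ch) > sum(cp) / len(cp):
                self.challenger_share = min(cap, self.challenger_share + step)
        return self.challenger_share

cc = ChampionChallenger()
print(cc.route({"loan_id": 1}))   # 'champion' or 'challenger'
```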
In an embodiment of the AMP system used for consumer credit, the Decision Logic Module 108 creates a business strategy based on constrained optimization where the objective function is (to maximize) Total Profit from the portfolio. Constraints typically are in the form of available lending capital and reserve requirements, and specific conditions on metrics related to bad debt and unit profit (i.e., individual loan or customer level profit); the inputs are the different predictive models built for each aspect of the portfolio (risk, conversion, profitability, income, etc.). The business strategy typically ends up as a set of mathematical rules derived from the models that are applied to each transaction. The derived business strategy, along with the underlying predictive models and variables, is then passed on to the Pre-Production Testing Engine 109, to be eventually deployed into the Real Time Decision Engine 101.
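As a simplified, one-dimensional sketch of this constrained optimization, the code below searches for a single approval score cutoff that maximizes expected portfolio profit subject to a lending-capital constraint. In practice the optimization spans several models and constraints; the field names and values here are illustrative assumptions.

```python
def choose_cutoff(applications, capital_limit):
    """Pick the score cutoff that maximizes expected total profit.

    `applications` is a list of dicts with a model 'score', a loan
    'amount' and an 'expected_profit'; only cutoffs whose approved
    loan amounts fit within the capital limit are considered.
    """
    candidate_cutoffs = sorted({a["score"] for a in applications})
    best = (None, float("-inf"))
    for cutoff in candidate_cutoffs:
        approved = [a for a in applications if a["score"] >= cutoff]
        capital_used = sum(a["amount"] for a in approved)
        if capital_used > capital_limit:
            continue                      # violates the lending-capital constraint
        profit = sum(a["expected_profit"] for a in approved)
        if profit > best[1]:
            best = (cutoff, profit)
    return best

apps = [{"score": 0.9, "amount": 500, "expected_profit": 80},
        {"score": 0.7, "amount": 700, "expected_profit": 40},
        {"score": 0.4, "amount": 600, "expected_profit": -30}]
print(choose_cutoff(apps, capital_limit=1300))    # -> (0.7, 120)
```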
In an embodiment of the AMP system used for fraud detection, the Decision Logic Module 108 creates a business strategy based on constrained optimization where the usual objective function is (to minimize) Loss from fraudulent transactions. Constraints typically are in the form of the (maximum allowed) number of false positives per successful detection of a fraudulent transaction, and the maximum decline rate. The business strategy typically ends up as a set of mathematical rules derived from the models that are applied to each transaction in real time. The derived business strategy, along with the underlying predictive models and variables, is then passed on to the Pre-Production Testing Engine 109, to be eventually deployed into the Real Time Decision Engine 101.
Each of the components in
Each of the components in
While the foregoing has been with reference to a particular embodiment of the invention, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the disclosure, the scope of which is defined by the appended claims.
This application claims the benefit under 35 U.S.C. 119(e) of U.S. Provisional Patent Application Ser. No. 61/899,808, filed on Nov. 4, 2013 and entitled "Real-Time Adaptive Decision System and Method Using Predictive Modeling", the entirety of which is incorporated herein by reference.