There has been a surge in the use of Machine Learning (ML) algorithms to automate various facets of scientific, business, and social workflows. While Artificial Intelligence (AI) systems are developed and built to make decisions autonomously without prescribed rules, the large number of parameters in models such as Deep Neural Networks (DNNs) makes them complex to understand and undeniably harder to interpret. Systems whose decisions cannot be well interpreted are difficult to trust, especially in sectors such as finance, banking, healthcare, and the like.
AI systems have been utilized for anomaly detection, which has wide applications across industries. For instance, in the Internet of Things (IoT) industry, AI systems have been utilized to detect anomalies in individual IoT devices and/or the interconnections between IoT devices; these anomalies may comprise device failure, overheating, abnormal energy consumption, security breaches, etc. In the cybersecurity industry, AI systems have been utilized to detect anomalies in network traffic, user behavior, and software applications; these anomalies may comprise malware, intrusions, phishing attacks, insider threats, etc. In the telecommunications industry, AI systems have been utilized to detect anomalies in network traffic, customer behavior, and network infrastructure, among other areas; these anomalies may comprise network congestion, network failures, fraudulent activity, station failure, etc. In the industrial facilities industry, AI systems have been utilized to detect anomalies in production lines, equipment performance, and worker safety, among other areas. In healthcare, AI systems have been utilized to detect fraudulent insurance claims and payments, and in finance, AI systems have been utilized to find patterns of fraudulent purchases. In the banking field, AI systems can be utilized to detect financial crimes such as money laundering activity, which appears as an outlier to normal patterns of deposits into an account holder's account. Money laundering is the process of making large amounts of money obtained from crimes, such as drug trafficking, appear to originate from a legitimate source. It is a crime in many jurisdictions. Financial institutions and other regulated entities have established anti-money laundering (AML) regimes to prevent, detect, and report money laundering activities. An effective AML program requires financial institutions to be equipped with tools to investigate suspicious activities, identify their customers, establish risk-based controls, keep records, and report suspicious activities. In recent years, machine learning-based transaction monitoring systems have been successfully used to complement traditional rule-based systems, reducing the high number of false positives and the effort needed to manually review all the generated alerts.
Using artificial intelligence (AI) to aid in anomaly detection, such as detection of IoT system and/or device failures or malfunctions, suspicious data packets in network traffic, equipment failures, or anti-money laundering (AML) detection, may provide a number of advantages compared to a traditional AML system, such as easier implementation, reduced administrative complexity, fewer false positives, etc. However, in certain industries or fields, in addition to anomaly detection, it is also desirable to provide an explanation about what contributed to a detected anomaly, in order to increase reliability of and trust in the AI system. For example, in the financial industry, especially in AML, explainable AI is desirable due to, without limitation, regulatory reasons (e.g., the authorities would want to know what the suspicious activities are, and why an activity is suspicious). Providing a human-comprehensible explanation may also enable financial investigators to quickly ascertain the attributes, severity, and tendencies of the suspicious activities and take actions accordingly.
Conventional AI algorithms used in AML regimes may provide predictions/detections of suspicious activities. However, these conventional AI algorithms do not provide sufficient explanations for the output indicative of the predictions of suspicious activities. This may cause the information receivers (e.g., tech experts, maintenance teams, hardware and software engineers, investigators, regulators, law enforcement personnel, etc.) to be skeptical about the output provided by these AI algorithms. Therefore, the value conveyed by these AI algorithms may be reduced or diminished because of the reasonable skepticism from the investigators and regulators.
The field of explainable AI is focused on the understanding and interpretation of the behavior of AI systems. However, current explainable AI techniques may not be capable of providing sufficient explanation. For instance, current explainable AI models may provide interpretability, which is mostly connected with the intuition behind the outputs of a model, the idea being that the more interpretable a machine learning system is, the easier it is to identify cause-and-effect relationships between the system's inputs and outputs. However, such an interpretable model may not translate to one whose internal logic or underlying processes humans are able to understand. For example, some conventional methods, such as SHapley Additive exPlanations (SHAP), may provide a certain degree of explanation of the output of an AI algorithm. SHAP is a game-theoretic approach to explaining the output of a machine learning (ML) model. It may connect optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions. However, these explanations may only indicate what features contribute to a detection of suspicious activities, i.e., using credit allocation with local explanations. They do not provide the reason(s) why this/these feature(s) have led to the detection of suspicious activities. In a way, this is similar to providing medical lab results without providing a reference range for the vital signs; investigators and regulators would not put trust in such incomplete explanations.
This problem intensifies with the complexity of the traits of financial activities in different businesses, industries, and countries, i.e., there are no universal reference ranges for all financial activities. The reference range depends largely on the industry, business model, and other factors. Financial datasets often consist of high-dimensional feature spaces that are difficult to inspect. Being an outlier does not necessarily imply that a particular application is fraudulent. Thus, it is crucial to be able not only to evaluate an instance given its anomaly score but also to understand the drivers behind the model decision. Therefore, a sophisticated, human-comprehensible explanation along with the AI algorithms is desired in the financial industry.
Recognized herein is the need to provide highly sophisticated explanations of the results/outputs of AI algorithms in AML programs. The explanations may not only indicate what features led to an anomaly (i.e., a detection of suspicious activities), but may also provide the reason(s) why these features have led to the anomaly. Unlike conventional explainable AI methods such as SHAP explanations, methods and systems herein may provide a unique local feature importance algorithm (e.g., Expectation Surface) that not only returns what features contributed to the anomalousness of a datapoint, but additionally conveys what the expected value range of a feature would have been, leading to improved explainability.
In some embodiments, the improved explanation may be based on an expected value range of a feature, generated by the local feature importance algorithm (e.g., Expectation Surface). The expectation surfaces may be capable of taking higher-dimensional correlations of features into account without compromising the computational efficiency of the algorithm. In some embodiments, the methods herein may provide a novel local feature importance algorithm for an anomaly detection model such as the Isolation Forest (iForest) model. Such methods may be implemented in various applications such as cybersecurity software, factory management systems, IoT system and device mapping and management mechanisms, transaction monitoring software, anti-money laundering mechanisms, and other crime or fraud detection systems. For example, an expected value or value range, by means of an expectation surface(s), may be provided in the context of a particular data packet of the network traffic, a group of data packets, a certain type of data packet, etc. Additionally, the methods and systems herein may be capable of providing improved explanations with sufficient accuracy and robustness for anomaly detection.
The present disclosure provides a computer-implemented method for providing explanations for AI algorithm outputs. The method comprises: (a) receiving transaction log data; (b) identifying anomalous transactions based at least in part on the transaction log data; (c) generating an expectation surface for one or more anomalous transactions; and (d) generating explanations for the anomalous transactions based at least in part on the expectation surface.
The system/method described above can provide sophisticated explanations of the output of AI algorithms. By structuring the algorithms to provide an expectation surface, the methods and systems described herein also provide an expected value or value range for one or more features. The expected value may provide the information receivers (e.g., investigators, regulators, law enforcement personnel, etc.) with insights into why these transactions are marked anomalous, so as to facilitate investigation activities. Additionally, the algorithms herein are structured to provide improved algorithmic complexity, runtime performance, and memory efficiency. Although the method and system are described in the context of detecting money laundering and risk behavior analysis, it should be noted that methods and systems herein can be utilized in a wide range of fields that may involve any type of anomaly detection, risk assessment, behavior analytics, and the like.
In an aspect, a computer-implemented method for providing explainable anomaly detection is provided. The method comprises: (a) generating a set of input features by processing an input data packet related to one or more transactions; (b) predicting, using a model trained using a machine learning algorithm, an anomaly score for each of the one or more transactions by processing the set of input features; (c) computing an expectation surface for at least a subset of features from the set of input features; and (d) generating, based at least in part on the expectation surface, an output comprising i) a detection of an anomalous transaction from the one or more transactions, ii) one or more factors attributed to the anomalous transaction, and iii) an expected value range for the one or more factors.
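The following Python sketch is illustrative only and is not the claimed implementation: the featurize step, the feature set, and the quantile-based range extraction are assumptions, with scikit-learn's IsolationForest standing in for the trained model of steps (a)-(d).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def featurize(transactions):
    """(a) Turn raw transaction records into a numeric feature matrix."""
    return np.array([[t["amount"], t["hour"], t["counterparty_share"]]
                     for t in transactions])

def expectation_surface(model, x, feature_idx, grid):
    """(c) One-dimensional expectation surface for a single feature, all
    other features held constant; sklearn's score_samples already returns
    a negated (inverted) anomaly score, so higher = more expected."""
    probes = np.tile(x, (len(grid), 1))
    probes[:, feature_idx] = grid
    return model.score_samples(probes)

# Historical, mostly unsuspicious activity to train the unsupervised model.
rng = np.random.default_rng(0)
history = np.column_stack([rng.lognormal(4, 1, 1000),    # amount
                           rng.integers(6, 20, 1000),    # hour of day
                           rng.beta(1.5, 40, 1000)])     # counterparty share
model = IsolationForest(random_state=0).fit(history)

incoming = [{"amount": 120.0, "hour": 9,  "counterparty_share": 0.02},
            {"amount": 9800.0, "hour": 3, "counterparty_share": 0.11}]
X = featurize(incoming)
scores = -model.score_samples(X)                  # (b) anomaly scores
worst = X[np.argmax(scores)]                      # (d)(i) flagged transaction

grid = np.linspace(0.0, 0.2, 41)
es = expectation_surface(model, worst, 2, grid)   # (d)(ii) one factor
expected = grid[es >= np.quantile(es, 0.75)]      # (d)(iii) expected range
print(f"expected counterparty share: {expected.min():.2f}-{expected.max():.2f}")
```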
In some embodiments, the model does not provide an explanation of a prediction, and the machine learning algorithm is an unsupervised learning algorithm. In some embodiments, the model is an isolation forest model. In some cases, the expectation surface is a one-dimensional surface, and the expectation surface is computed by traversing a tree of the isolation forest model. Alternatively, the expectation surface is a surface of dimensionality n, and the expectation surface is computed by distinguishing an actual path from an exploration path. In some instances, the exploration path allows n features to vary at the same time.
In some embodiments, the expectation surface has a dimensionality the same as the number of features in the subset of features. In some embodiments, the expectation surface is an inverted anomaly score surface of the subset of features. In some cases, the subset of features is selected using a local feature importance algorithm.
In some embodiments, the anomalous transaction is a fraudulent activity. In some cases, the method further comprises comparing the expectation surface with one or more expectation surfaces of one or more other types of business. In some cases, the method further comprises determining a money laundering activity upon finding a match of the expectation surface with the one or more expectation surfaces.
In another related yet separate aspect, a system for providing explainable anomaly detection is provided. The system comprises: a first module comprising a model trained to predict an anomaly score for each of one or more transactions, where an input to the model includes a set of input features related to the one or more transactions; a second module configured to compute an expectation surface for at least a subset of features from the set of input features; and a graphical user interface (GUI) configured to display, based at least in part on the expectation surface, i) a detection of an anomalous transaction from the one or more transactions, ii) one or more factors attributed to the anomalous transaction, and iii) an expected value range for the one or more factors.
In some embodiments, the model does not provide an explanation of a prediction and is trained using unsupervised learning. In some embodiments, the model is an isolation forest model. In some cases, the expectation surface is a one-dimensional surface, and the expectation surface is computed by traversing a tree of the isolation forest model. In some cases, the expectation surface is a surface of dimensionality n, and the expectation surface is computed by distinguishing an actual path from an exploration path. In some instances, the exploration path allows n features to vary at the same time.
In some embodiments, the expectation surface has a dimensionality the same as the number of features in the subset of features. In some embodiments, the expectation surface is an inverted anomaly score surface of the subset of features. In some embodiments, the subset of features is selected using a local feature importance algorithm.
In some embodiments, the anomalous transaction is a fraudulent activity. In some embodiments, the expectation surface is compared against one or more expectation surfaces of one or more other types of business to determine the fraudulent activity.
Another aspect of the present disclosure provides a non-transitory computer readable medium comprising machine executable code that, upon execution by one or more computer processors, implements any of the methods above or elsewhere herein.
Another aspect of the present disclosure provides a system comprising one or more computer processors and computer memory coupled thereto. The computer memory comprises machine executable code that, upon execution by the one or more computer processors, implements any of the methods above or elsewhere herein.
Additional aspects and advantages of the present disclosure will become readily apparent to those skilled in this art from the following detailed description, wherein only illustrative embodiments of the present disclosure are shown and described. As will be realized, the present disclosure is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference. To the extent publications and patents or patent applications incorporated by reference contradict the disclosure contained in the specification, the specification is intended to supersede and/or take precedence over any such contradictory material.
The novel features of the invention are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the invention are utilized, and the accompanying drawings (also “Figure” and “FIG.” herein), of which:
While various embodiments of the invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions may occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed.
Whenever the term “at least,” “greater than,” or “greater than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “at least,” “greater than” or “greater than or equal to” applies to each of the numerical values in that series of numerical values. For example, greater than or equal to 1, 2, or 3 is equivalent to greater than or equal to 1, greater than or equal to 2, or greater than or equal to 3.
Whenever the term “no more than,” “less than,” or “less than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “no more than,” “less than,” or “less than or equal to” applies to each of the numerical values in that series of numerical values. For example, less than or equal to 3, 2, or 1 is equivalent to less than or equal to 3, less than or equal to 2, or less than or equal to 1.
As mentioned above, current AI solutions for detecting anomalies may have disadvantages: unsupervised models lead to low performance, resulting in a high number of false positives, while supervised models require a large amount of labeled data to achieve a high detection rate. Labels are expensive to obtain and often require substantial human labor. Additionally, not all states a detection system could take on are known in advance; instead, the use case often requires, in general, the detection of states outside ordinary operation. In all these scenarios, anomaly detection techniques are paramount. An anomaly detection model such as the Isolation Forest (iForest) model, though it provides robustness, simplicity, and accuracy, itself lacks the capability of interpretation or explanation. While reliable approaches to compute local feature importance exist for the iForest, such as SHAP explanations or DIFFI, they do not provide any information about the expected value of a feature. In other words, the model conveys what features led to an anomaly, but not why. Current solutions lack the capability of providing feedback about what a normal value of a feature would have been.
Providing the expected value of a feature can be challenging because the expected value can depend on the values of other features, even if those features are not assigned a high local feature importance score. The algorithms and methods provided herein, which may be capable of explaining why certain features led to an anomaly, may take into account potential correlations and interdependencies of features. Details about the algorithms and methods are described later herein.
A client node (e.g., client node 102 and/or client node 106) may be, for example, a user device (e.g., a mobile electronic device, a stationary electronic device, etc.). A client node may be associated with, and/or be accessible to, a user. In another example, a client node may be a computing device (e.g., a server) accessible to, and/or associated with, an individual or entity. A client node may comprise a network module (e.g., a network adaptor) configured to transmit and/or receive data. Via the nodes in the computer network, multiple users and/or servers may communicate and exchange data, such as financial transaction logs, outputs of AI algorithms, etc. In some instances, the client nodes may transmit information associated with a set of financial transaction logs to the server platform 120. Examples of the information associated with financial transaction logs include, without limitation, transaction amount, transaction currency (e.g., U.S. Dollar, Euro, Japanese Yen, Great British Pound, Australian Dollar, Canadian Dollar, etc.), transaction type name (e.g., wire activities, cash deposit activities, regular check, cashier's check, certified check, money order, etc.), transaction time (e.g., time of day, day of a year/quarter, etc.), transaction unique identification number, and the like. In some embodiments, the client nodes may receive and present to a user an Internet-featured item (e.g., an explanation table).
In at least some examples, the system (e.g., server platform 120) may be one or more computing devices or systems, storage devices, and other components that include, or facilitate the operation of, various execution modules depicted in
The feature selection engine 124 may be configured to receive a stream of data such as data representing network traffic (e.g., traffic logs), data representing IoT devices' connectivity and/or general health, data representing production line productivity, or data representing financial transactions (e.g., financial transaction logs). The feature selection engine 124 may implement a feature selection algorithm for selecting features with which to calculate an expectation surface. In some embodiments, features are selected when the model is trained. For example, during model run time, the feature selection engine 124 may query the database storing historic transactions, selecting the same features to calculate the expectation surface. In some embodiments, the feature selection engine 124 may query a database storing historic data (e.g., historic network traffic logs and features associated with the logs, historic IoT devices' connectivity, historic transactions) to select features to calculate the expectation surface based on pre-determined rules. Historic network traffic logs may comprise source and destination IP addresses, protocols and ports, packet size and volume, timestamps, etc. In some embodiments, to detect anomalies, data representing IoT devices' connectivity and/or general health may comprise device status, data traffic, energy consumption, sensor data, etc. In some embodiments, financial transaction log data may comprise, without limitation, transaction amount, transaction currency (e.g., U.S. Dollar, Euro, Japanese Yen, Great British Pound, Australian Dollar, Canadian Dollar, etc.), transaction type name (e.g., wire activities, cash deposit activities, regular check, cashier's check, certified check, money order, etc.), transaction time (e.g., time of day, day of a year/quarter, etc.), transaction unique identification number, and the like. It should be noted that although the algorithms and methods herein are described with respect to transactional data or anti-money laundering (AML) monitoring, they can be applied in various scenarios where anomaly detection and explanations are desired. For example, the methods and systems herein can be applied to industries and applications such as finance, banking, healthcare, and various other fraud or anomaly detection applications.
In cases when the methods and systems herein are utilized in transaction monitoring, the transactions may be monitored and assigned a risk score in real time by the system. The risk score may correspond to a money-laundering risk level (e.g., a money-laundering risk score). The term "risk score" may also be referred to as "anomaly score"; the two terms are utilized interchangeably throughout this specification. In some embodiments, this money-laundering risk score may be generated by an Isolation Forest (iForest) model. In some embodiments, this money-laundering risk score may be generated based on the financial transaction data associated with financial transaction logs. For example, the further a transaction (or a group of transactions) deviates from the normal profile, the higher the risk score associated with the transaction may be. In some embodiments, a risk score is assigned to a group of financial transactions associated with one entity, e.g., a bakery, a restaurant, a bookstore, a car dealer, etc. In some embodiments, a risk score is assigned to a group of financial transactions associated with one entity during a period of time, e.g., an hour, a few hours, a day, a week, two weeks, three weeks, a month, a quarter, a year, a few years, etc. As described elsewhere herein, a risk score may denote an outlier that does not conform to the normal profile of the transaction in the context of the industry the transaction is in. A pre-determined threshold may be used to identify anomalous transactions based on the risk score and mark the underlying transaction(s) as an anomaly. In some embodiments, this pre-determined threshold may be determined by an administration agent, such as the administration agent 110. In some embodiments, this pre-determined threshold may be determined by a machine learning model which has been trained to understand the normal profile in different industries or for different business models. Examples of different industries may comprise, without limitation, restaurants, car dealers, bookstores, software companies, etc. Examples of business models may comprise, without limitation, retailers, distributors, wholesalers, manufacturers, designers, service providers, etc.
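As an illustration of per-entity risk scoring against a pre-determined threshold, the following sketch uses assumed column names, an assumed min-max normalization, and an assumed threshold of 0.8; the disclosure does not prescribe these choices.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
entities = [f"merchant_{i}" for i in range(50)]
logs = pd.DataFrame({
    "entity": np.repeat(entities, 20),
    "amount": rng.lognormal(4, 1, 1000),
    "hour":   rng.integers(0, 24, 1000),
})
# Make one entity's activity night-heavy and large, i.e., non-conforming.
mask = logs["entity"] == "merchant_0"
logs.loc[mask, "hour"] = 3
logs.loc[mask, "amount"] *= 50

# One profile (and one risk score) per entity over the monitoring window.
profile = logs.groupby("entity").agg(
    total=("amount", "sum"),
    night_share=("hour", lambda h: np.mean((h < 6) | (h > 22))),
)
model = IsolationForest(random_state=0).fit(profile)
raw = -model.score_samples(profile)               # higher = more anomalous
profile["risk"] = (raw - raw.min()) / (raw.max() - raw.min())

THRESHOLD = 0.8  # pre-determined, e.g., set by an administration agent
print(profile[profile["risk"] > THRESHOLD])       # flagged entities
```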
The system herein may comprise a dynamic network of a plurality of AI engines (e.g., an anomaly detection AI engine, a payment screening AI engine, a transaction monitoring AI engine, etc.) acting in parallel, which consistently act and react to other engines' actions at any given time and/or over time; these actions may be based on detected inter-relational dynamics as well as other factors, leading to more effective actionable value. The system 100 may provide alignment, coordination, and convergence of AI outputs for purposes of generating desired converged optimal outcomes and results. In some embodiments, the one or more AI engines may be deployed using a cloud-computing resource, which can be a physical or virtual computing resource (e.g., a virtual machine).
The feature selection engine 124 may be configured to select features to calculate an expectation surface based on feature properties associated with individual features of the anomalous transactions. In some embodiments, the feature selection engine 124 may select features to calculate the expectation surface for all transactions, whether anomalous or not. Due to correlations between the features fes and the complementary features f̄es, the expected value of a given feature fi can depend largely on the choice of the other features f̄es.

In some embodiments, the feature selection engine may take into account the correlations and interdependencies of features by sorting the features by importance and omitting features that have a small effect on the value of the anomaly score. In some cases, the feature selection engine may choose at least a subset of features (e.g., the 2-5 most important features) according to a suitable local feature importance algorithm. For example, the local feature importance algorithm may compute SHapley Additive exPlanations (SHAP) values and select the subset of important features and/or the complementary features f̄es (to be omitted) accordingly. In some embodiments, the feature selection engine 124 may select a set of other features f̄es based on feature properties associated with individual features. Feature properties, for example, may comprise the local feature importance associated with the other features f̄es. In some embodiments, users may be given an option to select the features that they see fit for the expectation surface that they want to visualize and interact with. In other words, the expectation surface may be computed in an on-demand manner.
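The following sketch illustrates the sort-and-threshold selection described above. A leave-one-out perturbation proxy stands in for a full SHAP computation (with the shap package installed, a TreeExplainer could supply the ϕ values instead); the probe count and the choice of the two top features are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
model = IsolationForest(random_state=0).fit(X)

def local_importance(model, X, x):
    """Effect on the anomaly score of replacing each feature of x with
    background values drawn from X (larger = more important). This is a
    perturbation-based stand-in for a proper SHAP value."""
    base = -model.score_samples(x[None, :])[0]
    phis = []
    for j in range(x.size):
        probes = np.tile(x, (50, 1))
        probes[:, j] = rng.choice(X[:, j], size=50)
        phis.append(abs(base - np.mean(-model.score_samples(probes))))
    return np.array(phis)

x = X[0].copy()
x[2] = 8.0                       # inject an outlying value in feature 2
phi = local_importance(model, X, x)
order = np.argsort(phi)[::-1]    # sort features by local importance
f_es = order[:2]                 # most important features f_es
f_bar = order[2:]                # complementary features, held constant
print("selected f_es:", f_es, "omitted f_bar:", f_bar)
```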
The expectation surface generation engine 126 may be configured to generate an expectation surface for the selected subset of features. In some cases, the expectation surface can be computed for all the features. As described above, the subset of features may be those most important for the anomaly score, capturing most of the effects originating from correlations among the input features (e.g., high-dimensional transaction data and/or customer data). The expectation surface may be computed for the input features, indicating the expected ranges (i.e., the ranges considered normal). In some embodiments, the expectation surface generation engine 126 generates an expectation surface to generate an explanation for marking the anomalous transactions. In some embodiments, the expectation surface generation engine 126 generates an expectation surface for all received transactions, whether anomalous or not.
In some cases, for a set of input features, e.g., k input features f, an expectation surface may be defined as an inverted anomaly score surface of a selected subset of features, e.g., features fes. In some cases, the selected subset of features may be a subset of l features fes = {fj: j ∈ {1, 2, . . . , k}}, |fes| = l. The subset of features may be selected by the feature selection engine as described above. Details about the algorithm implementing the expectation surface computation and feature selection are described elsewhere herein.
As described elsewhere herein, due to correlations between the features fes and the complementary features f̄es, the expected value of a given feature fi can depend largely on the choice of the other features f̄es. The expectation surface generation engine 126 may calculate the expectation surface based on the selected other features f̄es, which are selected by the feature selection engine 124. Details of the expectation surface method, the computation of the expectation surface, and the associated equations and algorithms are described elsewhere herein.
An expectation surface may indicate the normal profile associated with a transaction or a group of transactions. For example, as shown in connection with
The explanation generation module 128 may be configured to generate explanations for the anomalous transactions based at least in part on the expectation surface. In some embodiments, the expectation surface may provide an expected range for different features or factors. For example, the expectation surface may indicate that, for a bakery store, it is normal to have 68-85% of revenue generated during morning hours, such as between 6:00 AM and 11:00 AM. When a large number of transactions (e.g., 90%) for a bakery fall outside of this expected range provided by the expectation surface, the transactions may be marked as an anomaly. In some embodiments, the explanation generation module 128 may utilize this expectation surface to provide explanations as to the reasons a transaction or a group of transactions is anomalous. For the above example, the explanation generation module 128 may provide an explanation such as: 90% of the transactions occur outside of the expected time period of 6:00 AM to 11:00 AM. In another example, the explanation generation module 128 may provide an explanation such as: only 10% of the transactions occur within the expected time period of 6:00 AM to 11:00 AM; normally, this should be 68-85%. This may provide the information receivers (e.g., investigators, regulators, law enforcement personnel, etc.) with insights into why these transactions are marked anomalous, so as to facilitate investigation activities.
In some embodiments, the explanation generation module 128 may utilize natural language processing (NLP) to generate human-comprehensible explanations. NLP is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language. Other mechanisms may also be utilized by the explanation generation module 128 to generate human-comprehensible explanations.
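As a minimal illustration of one such mechanism, the template-based sketch below turns factor names, observed values, and expected ranges into explanation sentences; the input values mirror the example figures used later in this disclosure, and the template wording is an assumption.

```python
# Illustrative inputs of the kind the expectation surface engine would supply:
# (factor description, observed value in %, expected unsuspicious range in %).
factors = [
    ("volume vs. same counterparty (30 days)", 11.00, (0.0, 5.0)),
    ("count vs. same counterparty (30 days)",   5.20, (0.0, 2.1)),
    ("nighttime transaction volume (30 days)", 40.32, (0.0, 10.1)),
]

def explain(name, value, expected):
    """Fill a fixed sentence template with the factor and its expected range."""
    lo, hi = expected
    direction = "above" if value > hi else "below"
    return (f"The {name} was {value:.2f}%. The expected range of values for "
            f"unsuspicious transactions would have been {lo:g}%-{hi:g}%; "
            f"the observed value is {direction} that range.")

for factor in factors:
    print(explain(*factor))
```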
Data access modules 142 may facilitate access to data storage 150 of the server platform 120 by any of the remaining modules 124, 126, and 128 of the server platform 120. In one example, one or more of the data access modules 142 may be database access modules, or may be any kind of data access module capable of storing data to, and/or retrieving data from, the data storage 150 according to the needs of the particular module 124, 126, and 128 employing the data access modules 142 to access the data storage 150. Examples of the data storage 150 include, but are not limited to, one or more data storage components, such as magnetic disk drives, optical disk drives, solid state disk (SSD) drives, and other forms of nonvolatile and volatile memory components.
As shown in
At least some of the embodiments described herein with respect to the system 100 of
In the context of cybersecurity, the risk score and expectation surface (expected range for a feature) can be utilized to identify anomalies or potential threats in network traffic, system logs, or other cybersecurity data, and to provide reasons why an event is determined to be an anomaly. The risk score can be assigned to different events or activities within the network or system, and events with high risk scores may be flagged for further investigation. In some embodiments, the expectation surface can be utilized to identify features or characteristics of network traffic or system logs that are outside of the expected range, which could indicate a potential security threat or anomaly. By identifying the normal range of features or characteristics, the expectation surface can help to identify unusual or suspicious activity that may require further investigation. For example, in network intrusion detection, the risk score and expectation surface can be used to identify potential attacks or intrusions by analyzing network traffic patterns and identifying events with high risk scores or features outside of the expected range. Similarly, in log analysis, the risk score and expectation surface can be used to identify potential security breaches or anomalies by analyzing system logs and identifying events with high risk scores or features outside of the expected range, along with the expected range, for presentation to a user. In particular, unlike conventional anomaly detection methods that require a known pattern of normal events or a known normal range of values, the expectation surface herein beneficially allows for interpreting or explaining an anomalous event without knowing the normal range of values (e.g., crime patterns) in advance.
In some embodiments, the anomaly detection algorithm herein may utilize an Isolation Forest (iForest) to detect anomalies using isolation (e.g., how far a data point is from the rest of the data). The anomaly detection algorithm may isolate anomalies using binary trees with logarithmic time complexity, a generally low constant, and a low memory requirement. An anomaly (i.e., an outlier) is an observation or event that deviates from other events enough to arouse suspicion regarding its legitimacy. By focusing on detecting anomalies instead of modeling the normal points, an iForest model may provide functions and utilities for detecting suspicious activities, such as for a financial institution. However, the iForest model itself lacks the capability of interpretation or explanation. While approaches to compute local feature importance exist for the iForest, such as SHAP explanations or DIFFI, they do not provide any information about the expected value of a feature. In other words, the model conveys what features led to an anomaly, but not why.
Methods and systems herein provide an expectation surface for the iForest model, which beneficially generates sophisticated explanations of the output of an iForest model. An expectation surface may be defined herein as an inverted anomaly score surface of a given set of features, e.g., features fes of dimension l. The equation below (equation 1) may illustrate an expectation surface (ES) of dimensionality l:

ES(fes) = −s(f1, . . . , fk), with the complementary features f̄es held constant   (equation 1)
As equation 1 shows, the expectation surface may be an inverted anomaly score surface of the features fes, assuming that all other features are kept constant. In equation 1, s(f1, . . . , fk) = s(f) may be the anomaly score of an iForest model for the k input features f. Consider an arbitrary subset of l features fes = {fj: j ∈ {1, 2, . . . , k}}, |fes| = l. The complement of fes is denoted f̄es, with |f̄es| = k − l. The dimensionality of the expectation surface may be l (l = 1, 2, 3, 4, 5, 6, etc.), depending on the selection of the subset of the features. In some embodiments, maxfes(ES) may denote the minimal anomaly score for the feature set fes and hence represents the model's most expected values of these features. One-dimensional expectation surfaces can be computed for all features without the need to apply a feature selection algorithm.
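A brute-force reading of this definition can be illustrated as follows: vary the features fes over a grid while holding f̄es constant, and record the inverted anomaly score. This grid evaluation is for illustration only (the tree-traversal algorithm described later avoids it), and scikit-learn's score_samples, which already returns a negated anomaly score, stands in for −s(f).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Two correlated-looking features; feature 0 is centered at 0, feature 1 at 5.
X = rng.normal(loc=[0.0, 5.0], scale=[1.0, 2.0], size=(1000, 2))
model = IsolationForest(random_state=0).fit(X)

x = np.array([4.0, 5.0])          # datapoint with an outlying first feature
grid = np.linspace(-5.0, 5.0, 101)

# ES for f_es = {f_0}: invert the anomaly score while f_1 stays constant.
probes = np.column_stack([grid, np.full_like(grid, x[1])])
es = model.score_samples(probes)  # already the negated anomaly score

most_expected = grid[np.argmax(es)]   # max of ES = minimal anomaly score
print(f"observed f_0 = {x[0]:.1f}, most expected f_0 ~ {most_expected:.2f}")
```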
Due to correlations between the features fes and the complementary features f̄es, the expected value of a given feature fi can depend largely on the choice of the other features f̄es. The expectation surface method herein beneficially takes into account potential correlations and interdependencies of features. In some embodiments of implementing the method, the feature selection engine 124 of the system 120 may select a subset of features fes and/or the complementary features f̄es based on the importance and/or feature properties associated with individual features, respectively. For instance, the feature selection engine may choose the subset of features (e.g., the 2-5 most important features) according to a suitable local feature importance algorithm. For example, the local feature importance algorithm may compute SHapley Additive exPlanations (SHAP) values and select the subset of important features and/or the complementary features f̄es (to be omitted) accordingly.
In some embodiments, the feature selection engine 124 may select a set of other features f̄es based on feature properties associated with individual features. Feature properties, for example, may comprise the local feature importance associated with the other features f̄es. In some embodiments, the feature selection engine 124 may sort the individual other features f̄es based on their local feature importance, and then choose the ones above a pre-determined threshold. This may reduce or eliminate the correlation effects of the other features f̄es on the expectation surface of the features fes, because the local importance score may represent the correlation between the features fes and the other features f̄es. An equation (equation 2) below may provide the reason for this effect:

ϕfi = Σ S⊆f\{fi} [ |S|!·(k − |S| − 1)!/k! ]·(v(S ∪ {fi}) − v(S))   (equation 2)

Equation 2 illustrates the calculation of a SHAP value, where v(S) denotes the model output (e.g., the anomaly score) evaluated on the feature subset S. As shown in equation 2, if ϕfi is relatively small, omitting the feature fi has a relatively small effect on v. Hence, fi cannot be strongly correlated with any of the features fes, as omitting any of fes would have a strong effect on v. Equation 2 further shows that if two features have a strong correlation effect, they have a relatively high local importance score: changing one or both of their values (i.e., in effect breaking the correlation) would impact the anomaly score drastically. Therefore, the local importance scores associated with the other features f̄es may be used to select the most important subset of features with which to calculate the expectation surface for the features fes, taking the correlation effects between features into consideration.
In some embodiments, the feature selection engine 124 may select a number of other features to calculate the expectation surface for the features fes, for example, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, or 20. In some embodiments, the number of other features selected to calculate the expectation surface for the features fes may be any natural number between 1-100, 1-1000, 1-10000, or 1-100000. In some embodiments, a given sample of 2, 3, 4, or 5 features may be selected to calculate the expectation surface for the features fes, as the anomaly scores may capture the effects originating from correlations between the features fes and the other features f̄es.
The present disclosure provides examples of algorithms for computing an expectation surface. The following algorithm may compute one-dimensional expectation surfaces for an iForest model. As mentioned above, the Isolation Forest algorithm is based on the isolation principle: it tries to separate data points from one another by recursively and randomly splitting the dataset into two partitions along its feature axes. The iForest model is trained using unsupervised learning, based on the theory that if a point is an outlier, it will not be surrounded by many other points, and therefore it will be easier to isolate it from the rest of the dataset with random partitioning. The Isolation Forest algorithm uses the training set to build a series of isolation trees, which when combined form the Isolation Forest; each isolation tree is built upon a randomly sampled subset of the original data. The splitting is performed along a random feature axis, using a random split value which lies between the minimum and maximum values of that feature amongst the data points in that partition. This split process is performed recursively until a single point has been isolated from the others. The number of splits required to isolate an outlier is likely to be much smaller than the number needed for a regular point, due to the lower density of points in the surrounding feature space. Isolation Forest leverages an ensemble of isolation trees, with anomalies exhibiting a closer distance to the root of the tree. The anomaly score can be derived from the path length h(x) of a point x, which is defined as the average number of splits required to isolate the point across all the trees in the forest.
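The following compact sketch illustrates the isolation principle just described, with random recursive splits and the average path length h(x); the tree representation and sample sizes are illustrative choices, not the disclosed algorithm listings.

```python
import random

def grow(points, depth=0, max_depth=10):
    """Build one isolation tree by random recursive splitting."""
    if len(points) <= 1 or depth >= max_depth:
        return {"size": len(points)}
    dim = random.randrange(len(points[0]))          # random feature axis
    lo = min(p[dim] for p in points)
    hi = max(p[dim] for p in points)
    if lo == hi:
        return {"size": len(points)}
    split = random.uniform(lo, hi)                  # random split value
    return {"dim": dim, "split": split,
            "left":  grow([p for p in points if p[dim] < split], depth + 1, max_depth),
            "right": grow([p for p in points if p[dim] >= split], depth + 1, max_depth)}

def path_length(tree, x, depth=0):
    """Number of splits needed to reach the leaf containing x."""
    if "size" in tree:
        return depth
    branch = "left" if x[tree["dim"]] < tree["split"] else "right"
    return path_length(tree[branch], x, depth + 1)

random.seed(0)
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(256)]
forest = [grow(random.sample(data, 64)) for _ in range(100)]
h = lambda x: sum(path_length(t, x) for t in forest) / len(forest)
# The outlier is isolated in fewer splits than the regular point.
print("h(regular) =", round(h((0.1, 0.0)), 2), " h(outlier) =", round(h((6.0, 6.0)), 2))
```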
As shown below, the computation of a one-dimensional expectation surface for a single tree is defined in Algorithm 1, with the aid of Algorithms 2, 3, and 4.
In some embodiments, Algorithm 1 may be generalized to an expectation surface of higher dimension n (n > 1). In some embodiments, an exploration path may allow n features to vary at the same time. That is, a path may be on an exploration path for a set of features fi
The following tables show the different implementations of the algorithm to generate a one-dimensional expectation surface and an n-dimensional expectation surface.
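The algorithm listings themselves are not reproduced here. The sketch below is a hedged reconstruction of the one-dimensional case from the description above, not the verbatim Algorithm 1: splits on the target feature open an exploration path over both children, while every other split follows the actual datapoint; the dictionary-of-intervals output format is an assumption.

```python
from collections import deque
from math import inf
import random

def grow(points, depth=0, max_depth=12):
    """Random isolation tree (same node structure as the sketch above)."""
    if len(points) <= 1 or depth >= max_depth:
        return {"size": len(points)}
    dim = random.randrange(len(points[0]))
    lo = min(p[dim] for p in points)
    hi = max(p[dim] for p in points)
    if lo == hi:
        return {"size": len(points)}
    split = random.uniform(lo, hi)
    return {"dim": dim, "split": split,
            "left":  grow([p for p in points if p[dim] < split], depth + 1, max_depth),
            "right": grow([p for p in points if p[dim] >= split], depth + 1, max_depth)}

def expectation_surface_1d(tree, x, target):
    """Map intervals of feature `target` to the depth of the leaf reached.
    Splits on `target` start exploration paths (both children are queued);
    all other splits follow the actual path given by the datapoint x.
    Deeper leaves are harder to isolate, i.e., more expected values."""
    es = {}
    q = deque([(tree, (-inf, inf), 0)])            # (node, interval, depth)
    while q:
        node, (lo, hi), depth = q.popleft()
        if "size" in node:                         # leaf: one ES entry
            es[(lo, hi)] = depth
        elif node["dim"] == target:                # exploration path
            q.append((node["left"],  (lo, min(hi, node["split"])), depth + 1))
            q.append((node["right"], (max(lo, node["split"]), hi), depth + 1))
        else:                                      # actual path
            branch = "left" if x[node["dim"]] < node["split"] else "right"
            q.append((node[branch], (lo, hi), depth + 1))
    return es

random.seed(0)
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(256)]
tree = grow(data)
surface = expectation_surface_1d(tree, (6.0, 0.0), target=0)
deepest = max(surface, key=surface.get)            # deepest = most expected
print("most expected interval of f_0 for this tree:", deepest)
```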
The algorithm provided herein may have improved scaling capability, in that it can be easily implemented with the number of samples N out of which the tree is grown. In the meantime, the algorithm herein may not significantly increase computational cost, as shown by the algorithmic complexity analysis below. In Algorithm 1, there may be either one or two elements pushed into the queue q. In some embodiments, the rate at which each case occurs may govern the algorithmic complexity of the algorithm. To derive the algorithmic complexity, the system described herein may consider one-dimensional expectation surfaces, with the assumption that the system is already on an exploration path for a feature fi. The probability of encountering another node of the same feature type fi may be equal to 1/K, where K is the overall number of features. In some embodiments, for the number of node visits Sm of a single tree of depth m, the following recursion relation (equation 3) may follow:

Sm = ((K − 1)/K)·Sm−1 + (1/K)·2·Sm−1 + 1   (equation 3)
In some embodiments, the first summand corresponds to the scenario of encountering a node that is not of feature type fi, which may happen with a probability of (K − 1)/K. In this case, there may be only a single element pushed to the queue q. If the system encounters the feature fi, which may happen with a probability of 1/K, it may push two elements to the queue q, which explains the second summand. The third summand may account for having visited the root node.
The above recursion relation of equation 3 may be solved (assuming S0 = 1) as follows:

Sm = (K + 1)·((K + 1)/K)^m − K   (equation 4)
In some embodiments, if the system is not on an exploration path, the recursion relation for the total number of visited nodes cm of a tree of depth m may be the following:
In some embodiments, the first summand of equation 5 may represent the case in which an element of the actual path is added to the queue q. The second summand may correspond to the case of starting a new exploration path. The third summand may account for having visited the root node. Equation 5 may be solved as follows:
In some embodiments, for a tree grown out of N samples, the average height of a trained tree is m = log N. It hence follows that for K >> 1, the number of nodes visited, and hence the algorithmic complexity, scales as follows:
In some embodiments, the algorithmic complexity of a 'normal' iForest inference may be log N. The expectation surface algorithm provided herein is hence not more expensive, and still has excellent scaling behavior with the number of samples N out of which the tree is grown.
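Under the recursion reconstructed as equation 3 above (itself a reading of the text, not the original listing), a quick numerical check of this scaling claim can be made; the initial condition S0 = 1 and the choice K = 100 are assumptions.

```python
import math

def visits(m, K):
    """S_m = ((K-1)/K)*S_{m-1} + (1/K)*2*S_{m-1} + 1, with S_0 = 1 assumed."""
    s = 1.0
    for _ in range(m):
        s = (1.0 + 1.0 / K) * s + 1.0
    return s

for N in (1_000, 100_000, 10_000_000):
    m = int(math.log2(N))   # average height of a trained tree ~ log N
    K = 100                 # number of features, K >> 1
    # The visit count stays close to the plain O(log N) inference cost.
    print(f"N={N:>10,}  log2(N)={m:>2}  node visits ~ {visits(m, K):.1f}")
```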
In some embodiments, for a d-dimensional expectation surface, the recursion relation for Sm may be the following:
In some embodiments, for a 2-dimensional expectation surface, the recursion relation of cm may be the following:
In some embodiments, the first-order approximation, for K >> d, of the runtime complexity may be as follows:
In some embodiments, the memory requirement is dominated by the ES dictionary. In some embodiments, for every leaf node that the system encounters, it may add one entry into the ES dictionary. In some embodiments, the number of leaf nodes may be less than the number of overall visited nodes. Therefore, if K >> d, the memory complexity is bounded as:
In some embodiments, the number of leaf nodes encountered by the algorithm for a tree of depth m may be used to continue the exact derivation.
The anomaly detection methods and systems as described herein can be applied in a wide range of industries and fields. In some cases, the methods and systems may be implemented on a cloud platform system (e.g., including a server or serverless architecture) that is in communication with one or more user systems/devices via a network. The cloud platform system may be configured to provide the aforementioned functionalities to the users via one or more user interfaces. The user interface may comprise a graphical user interface (GUI), which may include, without limitation, web-based GUIs, client-side GUIs, or any other GUI as described elsewhere herein.
A graphical user interface (GUI) is a type of interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation, as opposed to text-based interfaces, typed command labels or text navigation. The actions in a GUI are usually performed through direct manipulation of the graphical elements. In addition to computers, GUIs can be rendered in hand-held devices such as mobile devices, MP3 players, portable media players, gaming devices and smaller household, office and industry equipment. The GUIs may be provided in a software, a software application, a web browser, etc. The GUIs may be displayed on a user device or user system (e.g., mobile device, personal computers, personal digital assistants, cloud computing system, etc.). The GUIs may be provided through a mobile application or web application.
In some cases, the graphical user interface (GUI) or user interface may be provided on a display. The display may or may not be a touchscreen. The display may be a light-emitting diode (LED) screen, organic light-emitting diode (OLED) screen, liquid crystal display (LCD) screen, plasma screen, or any other type of screen.
In some cases, one or more systems or components of the system (e.g., anomaly detection, explanation generation, explanation visualization, etc.) may be implemented as a containerized application (e.g., application containers or service containers). The application container may provide tooling for applications and batch processing, such as web servers with Python or Ruby, JVMs, or even Hadoop or HPC tooling. For instance, the frontend of the system may be implemented as a web application using a framework (e.g., Django for Python) hosted on an Elastic Compute Cloud (EC2) instance on Amazon Web Services (AWS). The backend of the system may be implemented as a serverless compute service, such as AWS Lambda, running a web framework for developing RESTful APIs (e.g., FastAPI). This may beneficially allow for a large-scale implementation of the system.
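As an illustration of such a RESTful backend, a minimal FastAPI sketch follows; the endpoint shape and field names are assumptions rather than a documented API, and on AWS Lambda an ASGI adapter such as Mangum would typically wrap the app.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Transaction(BaseModel):
    amount: float
    hour: int
    counterparty_share: float

@app.post("/score")
def score(tx: Transaction) -> dict:
    # A stub stands in for the anomaly detection and expectation surface
    # engines that a real deployment would call here.
    expected_hi = 0.05
    return {
        "anomalous": tx.counterparty_share > expected_hi,
        "factor": "counterparty_share",
        "observed": tx.counterparty_share,
        "expected_range": [0.0, expected_hi],
    }
```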
In some cases, one or more functions or operations consistent with the methods described herein can be provided as a software application that can be deployed as a cloud service, such as in a web services model. A cloud-computing resource may be a physical or virtual computing resource (e.g., a virtual machine). In some embodiments, the cloud-computing resource is a storage resource (e.g., Storage Area Network (SAN), Network File System (NFS), or Amazon S3.RTM.), a network resource (e.g., firewall, load-balancer, or proxy server), an internal private resource, an external private resource, a secure public resource, an infrastructure-as-a-service (IaaS) resource, a platform-as-a-service (PaaS) resource, or a software-as-a-service (SaaS) resource. Hence, in some embodiments, a cloud-computing service provided may comprise an IaaS, PaaS, or SaaS provided by private or commercial (e.g., public) cloud service providers.
Methods and systems herein may be integrated into any third-party system in a user-selected deployment option, such as SaaS or alternative deployment options, based on a cloud-native, secure software stack. For example, the anomaly detection and/or GUI modules can be deployed to interface with one or more banks with one or more different bank cores or banking platforms such as Jack Henry™, FIS™, Fiserv™, or Finxact™, although not limited thereto. In some cases, depending on the use application, the system herein may comprise a logging and telemetry module to communicate with a security system of a bank infrastructure, which can provide detailed bank-facing technical operations and information security details, giving the banks the visibility and auditability they may need.
In an exemplary application of the presented anomaly detection system in transaction monitoring and anti-money laundering (AML) monitoring, false positives may be reduced significantly (e.g., by over 70%), preventing increased compliance headcount, and unknown crime patterns can be detected before they become rampant.
Existing anomaly detection methods in AML monitoring may only provide insights into the features that lead to an anomaly but do not give insight into what would have constituted a normal value. For example, the following examples are generated using an existing local feature importance algorithm in a transaction monitoring system:
This type of explanation may help a user to understand the main factors (i.e., features) that drive the detection/determination of an anomaly. It may inform the user of the actual values of the features (e.g., 11%, 5.20%, 40.32% in the above example), but it lacks information about what a normal value would have been. Additionally, it does not provide information about the direction in which the value lies outside of the normal value range, i.e., the user may have to guess whether the reported value of 11% cash accumulation against a single counterparty is too high or too low a value according to the model. This becomes exacerbated across different industries: for some accounts a value of 11% might be normal, for instance, for a merchant selling rare and expensive paintings, while for other accounts, such as a bakery, this would be an anomaly.
The improved anomaly detection methods and systems herein may beneficially provide the expected values in the context of the given transaction. In some embodiments, the system herein may allow users to visualize and interact with the anomaly detection analytics via a streamlined and intuitive GUI. For example, users may be provided with not only the detected anomalous transaction and the attributes/factors that led to the conclusion, but also the expected unsuspicious value range for each factor. The GUI may provide guidance to a user (e.g., investigators, regulators, law enforcement personnel, etc.) in assessing the attributes, severity, and tendencies of the suspicious activities. In another example, the GUI may also provide an interactive tool allowing users to interact with a visualization of the multiple factors/features and visualize how the features are correlated. Examples and details about the GUIs are described later herein.
Transactions may be monitored and assigned a money-laundering risk score. In some embodiments, this money-laundering risk score may be generated by an iForest model. In some embodiments, this money-laundering risk score may be generated based on the financial transaction data associated with financial transaction logs. For example, the further a transaction (or a group of transactions) deviates from the normal profile, the higher the risk score associated with the transaction may be. In some embodiments, a risk score is assigned to a group of financial transactions associated with one entity, e.g., a bakery, a restaurant, a bookstore, a car dealer, etc. In some embodiments, a risk score is assigned to a group of financial transactions associated with one entity during a period of time, e.g., an hour, a few hours, a day, a week, two weeks, three weeks, a month, a quarter, a year, a few years, etc. As described elsewhere herein, a risk score may denote an outlier that does not conform to the normal profile of the transaction in the context of the industry the transaction is in. If the risk score of a transaction or a group of transactions is greater than a pre-determined threshold, the expectation surface generation engine 126, the explanation generation module 128, or other components of the server platform 120 may mark the underlying transaction(s) as an anomaly. In some embodiments, this pre-determined threshold may be determined by an administration agent, such as the administration agent 110. In some embodiments, this pre-determined threshold may be determined by a machine learning model which has been trained to understand the normal profile in different industries or for different business models. Examples of different industries may comprise, without limitation, restaurants, car dealers, bookstores, software companies, etc. Examples of business models may comprise, without limitation, retailers, distributors, wholesalers, manufacturers, designers, service providers, etc.
The feature selection engine may be configured to select features to calculate the expectation surface based on feature properties associated with individual features of the anomalous transactions. In some embodiments, the feature selection engine may select features to calculate the expectation surface for all transactions, whether anomalous or not. The expectation surface may provide what the above examples are missing: the expected value range of the model in the context of the transaction. From those expectation surfaces, the expected value of a feature may be derived, under the assumption that all other features are not changed (i.e., the context of the transaction is not changed).
The transaction is anomalous due to the factors shown below:
The volume of transactions against this same counterparty in the last 30 days vs. the overall volume of non-authorization transactions was 11.00%. The expected range of values for unsuspicious transactions would have been 0%-5%.
The count of transactions against this same counterparty in the last 30 days vs. the overall count of non-authorization transactions was 5.20%. The expected range of values for unsuspicious transactions would have been 0%-2.1%.
The volume of nighttime transactions in the last 30 days vs. the overall volume of captured and non-declined transactions was 40.32%. The expected range of values for unsuspicious transactions would have been 0%-10.1%.
In the illustrated example, the expected unsuspicious ranges are provided, e.g., 0%-5%, 0%-2.1%, and 0%-10.1%. The expected unsuspicious value range may provide guidance to a user (investigators, regulators, law enforcement personnel, etc.) in assessing the attributes, severity, and tendencies of the suspicious activities. Such an explanation beneficially facilitates efficiency of and trust in the AI algorithms for an AML regime. The expected value of a feature may be derived using the algorithm described above, under the assumption that all other features are not changed (i.e., the context of the transaction is not changed).
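One way such ranges could be read off a one-dimensional expectation surface is sketched below: keep the region of the feature axis whose inverted anomaly score stays within a chosen margin of the surface's maximum. The synthetic training data and the 10% margin are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# Unsuspicious accounts: counterparty share (feature 0, in %) is mostly small.
X = np.column_stack([rng.beta(1.2, 30, 2000) * 100, rng.normal(50, 10, 2000)])
model = IsolationForest(random_state=0).fit(X)

x = np.array([11.0, 50.0])                 # the flagged 11% from the example
grid = np.linspace(0.0, 20.0, 201)
probes = np.column_stack([grid, np.full_like(grid, x[1])])
es = model.score_samples(probes)           # inverted anomaly score surface

# Expected range: values whose ES stays within 10% of the surface maximum.
ok = grid[es >= es.max() - 0.1 * (es.max() - es.min())]
print(f"observed: {x[0]:.2f}%  expected unsuspicious range: "
      f"{ok.min():.1f}%-{ok.max():.1f}%")
```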
In some embodiments, higher-dimensional expectation surfaces may be used as an interactive tool for investigators to understand the correlation between features.
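As an illustrative sketch (not the disclosed GUI), a two-feature expectation surface can be rendered as a contour plot so an investigator can inspect how two features jointly affect the expectation; the stand-in scorer and axis labels are hypothetical.

```python
# Evaluate the inverted anomaly score on a grid over two features
# (others held fixed at the transaction's context) and plot it.
import numpy as np
import matplotlib.pyplot as plt

center = np.array([2.5, 1.0, 5.0])
score = lambda p: float(np.linalg.norm(p - center))  # stand-in anomaly score

x = np.array([11.0, 5.2, 40.32])   # transaction context
f1, f2 = 0, 1                      # the two features to vary
g1 = np.linspace(0, 15, 60)
g2 = np.linspace(0, 8, 60)
surface = np.empty((len(g2), len(g1)))
for i, v2 in enumerate(g2):
    for j, v1 in enumerate(g1):
        probe = x.copy()
        probe[f1], probe[f2] = v1, v2
        surface[i, j] = -score(probe)  # expectation = inverted anomaly score

plt.contourf(g1, g2, surface, levels=20)
plt.colorbar(label="expectation (inverted anomaly score)")
plt.xlabel("counterparty volume ratio (%)")
plt.ylabel("counterparty count ratio (%)")
plt.title("Two-feature expectation surface")
plt.show()
```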
In some cases, upon identifying an anomalous account (e.g., an anomalous restaurant account), the method may compare the anomalous restaurant account with an expectation surface of another type of business (e.g., an automotive store). If there is a match, it may indicate that the account is disguising itself as a restaurant, which is suspicious. A minimal sketch of such a comparison follows.
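This sketch compares an entity's observed feature profile against hypothetical expected ranges for several business types and reports the best match; the range values are invented for illustration. A restaurant-labeled account whose best match is "automotive store" would warrant review.

```python
# Hypothetical per-business-type expected ranges for the three features
# used in the earlier sketches.
EXPECTED_RANGES = {
    "restaurant":       [(0.0, 5.0), (0.0, 2.1), (0.0, 10.1)],
    "automotive store": [(8.0, 14.0), (4.0, 7.0), (30.0, 50.0)],
}

def best_match(observed):
    def fit(ranges):
        # Fraction of features falling inside the type's expected ranges.
        return sum(lo <= v <= hi for v, (lo, hi) in zip(observed, ranges)) / len(ranges)
    return max(EXPECTED_RANGES, key=lambda t: fit(EXPECTED_RANGES[t]))

observed = [11.0, 5.2, 40.32]  # the anomalous "restaurant" example
print("declared: restaurant | best match:", best_match(observed))
```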
The GUI may allow users to visualize the explanation and anomaly detection analytics in an interactive manner, e.g., via visualizations such as those shown in the accompanying figures.
The process 400 may then proceed to operation 420, wherein the system 100 may identify anomalous transactions based, at least in part, on the transaction log data. In some embodiments, transactions may be monitored and assigned a risk score (e.g., a money-laundering risk score). In some embodiments, this risk score may be generated by an IForest model based on data associated with the financial transaction logs, as described above: the less a transaction (or a group of transactions) conforms to the normal profile, the higher the risk score that may be assigned. In some embodiments, a risk score is assigned to a group of financial transactions associated with one entity (e.g., a bakery, a restaurant, a bookstore, a car dealer, etc.), optionally during a period of time (e.g., an hour, a day, a week, a month, a quarter, a year, etc.). If the risk score of a transaction or a group of transactions is greater than a pre-determined threshold, the feature selection engine 124 of the system may mark the underlying transaction(s) as an anomaly. As described above, this pre-determined threshold may be determined by an administration agent, such as the administration agent 110, or by a machine learning model that has been trained to understand the normal profile in different industries (e.g., restaurants, car dealers, bookstores, software companies) or for different business models (e.g., retailers, distributors, wholesalers, manufacturers, designers, service providers).
Next, the process 400 may continue to operation 430, where the system 100 may generate an expectation surface, which is an inverted anomaly score surface of a selected subset of features. The expectation surface may be generated to provide an explanation for marking the one or more anomalous transactions. The feature selection engine 124 may select a subset of features for calculating the expectation surface based on feature properties associated with individual features of the anomalous transactions. In some embodiments, the feature selection engine 124 may select features to calculate the expectation surface for all transactions, whether anomalous or not. In some embodiments, the expectation surface generation engine 126 of the server platform 120 may generate an expectation surface for the transactions, for example, based on the selected features.
An expectation surface may be generated, in operation 430, as an inverted anomaly score surface of a given subset of features (e.g., a feature subset f_es). The computation of the expectation surface is the same as described above. The expectation surface may indicate the normal profile associated with a transaction or a group of transactions, for example, as shown in the accompanying figures.
Next, the process 400 may proceed to operation 440, wherein the system 100 may provide explanations for the anomalous transactions based at least in part on the expectation surface. The explanation generation module 128 of the server platform 120 may generate these explanations. In some embodiments, the expectation surface may provide an expected value range for different features or factors. For example, the expectation surface may indicate that, for a bakery, it is normal to have 68-85% of revenue generated during morning hours, such as between 6:00 AM and 11:00 AM. When a large number of transactions (e.g., 90%) for a bakery fall outside of this expected range provided by the expectation surface, the transactions may be marked as an anomaly. In some embodiments, the explanation generation module 128 of the server platform 120 may utilize, in operation 440, this expectation surface to provide explanations as to why a transaction or a group of transactions is anomalous. For the above example, the explanation generation module 128 may provide an explanation such as: 90% of the transactions occur outside of the expected time period of 6:00 AM to 11:00 AM. In another example, the explanation generation module 128 may provide, in operation 440, an explanation such as: only 10% of the transactions occur during the expected time period of 6:00 AM to 11:00 AM, whereas normally it should be 68-85%. This may provide the information receivers (e.g., investigators, regulators, law enforcement personnel, etc.) with insights into why these transactions are marked anomalous, so as to facilitate investigation activities. In some embodiments, natural language processing (NLP) may be utilized in operation 440 to generate human-comprehensible explanations. NLP is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language. Other mechanisms may also be utilized in operation 440 to generate human-comprehensible explanations.
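One simple mechanism for operation 440 is template-based text generation from observed feature values and their expected ranges; the following minimal sketch reproduces the style of the example explanations above. The feature names and ranges mirror the earlier illustration; a production system might instead use an NLP model.

```python
# Map each factor name to (observed value, (expected low, expected high)).
FEATURES = {
    "volume of nighttime transactions (last 30 days)": (40.32, (0.0, 10.1)),
    "count of transactions against the same counterparty (last 30 days)": (5.20, (0.0, 2.1)),
}

def explain(features):
    lines = []
    for name, (observed, (lo, hi)) in features.items():
        if not lo <= observed <= hi:  # only out-of-range factors are reported
            lines.append(
                f"The {name} was {observed:.2f}%. The expected range of values "
                f"for unsuspicious transactions would have been {lo:g}%-{hi:g}%."
            )
    return "\n".join(lines) or "No anomalous factors found."

print(explain(FEATURES))
```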
Optionally, the method may further comprise comparing the expectation surface of one type of business to the expectation surfaces of other types of business to determine a money-laundering activity, as sketched above. The comparison result may be displayed to the user on the GUI to guide the user in further assessment.
The present disclosure provides computer systems that are programmed to implement methods of the disclosure.
The computer system 501 includes a central processing unit (CPU, also “processor” and “computer processor” herein) 505, which can be a single core or multi core processor, or a plurality of processors for parallel processing. The computer system 501 also includes memory or memory location 510 (e.g., random-access memory, read-only memory, flash memory), electronic storage unit 515 (e.g., hard disk), communication interface 520 (e.g., network adapter) for communicating with one or more other systems, and peripheral devices 525, such as cache, other memory, data storage and/or electronic display adapters. The memory 510, storage unit 515, interface 520 and peripheral devices 525 are in communication with the CPU 505 through a communication bus (solid lines), such as a motherboard. The storage unit 515 can be a data storage unit (or data repository) for storing data. The computer system 501 can be operatively coupled to a computer network (“network”) 530 with the aid of the communication interface 520. The network 530 can be the Internet, an internet and/or extranet, or an intranet and/or extranet that is in communication with the Internet. The network 530 in some cases is a telecommunication and/or data network. The network 530 can include one or more computer servers, which can enable distributed computing, such as cloud computing. The network 530, in some cases with the aid of the computer system 501, can implement a peer-to-peer network, which may enable devices coupled to the computer system 501 to behave as a client or a server.
The CPU 505 can execute a sequence of machine-readable instructions, which can be embodied in a program or software. The instructions may be stored in a memory location, such as the memory 510. The instructions can be directed to the CPU 505, which can subsequently program or otherwise configure the CPU 505 to implement methods of the present disclosure. Examples of operations performed by the CPU 505 can include fetch, decode, execute, and writeback.
The CPU 505 can be part of a circuit, such as an integrated circuit. One or more other components of the system 501 can be included in the circuit. In some cases, the circuit is an application specific integrated circuit (ASIC).
The storage unit 515 can store files, such as drivers, libraries and saved programs. The storage unit 515 can store user data, e.g., user preferences and user programs. The computer system 501 in some cases can include one or more additional data storage units that are external to the computer system 501, such as located on a remote server that is in communication with the computer system 501 through an intranet or the Internet.
The computer system 501 can communicate with one or more remote computer systems through the network 530. For instance, the computer system 501 can communicate with a remote computer system of a user. Examples of remote computer systems include personal computers (e.g., portable PC), slate or tablet PCs (e.g., Apple® iPad, Samsung® Galaxy Tab), telephones, smart phones (e.g., Apple® iPhone, Android-enabled device, Blackberry®), or personal digital assistants. The user can access the computer system 501 via the network 530.
Methods as described herein can be implemented by way of machine (e.g., computer processor) executable code stored on an electronic storage location of the computer system 501, such as, for example, on the memory 510 or electronic storage unit 515. The machine executable or machine readable code can be provided in the form of software. During use, the code can be executed by the processor 505. In some cases, the code can be retrieved from the storage unit 515 and stored on the memory 510 for ready access by the processor 505. In some situations, the electronic storage unit 515 can be precluded, and machine-executable instructions are stored on memory 510.
The code can be pre-compiled and configured for use with a machine having a processor adapted to execute the code, or can be compiled during runtime. The code can be supplied in a programming language that can be selected to enable the code to execute in a pre-compiled or as-compiled fashion.
Aspects of the systems and methods provided herein, such as the computer system 501, can be embodied in programming. Various aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of machine (or processor) executable code and/or associated data that is carried on or embodied in a type of machine readable medium. Machine-executable code can be stored on an electronic storage unit, such as memory (e.g., read-only memory, random-access memory, flash memory) or a hard disk. “Storage” type media can include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer into the computer platform of an application server. Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links or the like, also may be considered as media bearing the software. As used herein, unless restricted to non-transitory, tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.
Hence, a machine readable medium, such as computer-executable code, may take many forms, including but not limited to, a tangible storage medium, a carrier wave medium or physical transmission medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, such as may be used to implement the databases, etc. shown in the drawings. Volatile storage media include dynamic memory, such as main memory of such a computer platform. Tangible transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus within a computer system. Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a ROM, a PROM and an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code and/or data. Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
The computer system 501 can include or be in communication with an electronic display 535 that comprises a user interface (UI) 540. Examples of UI's include, without limitation, a graphical user interface (GUI) and web-based user interface.
Methods and systems of the present disclosure can be implemented by way of one or more algorithms. An algorithm can be implemented by way of software upon execution by the central processing unit 505.
While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. It is not intended that the invention be limited by the specific examples provided within the specification. While the invention has been described with reference to the aforementioned specification, the descriptions and illustrations of the embodiments herein are not meant to be construed in a limiting sense. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. Furthermore, it shall be understood that all aspects of the invention are not limited to the specific depictions, configurations or relative proportions set forth herein which depend upon a variety of conditions and variables. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is therefore contemplated that the invention shall also cover any such alternatives, modifications, variations or equivalents. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.
This application is a continuation of International Application No. PCT/EP2023/060861, filed on Apr. 25, 2023, which claims the priority and benefit of U.S. Provisional Application No. 63/334,324, filed on Apr. 25, 2022, each of which is incorporated herein by reference in its entirety.
| Number | Date | Country |
|---|---|---|
| 63334324 | Apr 2022 | US |

| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/EP2023/060861 | Apr 2023 | WO |
| Child | 18903693 | | US |