The website https://www.esecurityplanet.com/products/top-ueba-vendors.html lists several vendors that provide User and Entity Behavior Analysis (UEBA) solutions. Among these, https://logrhythm.com/products/logrhythm-user-xdr/, https://www.splunk.com/en_us/software/user-behavior-analytics.html, and https://www.forcepoint.com/product/ueba-user-entity-behavior-analytics are major players.
A User and Entity Behavior Analysis (UEBA) system helps build a behavioral profile of users and entities in an organization and assigns a risk score when the behavior of a user or entity deviates from the normal. This intrusion detection system helps identify compromised accounts, data exfiltration, and insider threats and can serve both as a diagnostic tool and an early warning system. In a specific implementation, identification of anomalies under three categories (time, pattern, and count) is accomplished using unsupervised ML techniques (RPCA, Markov chains, and EMA) and alert fatigue reduction through peer grouping. These unsupervised techniques require no labeled information and can constantly adapt to changing patterns in the data, thereby reducing false positives and eliminating the need for re-training.
User and Entity Behavior Analysis can be a key component of a cyber security framework that seeks to detect insider threats. UEBA systems track users and entities in an enterprise or organization and build up a profile of their normal behavior. These systems then raise alerts when the behavior of the user or entity deviates from the previously established normal baseline.
A proposed system tracks user and entity behavior under three categories: time, count, and pattern. An event is composed of different fields that describe it. For example, a logon event could have different fields like username, hostname, logon time, logon type, etc. An event is passed through one or more algorithms, depending on what kind of behavioral information needs to be tracked from the event. The decision of which inputs to feed to which algorithms, and the handling of the anomalous events detected, are done external to the system. This allows the engine to be highly flexible and generalize to multiple domains, without changes to the engine itself.
For example, a user logon event can be processed under the Time category to detect whether the user is logging on at an anomalous time. It can also be processed under the Pattern category to detect whether the user is logging on to a host that does not fit into the user's regular logon pattern.
Types of Anomalies
Time Anomalies
In the time category, the time at which a user performs a particular activity, such as logons, file downloads, and file uploads, is modeled. Cases such as machine logons or printing requests at unusual times could be indicative of a compromised user account and are flagged here. The algorithm used for identifying anomalies in this category is called Robust Principal Component Analysis (RPCA). This algorithm also provides an expected value for each anomaly. The difference between the expected time of occurrence of the specific event and the actual time of occurrence can help in gauging the severity of the anomaly and provide a risk score. For example, an employee who logs on at 5 am when he generally logs on between 9 and 10 am would be flagged as an anomaly.
The inference phase 204 starts with modeling an event at module 214. Adding anomalies to the model helps capture concept drift if it occurs and adapt accordingly, during which anomalies eventually become normal. For example, Model′ includes a first anomaly, and Model″ includes multiple anomalies. Having multiple anomalies over an extended period indicates that the data distribution has changed over time and concept drift has occurred, meaning detected anomalies become a new normal. Eventually Model″ becomes the Model, and this new Model will consider all the anomalies that were previously detected in Model′ and Model″ as normal. It may be noted, however, that in a specific implementation there is actually only one model; Model′ is conceptual.
The flowchart 200 continues to decision point 216 with determining whether the event has an anomaly. If it is determined the event has an anomaly (216-Yes), then the flowchart 200 continues to module 218 with determining a risk score that is equal to a function of x and y. For example, the risk score could be equal to the absolute value of (x-y)/x, where x is an expected value and y is an actual value. If it is determined the event does not have an anomaly (216-No), then the flowchart 200 continues to module 220 with setting a risk score equal to 0 (or a zero-equivalent value). In either case, the flowchart 200 ends with updating the model at module 222.
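The risk-score step at module 218 can be sketched as follows. This is a minimal illustration of the example formula |(x − y)/x| given above; the function name and the encoding of time as an hour value are assumptions for illustration.

```python
def time_risk_score(expected: float, actual: float, is_anomaly: bool) -> float:
    """Risk score as |(x - y) / x|, where x is the expected value and
    y is the actual value; non-anomalous events score 0 (module 220)."""
    if not is_anomaly:
        return 0.0
    return abs((expected - actual) / expected)

# An employee expected to log on around 9 am (hour 9) logs on at 5 am:
score = time_risk_score(expected=9.0, actual=5.0, is_anomaly=True)
```

A larger deviation from the expected value yields a proportionally larger score, which supports gauging severity as described above.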
Count Anomalies
A high number of file downloads, failed logons, printing requests, etc. by a user could be indicative of either a compromised account or an infiltration attempt by external hackers. In a specific implementation, these kinds of anomalies are detected by maintaining an Exponential Moving Average (EMA) for an aggregation interval specified in minutes. For example, if the interval is configured to be 60 minutes, then events are aggregated every 60 minutes and 24 different averages are maintained, one for each hour of the day. Thresholds are then calculated for each hour as, for example, (average + n * [exponential moving standard deviation]), where n is a configurable parameter. If the number of events per hour exceeds the associated threshold, an anomaly is flagged. Daily and monthly EMAs are also maintained with respective thresholds. Thus, an event could be an interval, daily, or monthly anomaly, or be an anomaly under more than one category. Based on the difference between the actual count of events and the threshold, a risk score is generated. For example, if a user has executed 20 DML queries on an SQL server when the threshold is only 3, an anomaly is detected with a risk score of 1.
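The per-hour EMA thresholding described above can be sketched as follows. This is a simplified, hedged sketch: the class name, the EMA span parameter, and the use of an exponential moving variance to derive the moving standard deviation are illustrative assumptions, not the exact implementation.

```python
class HourlyCountMonitor:
    """Per-hour EMA of event counts with an (EMA + n * EM-std) threshold.
    A sketch under assumed parameter names; daily/monthly EMAs would be
    maintained analogously."""

    def __init__(self, n_sigma: float = 3.0, span: int = 10):
        self.w = 2.0 / (span + 1)   # EMA weight, w = 2 / (n + 1)
        self.n_sigma = n_sigma      # the configurable parameter n
        self.ema = [None] * 24      # one average per hour of the day
        self.emvar = [0.0] * 24     # exponential moving variance per hour

    def update(self, hour: int, count: float) -> bool:
        """Return True if `count` exceeds the threshold for `hour`,
        then fold the count into the model (the model is updated with
        the event regardless of whether it is anomalous)."""
        anomalous = False
        if self.ema[hour] is not None:
            std = self.emvar[hour] ** 0.5
            threshold = self.ema[hour] + self.n_sigma * std
            anomalous = count > threshold
            diff = count - self.ema[hour]
            self.ema[hour] += self.w * diff
            self.emvar[hour] = (1 - self.w) * (self.emvar[hour] + self.w * diff * diff)
        else:
            self.ema[hour] = count  # first observation seeds the baseline
        return anomalous
```

For example, if a user's 10 am activity has hovered around 3 events per hour, a burst of 20 events in that hour exceeds the threshold and is flagged.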
The inference phase 304 starts with modeling an event at module 314. The flowchart 300 continues to decision point 316 with determining whether the event has an interval, daily, or monthly anomaly. If it is determined the event has an anomaly (316-Yes), then the flowchart 300 continues to module 318 with determining a risk score that is equal to the difference between a threshold and actual count. For example, the risk score could be equal to ([interval threshold]−count) for an interval risk score, ([daily threshold]−count) for a daily risk score, and ([monthly threshold]−count) for a monthly risk score. If it is determined the event does not have an anomaly (316-No), then the flowchart 300 continues to module 320 with setting a risk score equal to 0 (or a zero-equivalent value). In either case, the flowchart 300 ends with updating the model at module 322.
Pattern Anomalies
Anomalies that can be captured based on behavior patterns other than the time and the count of different events come under the pattern category. For example, we may wish to capture cases where a user logs on, in a remote session, to a machine that he has not used before. To detect this case, we can form a pattern using the fields USERNAME, HOSTNAME, and LOGON TYPE. The patterns to be monitored are configured initially, and a Markov Chain model is trained with available data. (In the given example, the data could include available logon records with user, host, and logon type information.) The model is used to determine the probabilities of different events occurring. A threshold is calculated from the training data as shown in
The inference phase 404 starts with modeling an event at module 414. The flowchart 400 continues to decision point 416 with determining whether the probability is greater than or equal to the threshold. If it is determined the probability is less than the threshold (416-No), then the flowchart 400 continues to module 418 with determining a risk score for an anomaly. For example, the risk score could be calculated as (1−[probability of event occurring]). In the case of a user logging on to a machine he has never used before, the probability would be 0, thus it would be detected as an anomaly with a risk score of 1. If it is determined the probability is greater than or equal to the threshold (416-Yes), then the flowchart 400 continues to module 420 with determining a risk score for a non-anomaly, such as by setting a risk score equal to 0 (or a zero-equivalent value). In either case, the flowchart 400 ends with updating the threshold at module 422.
Peer Grouping
Anomalies detected during the previous stage are based on the individual behavior of users and entities. There may be cases where an event is anomalous considering the past behavior of a specific user but not anomalous considering the normal behavior of his/her peers. In those cases, the risk score generated can be moderated by comparison with the baseline of the peer group to which the user belongs as shown in
When an event is flagged as an anomaly, the peer group to which the user associated with the event belongs is found, and the values of the fields in the event are compared with the acceptable values for that peer group at module 510. Depending on how many fields have values that are acceptable, the risk score is raised or lowered at module 512. The modified risk score is then used to assess the threat posed by a user to the organization, in conjunction with all the other risk scores associated with different events initiated by the user.
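The moderation at modules 510 and 512 can be sketched as follows. The specific adjustment (halving the score when every field is peer-acceptable) is an illustrative assumption; the source only specifies that the score is raised or lowered depending on how many fields have acceptable values.

```python
def moderate_risk_score(risk_score: float, event: dict, peer_acceptable: dict) -> float:
    """Lower the risk score in proportion to how many event fields have
    values acceptable to the user's peer group. `peer_acceptable` maps
    field name -> set of values considered normal for the peer group."""
    fields = [f for f in event if f in peer_acceptable]
    if not fields:
        return risk_score  # nothing to compare against; leave unchanged
    acceptable = sum(1 for f in fields if event[f] in peer_acceptable[f])
    fraction = acceptable / len(fields)
    # Illustrative scheme: a fully peer-acceptable event halves the score;
    # a fully unacceptable event keeps it unchanged.
    return risk_score * (1.0 - 0.5 * fraction)
```

The moderated score would then feed into the overall threat assessment for the user, alongside scores from the user's other events.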
Overview of the Algorithms Used
Robust Principal Component Analysis (RPCA)
This algorithm decomposes a data matrix into two components—low rank and sparse. The low rank component captures the underlying distribution of the data and can be thought of as representing normal behavior. The sparse component captures outliers or anomalies that do not fit in with the data distribution that is identified. Any non-zero entry in the sparse component indicates an anomaly. The past data points are stored in a model, and when new data points come in, they are appended to the older points and then passed to the algorithm. If the sparse component for the new data point is nonzero, then it is flagged as an anomaly.
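The decomposition can be sketched with Principal Component Pursuit solved by an inexact augmented Lagrangian method, one standard way to compute RPCA. This is a simplified sketch, not the implementation described above: the regularization weight, penalty schedule, and convergence tolerance are common defaults assumed for illustration.

```python
import numpy as np

def rpca(M, max_iter=500, tol=1e-7):
    """Decompose M into a low-rank component L (normal behavior) and a
    sparse component S (anomalies) via Principal Component Pursuit."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))              # sparsity weight
    mu = m * n / (4.0 * np.abs(M).sum() + 1e-12)  # initial penalty
    norm_M = np.linalg.norm(M) + 1e-12
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                         # Lagrange multipliers
    for _ in range(max_iter):
        # Singular value thresholding -> low-rank component
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Entrywise soft thresholding -> sparse (anomaly) component
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        residual = M - L - S
        Y = Y + mu * residual
        mu = min(mu * 1.05, 1e7)                 # gradually tighten penalty
        if np.linalg.norm(residual) / norm_M < tol:
            break
    return L, S
```

Appending a new data point to the stored points and re-running the decomposition, then checking whether the new point's entry in S is nonzero, implements the anomaly test described above.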
Exponential Moving Average (EMA)
In this method, the average of a series of points is calculated by giving exponentially decreasing weights to older points. The formula to calculate the EMA at a point t is:
EMA(t) = w * x(t) + (1 − w) * EMA(t−1)
where:
x(t) = current data point
EMA(t−1) = EMA at the previous data point
EMA(t) = EMA at the current data point
w = 2/(n+1), the weight
n = configurable parameter which determines how many of the latest points should contribute the most to the EMA
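The recurrence above translates directly into code; this one-step update is a minimal sketch with illustrative names.

```python
def ema_update(prev_ema: float, x: float, n: int) -> float:
    """One step of the exponential moving average:
    EMA(t) = w * x(t) + (1 - w) * EMA(t-1), with w = 2 / (n + 1)."""
    w = 2.0 / (n + 1)
    return w * x + (1.0 - w) * prev_ema

# With n = 9, w = 0.2: a previous EMA of 10 and a new point of 20
# move the average to 12.
ema = ema_update(10.0, 20.0, 9)
```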
Markov Chains
This algorithm works by forming chains of different states that can occur one after the other, on the principle that the probability of a state B occurring after a state A depends only on the current state A and not on any other states that occurred before A. This has been adapted to finding anomalies in patterns as follows. Suppose the pattern to be modeled is USERNAME, HOSTNAME, LOGON TYPE. Then the probability of the whole chain is obtained by multiplying the probability of co-occurrence of the USERNAME and HOSTNAME values, and the probability of co-occurrence of the HOSTNAME and LOGON TYPE values.
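The chain probability and the (1 − probability) risk score from flowchart 400 can be sketched as follows. The count-based transition estimator and the field names are illustrative assumptions.

```python
from collections import Counter

def chain_probability(events, pattern_event,
                      fields=("USERNAME", "HOSTNAME", "LOGON_TYPE")):
    """Probability of a pattern as the product of pairwise co-occurrence
    probabilities between consecutive fields, estimated from counts over
    historical `events` (each a dict of field -> value)."""
    prob = 1.0
    for a, b in zip(fields, fields[1:]):
        pair_counts = Counter((e[a], e[b]) for e in events)
        first_counts = Counter(e[a] for e in events)
        if first_counts[pattern_event[a]] == 0:
            return 0.0  # value never seen: probability is 0
        prob *= pair_counts[(pattern_event[a], pattern_event[b])] / first_counts[pattern_event[a]]
    return prob

def pattern_risk_score(prob: float, threshold: float = 0.1) -> float:
    """Risk score of (1 - probability) below the threshold, else 0."""
    return 1.0 - prob if prob < threshold else 0.0
```

A logon to a never-before-used host yields probability 0 and hence a risk score of 1, matching the example in the inference phase above.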
Robust Clustering Using Links (ROCK)
This is an agglomerative hierarchical clustering algorithm that is especially suitable for clustering based on categorical variables. An agglomerative clustering algorithm follows the bottom-up approach, where each data point is considered a cluster initially, and clusters are merged in successive levels as shown by way of example in the tree structure 600 of
ROCK uses a concept of links between data points instead of traditional distance measures such as Euclidean distance. The algorithm performs better than traditional partitioning clustering methods such as KMeans, KMedoids, CLARA, CLARANS, etc., which are more effective for numerical datasets. Density-based methods such as DBSCAN may flag certain records as noise, so users such as superadmins may be singled out as anomalies instead of forming a valid cluster. Once data is passed to the algorithm for clustering, the similarity between every pair of data points is calculated based on the Jaccard coefficient and stored. A pair of points is considered to be neighbors if their similarity exceeds a certain threshold. The number of links between a pair of points is the number of common neighbors for the points. The larger the number of links between a pair of points, the greater the likelihood that they belong to the same cluster. In the first iteration of the algorithm, each point is considered to be a cluster as shown in
The flowchart 700 continues to decision point 714 with determining whether a desired number of clusters has been reached or no more merging is possible. If not (714-No), then the flowchart returns to module 710 and continues as described previously. Otherwise (714-Yes), the flowchart 700 ends at module 716 with outputting cluster representations. Thus, the process continues until no more clusters can be merged or the number of clusters formed falls below a desired number. Once clustering is complete, the representations of the different clusters that are formed can be used as the baseline behavior for different peer groups.
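The neighbor-and-link computation at the heart of ROCK can be sketched as follows. This covers only the similarity, neighbor, and link steps, not the full merging procedure; the threshold value is an illustrative assumption.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient between two sets of categorical values."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def count_links(points, theta):
    """Links between every pair of points, where two points are neighbors
    if their Jaccard similarity is at least `theta`, and the link count
    for a pair is their number of common neighbors."""
    n = len(points)
    neighbor = [[i != j and jaccard(points[i], points[j]) >= theta
                 for j in range(n)] for i in range(n)]
    links = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            links[i][j] = sum(1 for k in range(n) if neighbor[i][k] and neighbor[j][k])
    return links
```

Pairs with many links are good merge candidates; pairs with zero links (such as an isolated record) remain in their own clusters rather than being discarded as noise.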
ConStream (CONdensation Based STREAM Clustering)
ConStream adapts to time-varying patterns. For example, a person who logs in from one location relocates, and the new location becomes the new normal. Because the data distribution has changed, a new category is created based on the new normal without separate training. Advantageously, the algorithm adapts to concept drift without separate training: an admin may initially be notified of an anomaly, but as the system adapts to the change, the admin will eventually stop receiving notifications.
When data is received as a stream, the data distribution may vary over time; i.e., clusters that are present in the data at time t may be inactive at time t+1, and new clusters may be created as the data distribution changes. When the data distribution changes, thus rendering the learned model obsolete, concept drift is said to have occurred. Although ROCK is suitable for clustering categorical variables, it is not suitable for handling streaming data: concept drift can only be handled externally, by running the algorithm at regular intervals and comparing the clusters produced, and its space and time complexity is high because it performs multiple passes over the data points. ConStream handles concept drift in the following manner. If an incoming data point at time t does not fit into any of the existing clusters, a new cluster is created with this point as shown in
From module 810, the flowchart 800 continues to decision point 812 where it is determined whether the number of clusters is greater than k, where k represents a (configurable, actual, or preferred) maximum number of clusters that can be present at any given time. If not (812-No), then the flowchart 800 ends at module 808 as described previously. If so (812-Yes), then the flowchart 800 continues to module 814 where the least recently updated cluster is removed and then ends at module 808 as described previously. The algorithm goes over each point only once and thus is much faster than ROCK. It bounds memory requirements through the configurable parameter k: if the number of clusters exceeds this value, the least recently updated cluster is removed. A weighted Jaccard coefficient is used as the similarity measure; while calculating the Jaccard coefficient, the weights provided to each point by the fading function are used to determine a weighted count.
A new cluster could be created from a dramatic difference between an individual and a peer group, and cluster death happens when a cluster becomes empty. Because the data distribution can change with time, recent points are given more weight. This can be done, for example, through a fading function f(t) = 2^(−λt) which decays exponentially with time t. Here λ is called the decay rate, and the higher the value of λ, the higher the importance given to recent data compared to data points in the past. Thus, for data streams which do not change much, we should pick a lower value of λ, whereas for rapidly changing data streams we should pick a higher value. The maximum inactivity period after which a cluster dies is equal to 1/λ. Thus, if λ is set to 0.001, a cluster dies if no new points are added to the cluster in 1000 time steps.
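The fading function and the cluster-death rule translate directly into code; this is a minimal sketch with illustrative names.

```python
def fading_weight(age: float, decay_rate: float) -> float:
    """Fading function f(t) = 2 ** (-lambda * t): exponentially decaying
    weight for a point that is `age` time steps old."""
    return 2.0 ** (-decay_rate * age)

def max_inactivity(decay_rate: float) -> float:
    """A cluster dies after roughly 1 / lambda time steps without updates."""
    return 1.0 / decay_rate
```

With λ = 0.001, a point's weight halves every 1000 time steps, and a cluster with no new points for 1000 time steps is removed.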
All the algorithms discussed above update their models with the latest events irrespective of whether the events are anomalous. The reason behind this is that anomalous events generally span a short period of time, so even if the model is updated with these events, there would not be a significant change to the behavior of the model. The behavior of the model changes only if the events span across a longer duration. This is the case when the data distribution itself changes. Updating the model with these events would then ensure that the model recognizes the change and adapts to the new data distribution without any external intervention. This enables the system to function independently as soon as the initial configurations are made. The administrator also has an option of not updating the models with anomalous events if he so desires. While this is generally unnecessary and interferes with the automated running of the system, it can be done if the administrator wishes to have more control over the modelling.
Implementation Example (LOG360)
The techniques described above can be adapted to multiple (indeed, most) domains.
Implementation entails looking at actual input and how data is processed. Advantageously, the techniques are effective with relatively limited labeled data, i.e., with data that is not labeled as normal or abnormal. When insufficient data is available for each user, it is difficult to label each transaction. A domain may include a lot of data on users overall but lack sufficient data at the individual user level for labeling; in such a case, the techniques described above are powerful tools.
In this example, LOG360 builds risk profiles for anomalies, for which risk scores are generated using techniques described above. Data used in this example includes user sign-on logs, client server logs, firewall logs, printer logs, file access logs, and database access logs. Such data can be used to determine risk for multiple different scenarios. For example, a higher than typical number of file downloads may indicate a risk of an attempt to steal information, while a remote logon from a new device may indicate a risk of a network security breach. In this example, an admin can click on a specific user to get a profile and drill down to assess a threat.
An alternative implementation example is in a health care setting, such as a hospital. Hospitals have doctors, patients, and other personnel, and it is useful to know about access to medication, diagnosis, treatment, etc., as well as who is accessing what. Detecting anomalous events, such as a change in medication or dosage, can be lifesaving in such an environment.
An alternative implementation example is in banking, where flagged behavior can, for example, identify fraudulent actors. Typically, a bank will not have enough training data to identify "normal" behavior, so anomalies are detected instead, such as where and when a user logs on to net banking. It might be practically impossible to identify bad behavior in general, but anomalies are identifiable on an individual level and eventually turn into normal behavior for the individual. For example, an admin may configure a banking system to raise severe alerts when a logon attempt happens in a place known as a haven of hackers, but the admin may not know whether a logon that happens outside a blacklisted place is anomalous without considering individual user behavior. The UEBA system is useful in this scenario because it captures the behavior of each individual user. Thus, for example, a logon attempt by an individual user from a new place can trigger a severe warning, and additional logon attempts by the individual user from the new place can trigger warnings of potentially decreasing severity (or increased severity, followed by decreasing severity) until the warnings cease and the new place is no longer considered "new," becoming a new normal for the individual user.
Conceptualized System
The CRM 402 and other computer readable mediums discussed in this paper are intended to include all mediums that are statutory (e.g., in the United States, under 35 U.S.C. 101), and to specifically exclude all mediums that are non-statutory in nature to the extent that the exclusion is necessary for a claim that includes the computer-readable medium to be valid. Known statutory computer-readable mediums include hardware (e.g., registers, random access memory (RAM), non-volatile (NV) storage, to name a few), but may or may not be limited to hardware.
The CRM 402 and other computer readable mediums discussed in this paper are intended to represent a variety of potentially applicable technologies. For example, the CRM 402 can be used to form a network or part of a network. Where two components are co-located on a device, the CRM 402 can include a bus or other data conduit or plane. Where a first component is co-located on one device and a second component is located on a different device, the CRM 402 can include a wireless or wired back-end network or LAN. The CRM 402 can also encompass a relevant portion of a WAN or other network, if applicable.
The devices, systems, and computer-readable mediums described in this paper can be implemented as a computer system or parts of a computer system or a plurality of computer systems. In general, a computer system will include a processor, memory, non-volatile storage, and an interface. A typical computer system will usually include at least a processor, memory, and a device (e.g., a bus) coupling the memory to the processor. The processor can be, for example, a general-purpose central processing unit (CPU), such as a microprocessor, or a special-purpose processor, such as a microcontroller.
The memory can include, by way of example but not limitation, random access memory (RAM), such as dynamic RAM (DRAM) and static RAM (SRAM). The memory can be local, remote, or distributed. The bus can also couple the processor to non-volatile storage. The non-volatile storage is often a magnetic floppy or hard disk, a magnetic-optical disk, an optical disk, a read-only memory (ROM), such as a CD-ROM, EPROM, or EEPROM, a magnetic or optical card, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory during execution of software on the computer system. The non-volatile storage can be local, remote, or distributed. The non-volatile storage is optional because systems can be created with all applicable data available in memory.
Software is typically stored in the non-volatile storage. Indeed, for large programs, it may not even be possible to store the entire program in the memory. Nevertheless, it should be understood that for software to run, if necessary, it is moved to a computer-readable location appropriate for processing, and for illustrative purposes, that location is referred to as the memory in this paper. Even when software is moved to the memory for execution, the processor will typically make use of hardware registers to store values associated with the software, and local cache that, ideally, serves to speed up execution. As used herein, a software program is assumed to be stored at an applicable known or convenient location (from non-volatile storage to hardware registers) when the software program is referred to as “implemented in a computer-readable storage medium.” A processor is considered to be “configured to execute a program” when at least one value associated with the program is stored in a register readable by the processor.
In one example of operation, a computer system can be controlled by operating system software, which is a software program that includes a file management system, such as a disk operating system. One example of operating system software with associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Wash., and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux operating system and its associated file management system. The file management system is typically stored in the non-volatile storage and causes the processor to execute the various acts required by the operating system to input and output data and to store data in the memory, including storing files on the non-volatile storage.
The bus can also couple the processor to the interface. The interface can include one or more input and/or output (I/O) devices. Depending upon implementation-specific or other considerations, the I/O devices can include, by way of example but not limitation, a keyboard, a mouse or other pointing device, disk drives, printers, a scanner, and other I/O devices, including a display device. The display device can include, by way of example but not limitation, a cathode ray tube (CRT), liquid crystal display (LCD), or some other applicable known or convenient display device. The interface can include one or more of a modem or network interface. It will be appreciated that a modem or network interface can be considered to be part of the computer system. The interface can include an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g., “direct PC”), or other interfaces for coupling a computer system to other computer systems. Interfaces enable computer systems and other devices to be coupled together in a network.
The computer systems can be compatible with or implemented as part of or through a cloud-based computing system. As used in this paper, a cloud-based computing system is a system that provides virtualized computing resources, software and/or information to end user devices. The computing resources, software and/or information can be virtualized by maintaining centralized services and resources that the edge devices can access over a communication interface, such as a network. “Cloud” may be a marketing term and for the purposes of this paper can include any of the networks described herein. The cloud-based computing system can involve a subscription for services or use a utility pricing model. Users can access the protocols of the cloud-based computing system through a web browser or other container application located on their end user device.
Returning to the example of
A computer system can be implemented as an engine, as part of an engine or through multiple engines. As used in this paper, an engine includes one or more processors or a portion thereof. A portion of one or more processors can include some portion of hardware less than all of the hardware comprising any given one or more processors, such as a subset of registers, the portion of the processor dedicated to one or more threads of a multi-threaded processor, a time slice during which the processor is wholly or partially dedicated to carrying out part of the engine's functionality, or the like. As such, a first engine and a second engine can have one or more dedicated processors or a first engine and a second engine can share one or more processors with one another or other engines. Depending upon implementation-specific or other considerations, an engine can be centralized or its functionality distributed. An engine can include hardware, firmware, or software embodied in a computer-readable medium for execution by the processor that is a component of the engine. The processor transforms data into new data using implemented data structures and methods, such as is described with reference to the figures in this paper.
The engines described in this paper, or the engines through which the systems and devices described in this paper can be implemented, can be cloud-based engines. As used in this paper, a cloud-based engine is an engine that can run applications and/or functionalities using a cloud-based computing system. All or portions of the applications and/or functionalities can be distributed across multiple computing devices and need not be restricted to only one computing device. In some embodiments, the cloud-based engines can execute functionalities and/or modules that end users access through a web browser or container application without having the functionalities and/or modules installed locally on the end-users' computing devices.
Returning to the example of
A database management system (DBMS) can be used to manage a datastore. In such a case, the DBMS may be thought of as part of the datastore, as part of a server, and/or as a separate system. A DBMS is typically implemented as an engine that controls organization, storage, management, and retrieval of data in a database. DBMSs frequently provide the ability to query, backup and replicate, enforce rules, provide security, do computation, perform change and access logging, and automate optimization. Examples of DBMSs include Alpha Five, DataEase, Oracle database, IBM DB2, Adaptive Server Enterprise, FileMaker, Firebird, Ingres, Informix, Mark Logic, Microsoft Access, InterSystems Cache, Microsoft SQL Server, Microsoft Visual FoxPro, MonetDB, MySQL, PostgreSQL, Progress, SQLite, Teradata, CSQL, OpenLink Virtuoso, Daffodil DB, and OpenOffice.org Base, to name several.
Database servers can store databases, as well as the DBMS and related engines. Any of the repositories described in this paper could presumably be implemented as database servers. It should be noted that there are two logical views of data in a database, the logical (external) view and the physical (internal) view. In this paper, the logical view is generally assumed to be data found in a report, while the physical view is the data stored in a physical storage medium and available to a specifically programmed processor. With most DBMS implementations, there is one physical view and an almost unlimited number of logical views for the same data.
A DBMS typically includes a modeling language, data structure, database query language, and transaction mechanism. The modeling language is used to define the schema of each database in the DBMS, according to the database model, which may include a hierarchical model, network model, relational model, object model, or some other applicable known or convenient organization. An optimal structure may vary depending upon application requirements (e.g., speed, reliability, maintainability, scalability, and cost). One of the more common models in use today is the ad hoc model embedded in SQL. Data structures can include fields, records, files, objects, and any other applicable known or convenient structures for storing data. A database query language can enable users to query databases and can include report writers and security mechanisms to prevent unauthorized access. A database transaction mechanism ideally ensures data integrity, even during concurrent user accesses, with fault tolerance. DBMSs can also include a metadata repository; metadata is data that describes other data.
As used in this paper, a data structure is associated with a particular way of storing and organizing data in a computer so that it can be used efficiently within a given context. Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by an address, a bit string that can be itself stored in memory and manipulated by the program. Thus, some data structures are based on computing the addresses of data items with arithmetic operations; while other data structures are based on storing addresses of data items within the structure itself. Many data structures use both principles, sometimes combined in non-trivial ways. The implementation of a data structure usually entails writing a set of procedures that create and manipulate instances of that structure. The datastores, described in this paper, can be cloud-based datastores. A cloud-based datastore is a datastore that is compatible with cloud-based computing systems and engines.
Returning to the example of
The risk threshold notification engine 2014 provides the risk score in the risk score datastore 2012 in a report in the reports datastore 2016. It may be noted a risk score of 0 can be referred to as having no risk, which would always fail to reach the risk threshold for notification purposes. However, it is also possible to set the risk threshold to a value above 0, if desired. Reports need not be complete reports. For example, the reports datastore 2016 can include data sufficient to populate charts and tables as illustrated in the screenshots of
The clustering engine 2102 is intended to represent an engine that clusters data associated with a user or entity in the profile datastore 2104 for the peer groups datastore 2112. Clustering can be accomplished as described previously with reference to the examples of
The peer group comparison engine 2108 is intended to represent an engine that, upon receiving an event represented in the event datastore 2110, matches it to the corresponding peer group of the peer groups datastore 2112 and provides an indication to the risk score modification engine 2114 to modify a current risk score associated with the event and the user or entity. Advantageously, if a user or entity can be matched to a peer group that has certain behaviors, events that are perhaps anomalous for the user or entity can be seen as non-anomalous for the peer group, which could be motivation to reduce the associated risk score. For example, an engineer who has not previously made use of a useful website may, upon learning about the resource, navigate to the website for the first time; if other engineers of the same peer group use the website, the risk score associated with the new behavior can (perhaps) be reduced. Advantageously, this risk score analysis can be complemented by a rules-based network security protocol, allowing the use of both network security rules and risk scores. For example, a system can include risk scores associated with remote logon in addition to rules associated with remote logon that may supersede, act as a minimum or maximum, or act as a default for risk scoring.
The present application claims priority to Indian Provisional Patent Application No. 202041025719 filed Jun. 18, 2020, Indian Provisional Patent Application No. 202041043889 filed Oct. 8, 2020, U.S. Provisional Patent Application Ser. No. 63/083,057 filed Sep. 24, 2020, and U.S. Provisional Patent Application Ser. No. 63/120,165 filed Dec. 1, 2020, which are incorporated by reference herein.