This invention relates generally to security analytics in computer networks, and more specifically to dynamically determining rule risk scores in a cybersecurity monitoring system.
Fraud or cybersecurity monitoring systems conventionally rely on rules to detect sessions of interest. For each user session, cybersecurity monitoring systems evaluate a set of rules based on user activity in a session. Anomalous user activity typically will cause one or more rules to trigger, as malicious activities often manifest themselves in anomalous changes from historical habitual patterns.
Each rule is associated with a score, and, if a rule is triggered during a session, that rule's score is added to the user's session score. The sum of the scores for triggered rules is the final score for the session. A session with a final score over a threshold is presented to analysts for review.
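The conventional scoring scheme described above can be sketched as follows. This is a minimal illustration with hypothetical rule names, scores, and threshold, not the patented implementation:

```python
# Conventional rule-based session scoring: each triggered rule contributes
# its fixed score, and the session total is compared against a threshold.
# Rule names, scores, and the threshold below are hypothetical.
RULE_SCORES = {"rare_logon_host": 20, "new_country": 30, "odd_hours": 10}
ALERT_THRESHOLD = 40

def score_session(triggered_rules):
    """Sum the scores of all rules triggered during the session."""
    return sum(RULE_SCORES[r] for r in triggered_rules)

def needs_review(triggered_rules):
    """A session is presented to analysts if its final score exceeds the threshold."""
    return score_session(triggered_rules) > ALERT_THRESHOLD

session = ["rare_logon_host", "new_country"]  # rules triggered in one session
print(score_session(session))   # 50
print(needs_review(session))    # True
```

Note that any rule with a high false-positive rate inflates every session total it touches, which is the problem the dynamic scoring below addresses.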
One challenge for rule-based systems is uncontrolled inflation of the final session score due to high false-positive rates for some rules. When rules trigger frequently across sessions in a network, they inflate scores across the board. This makes threshold setting difficult.
There is a need for a method to reduce a rule's score if it is observed to trigger across many different sessions in a particular network. However, rules may have different false positive rates in different networks. Therefore, a threshold tuned in one network need not apply to another.
Furthermore, conventional rule-based systems have no ability to learn how a rule behaves on a per-user basis. If a rule tends to trigger often for a particular network user, the rule will continue to trigger for that user in the future. Such a rule is deemed to have a high false-positive rate for the concerned user, but not necessarily for other users. This results in score inflation for that particular user's sessions. Therefore, there is a need for a per-user-and-rule score discounting scheme to reduce the score inflation.
The present disclosure relates to a cybersecurity monitoring system, method, and computer program for dynamically determining a rule's risk score based on the network and user for which the rule triggered. The methods described herein address score inflation problems associated with the fact that rules have different false positive rates in different networks and for different users, even within the same network. In response to a rule triggering, the system dynamically adjusts the default risk points associated with the triggered rule based on a per-rule and per-user probability that the rule triggered due to malicious behavior.
To calculate the aforementioned probability for each rule-user pair, the security-monitoring system (“the system”) first assesses “a global risk” probability for each rule in the applicable network/environment. A rule's global risk probability is the network-wide probability that the rule triggered due to malicious behavior. The method accounts for frequency of a rule's triggering across all entities in the network, and the more frequently a rule triggers, the lower the global risk probability for the rule. This risk is a global risk since it is derived from the entire population in the network.
For each user-rule pair in the system (or in an applicable set of rules), the system then calculates a “local risk” probability using: (1) the global risk probability for the rule, and (2) the rule's triggering history for the user. The local risk probability is the probability that a rule triggered for a user due to malicious behavior. The method accounts for frequency of a rule's triggering in the past user history, and the more frequently a rule triggers for a user, the lower the local risk probability for the rule-user pair. The local risk probability for a rule is customized per user. This risk is a local risk since it is derived in a user's local context. The rule's global risk is utilized in the process of calculating the local risk probability for each rule-user pair.
In certain embodiments, both the global and local risk probabilities also factor in context within a network environment. In these embodiments, the global risk probability is based on the frequency in which a rule triggers in a particular context, and the local risk probability for a user/rule pair is based on the global risk and the user's history in the particular context.
In future triggering of a rule for a user, instead of assigning a default, fixed score to the rule, a user-specific (and, in some cases, context-specific) dynamic score is assigned to the rule by adjusting the default risk score of the rule in accordance with the local risk probability of the applicable user-rule pair (and, in some cases, network context). If a user-specific risk score is not available because a rule has never triggered for a user (or at least not triggered during a training period), the global risk probability is used as the anchor to adjust the rule's risk score. If a rule never triggers for any user during a training period, the default risk score is used.
This method can dramatically reduce the common problem of score inflation in a conventional rule-based system. Less-than-useful alerts due to common triggering of some rules are suppressed. More interesting alerts with varied rules triggering can rise in ranks, resulting in better overall malicious-behavior detection.
The present disclosure describes a system, method, and computer program for dynamically determining a cybersecurity rule risk score based on the network and user for which the rule triggered. The method is performed by a computer system that detects cyber threats in a network and performs a risk assessment of user network activity (“the system”). The system may be a user behavior analytics (UBA) system or a user-and-entity behavior analytics system (UEBA). An example of a UBA/UEBA cybersecurity monitoring system is described in U.S. Pat. No. 9,798,883 issued on Oct. 24, 2017 and titled “System, Method, and Computer Program for Detecting and Assessing Security Risks in a Network,” the contents of which are incorporated by reference herein.
As context for the methods described herein, the system scores user activity in a network for potential malicious behavior. More specifically, the system evaluates user sessions in view of a set of rules and, for each user session, determines whether any rules are triggered as a result of the session. A "user session" may be a user logon session, a time period in which user activity is evaluated, or another grouping of user activity. Each of the evaluated rules is associated with a rule score, and, if a rule is triggered during the session, the rule's score is added to the user's session score. The methods described herein relate to dynamically adjusting the rule's score based on the network, user, and, in some cases, network context in which or for which the rule triggered. The sum of the scores for triggered rules is the final score for the session. The system raises an alert for user sessions with scores above a threshold.
The methods described herein achieve two objectives.
The first objective calls for a global model that calculates a global risk probability for each triggered rule ri. The global risk probability for rule ri triggering is notated herein as P(malice|ri) or P(M|ri). The second objective calls for a per-account personal model that calculates a local risk probability for each triggered rule ri by user u. The local risk probability for rule ri triggering is notated herein as P(malice|ri, u) or P(M|ri, u). In the methods described herein, the two objectives are combined seamlessly. This is done by leveraging the global model as a conjugate prior for the personal model. In other words, the global posterior P(malice|ri) is used as the prior in calculating the personal posterior P(malice|ri, u).
With respect to the method of FIG. 1, the system first calculates a "global risk" probability for each rule in the applicable network (step 110). As stated above, a rule's global risk probability is the network-wide probability that the rule triggered due to malicious behavior.
Second, for each user and each rule in the system, the system calculates a "local risk" probability for the user-rule pair using: (1) the global risk probability for the applicable rule, and (2) the rule's triggering history for the applicable user (step 120). The local risk probability is the probability that a rule triggered for the particular user due to malicious behavior. The local risk probability reflects the frequency of a rule's triggering both network-wide and for the particular user. The more frequently a rule triggers for a user, the lower the local risk probability for the corresponding rule-user pair. The local risk probability for a rule is customized per user. This risk is a local risk since it is derived in a user's local context. The rule's global risk is utilized in the process of calculating the local risk probability for each rule-user pair. In one embodiment, the data used to calculate the local and global risk probabilities is gathered over a training period, such as two weeks or one month. The training data and the corresponding risk probabilities may be updated periodically or on a sliding-window basis.
In a future triggering of a rule during a user session, the system dynamically adjusts the risk score of the rule based on the local risk probability for the applicable rule-user pair. Each rule has a starting, default risk score. When a rule triggers, the system retrieves the default score associated with the triggered rule and dynamically adjusts the rule's score using the applicable local risk probability (steps 130-150). The resulting rule score is specific to the user (as well as specific to the rule). In one embodiment, the local risk probability ranges from 0 to 1, and the adjusted risk score is obtained by multiplying the default score by the local risk probability. The adjusted risk score is added to session risk score for the applicable user session (step 160).
In one embodiment, a Bayesian formula is used to calculate the global and local risk probabilities. For example, the following formula may be used to calculate the global risk probability:

P(M|ri) = P(ri|M)·P(M) / [P(ri|M)·P(M) + P(ri|L)·P(L)]

Where:
P(M|ri) denotes the global risk probability for rule ri. "L" in the formulas herein stands for a legitimate session, and "M" in the formulas herein stands for a malicious session. P(ri|M) is 1/N, where N is the number of unique observed triggered rules. P(ri|L) is the number of unique sessions (or users) in which ri is observed divided by the number of sessions (or users) in which any rule is observed. In one embodiment, since P(M) and P(L) are unknown, they are set to 0.5 (i.e., equal probability of either a legitimate or a malicious session), but those skilled in the art will appreciate that other values could be used as necessary to optimize the rule scores.
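The global risk calculation can be sketched in a few lines. The function below is an illustrative implementation of the Bayesian estimate just described, using hypothetical rule names and counts; it is not the patented implementation:

```python
def global_risk(rule, rule_session_counts, total_rule_sessions):
    """Bayesian global risk probability P(M|r_i) for a rule.

    Follows the estimates in the text: P(r_i|M) = 1/N for N unique
    triggered rules, P(r_i|L) = (sessions where r_i triggered) /
    (sessions where any rule triggered), and P(M) = P(L) = 0.5.
    """
    n_unique_rules = len(rule_session_counts)
    p_ri_given_m = 1.0 / n_unique_rules
    p_ri_given_l = rule_session_counts[rule] / total_rule_sessions
    p_m = p_l = 0.5
    numerator = p_ri_given_m * p_m
    return numerator / (numerator + p_ri_given_l * p_l)

# Hypothetical training data: number of sessions in which each rule triggered.
counts = {"rule_a": 5, "rule_b": 95}  # rule_b triggers far more often
total = 100
print(global_risk("rule_a", counts, total))  # high: rule rarely triggers
print(global_risk("rule_b", counts, total))  # low: rule triggers frequently
```

As the example shows, the more frequently a rule triggers across the network, the larger P(ri|L) becomes and the lower the resulting global risk probability.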
If the system monitors more than one network, the global risk probability is calculated for each network monitored by the system, as the frequency in which rules trigger can vary across networks.
In one embodiment, the local risk probability is calculated for each rule-user pair in a network using the following Bayesian formula:

P(M|ri, u) = P(ri|M, u)·P(M|ri) / [P(ri|M, u)·P(M|ri) + P(ri|L, u)·P(L|ri)]

Where:
P(M|ri, u) denotes the local risk probability for rule ri and user u. P(M|ri) is the global risk probability for the rule. P(L|ri) = 1 − P(M|ri). P(ri|M, u) is 1/M, where M is the number of unique observed triggered rules in the account's history. P(ri|L, u) is the number of times ri is observed in the account's history divided by the number of times any rule is observed in the account's history.
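The local risk calculation chains the global result through the user's own history, with the global posterior acting as the prior. The sketch below illustrates this under the same assumptions as the text, with hypothetical counts; it is not the patented implementation:

```python
def local_risk(rule, user_rule_counts, p_m_given_ri):
    """Bayesian local risk probability P(M|r_i, u) for a rule-user pair.

    The rule's global risk P(M|r_i) serves as the prior; the user's own
    triggering history supplies the likelihoods, as described in the text.
    """
    m_unique = len(user_rule_counts)        # unique rules in user history
    p_ri_given_m_u = 1.0 / m_unique         # P(r_i | M, u)
    total_triggers = sum(user_rule_counts.values())
    p_ri_given_l_u = user_rule_counts[rule] / total_triggers  # P(r_i | L, u)
    p_l_given_ri = 1.0 - p_m_given_ri       # P(L | r_i)
    numerator = p_ri_given_m_u * p_m_given_ri
    return numerator / (numerator + p_ri_given_l_u * p_l_given_ri)

# Hypothetical user history: rule_b fires constantly for this user,
# so its local risk drops well below its global risk.
history = {"rule_a": 1, "rule_b": 19}
print(local_risk("rule_a", history, p_m_given_ri=0.9))
print(local_risk("rule_b", history, p_m_given_ri=0.35))
```

In the example, rule_b's local risk falls below its global prior because it triggers for this user in nearly every session, while rule_a's local risk rises above its prior.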
In certain embodiments, both the global and local risk probabilities also factor in context within a network environment. In these embodiments, the global risk probability is based on the frequency in which a rule triggers in a particular context, and the local risk probability for a user/rule pair is based on the context-specific global risk and the user's history in a particular context. A “network context” for a rule is defined by the data source on which the rule is evaluated. For example, assume a particular rule may trigger due to an event on a work station or an event on a server. In this example, there are two different network contexts for the rule: workstations and servers.
As stated above, in certain embodiments the global and local risk probabilities also factor in network context. With respect to the method of FIG. 2, the system first calculates a context-specific global risk probability for each rule-context pair in the network (step 210).
Second, for each user/rule/context combination, the system calculates a local risk probability for the combination using: (1) the context-specific global risk probability for the applicable rule, and (2) the rule's triggering history for the applicable user in the applicable network context (step 220). The local risk probability for the combination is the probability that the applicable rule triggered for the applicable user in the applicable network context due to malicious behavior.
In one embodiment, the data used to calculate the context-specific local and global risk probabilities is gathered over a training period, such as two weeks or one month. The training data and the corresponding risk probabilities may be updated periodically or on a sliding-window basis.
In a future triggering of a rule during a user session, the system dynamically adjusts the risk score of the rule based on the local risk probability for the applicable rule/user/context combination. As stated above, each rule has a starting, default risk score. When a rule triggers, the system retrieves the default score associated with the triggered rule and dynamically adjusts the rule's score using the applicable context-specific and user-specific local risk probability (steps 230-250). This results in a risk score for the rule that is customized for the user and the network context. In one embodiment, the local risk probability ranges from 0 to 1, and the adjusted risk score is obtained by multiplying the default score by the context-specific and user-specific local risk probability. The user-specific and context-specific risk score for the triggered rule is added to the session risk score for the applicable user session (step 260).
In one embodiment, the global risk probability for a rule in a particular context, c, is calculated as follows:

P(M|ri, c) = P(ri|M, c)·P(M) / [P(ri|M, c)·P(M) + P(ri|L, c)·P(L)]
Where:
P(M|ri, c) denotes the global risk probability for the rule ri-context c pair. If c was not seen in the trigger history of the rule, or if P(M|ri,c) is not available, P(M|ri) is used.
In one embodiment, the local risk probability for a rule ri and user u in a particular context, c, is calculated as follows:

P(M|ri, c, u) = P(ri|M, c, u)·P(M|ri, c) / [P(ri|M, c, u)·P(M|ri, c) + P(ri|L, c, u)·P(L|ri, c)]
Where:
P(M|ri, c, u) denotes the local risk probability for the combination of rule ri, user u, and network context c.
For certain account/rule combinations, a context-specific and user-specific local risk probability may be available. This may be the case when the rule has a history of triggering for the user in the applicable network context during the training period. For other account/rule combinations in the same network, only a user-specific (but not context-specific) local probability may be available. This may be the case when the rule has triggered for the user during the training period, but not in the applicable network context. In yet other account/rule combinations, only a global risk probability is available. This may be the case when the applicable rule did not trigger for the user during the training period. Therefore, the type of "anchor" used to customize a rule's risk score for a session can vary, depending on what type of global or local risk probabilities are available for the particular user/rule/context combination at issue. In one embodiment, the system selects the anchor for calculating a risk score during a user session in accordance with the following priorities:
Dynamic Risk Score=Default Risk Score*P(M|ri,c,u), if P(M|ri,c,u) is available; otherwise,
Dynamic Risk Score=Default Risk Score*P(M|ri,u), if P(M|ri,u) is available; otherwise,
Dynamic Risk Score=Default Risk Score*P(M|ri), if P(M|ri) is available; otherwise,
Dynamic Risk Score=Default Risk Score if P(M|ri) is not available.
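The anchor-selection priorities above reduce to a simple fallback chain. The following sketch illustrates that chain with hypothetical values; it is not the patented implementation:

```python
def dynamic_risk_score(default_score, p_m_ri_c_u=None, p_m_ri_u=None, p_m_ri=None):
    """Select the scoring anchor in the stated priority order:
    context-and-user local risk, then user local risk, then global risk,
    then the unmodified default risk score.
    """
    for prob in (p_m_ri_c_u, p_m_ri_u, p_m_ri):
        if prob is not None:
            return default_score * prob
    return default_score

# Hypothetical default score of 40 points:
print(dynamic_risk_score(40, p_m_ri_c_u=0.2))  # 8.0  (context- and user-specific)
print(dynamic_risk_score(40, p_m_ri_u=0.5))    # 20.0 (user-specific only)
print(dynamic_risk_score(40, p_m_ri=0.9))      # 36.0 (global only)
print(dynamic_risk_score(40))                  # 40   (no history: default)
```

Because each probability lies between 0 and 1, the dynamic score never exceeds the default, so the scheme can only discount, never inflate, a rule's contribution.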
The methods are described herein with respect to a per-rule and per-user scheme to reduce score inflation. However, the methods can be applied to any network entity for which a rule may trigger.
The methods described herein are embodied in software and performed by a computer system (comprising one or more computing devices) executing the software. A person skilled in the art would understand that a computer system has one or more memory units, disks, or other physical, computer-readable storage media for storing software instructions, as well as one or more processors for executing the software instructions.
As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the above disclosure is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
This application claims the benefit of U.S. Provisional Application No. 62/481,778, filed on Apr. 5, 2017, and titled “Dynamic Rule Risk Score Determination,” the contents of which are incorporated by reference herein as if fully disclosed herein.
Number | Name | Date | Kind |
---|---|---|---|
5941947 | Brown et al. | Aug 1999 | A |
6223985 | DeLude | May 2001 | B1 |
6594481 | Johnson et al. | Jul 2003 | B1 |
7668776 | Ahles | Feb 2010 | B1 |
8326788 | Allen et al. | Dec 2012 | B2 |
8443443 | Nordstrom et al. | May 2013 | B2 |
8479302 | Lin | Jul 2013 | B1 |
8539088 | Zheng | Sep 2013 | B2 |
8606913 | Lin | Dec 2013 | B2 |
8676273 | Fujisake | Mar 2014 | B1 |
8881289 | Basavapatna et al. | Nov 2014 | B2 |
9055093 | Borders | Jun 2015 | B2 |
9081958 | Ramzan et al. | Jul 2015 | B2 |
9189623 | Lin et al. | Nov 2015 | B1 |
9680938 | Gil et al. | Jun 2017 | B1 |
9692765 | Choi et al. | Jun 2017 | B2 |
9760240 | Maheshwari et al. | Sep 2017 | B2 |
9779253 | Mahaffey et al. | Oct 2017 | B2 |
9798883 | Gil et al. | Oct 2017 | B1 |
9843596 | Averbuch et al. | Dec 2017 | B1 |
9898604 | Fang et al. | Feb 2018 | B2 |
10095871 | Gil et al. | Oct 2018 | B2 |
10178108 | Lin et al. | Jan 2019 | B1 |
10419470 | Segev et al. | Sep 2019 | B1 |
10467631 | Dhurandhar | Nov 2019 | B2 |
10474828 | Gil et al. | Nov 2019 | B2 |
10496815 | Steiman et al. | Dec 2019 | B1 |
10645109 | Lin et al. | May 2020 | B1 |
20020107926 | Lee | Aug 2002 | A1 |
20030147512 | Abburi | Aug 2003 | A1 |
20040073569 | Knott et al. | Apr 2004 | A1 |
20060090198 | Aaron | Apr 2006 | A1 |
20070156771 | Hurley et al. | Jul 2007 | A1 |
20070282778 | Chan et al. | Dec 2007 | A1 |
20080040802 | Pierson et al. | Feb 2008 | A1 |
20080170690 | Tysowski | Jul 2008 | A1 |
20080301780 | Ellison et al. | Dec 2008 | A1 |
20090144095 | Shahi et al. | Jun 2009 | A1 |
20090171752 | Galvin et al. | Jul 2009 | A1 |
20090293121 | Bigus et al. | Nov 2009 | A1 |
20100125911 | Bhaskaran | May 2010 | A1 |
20100269175 | Stolfo et al. | Oct 2010 | A1 |
20120278021 | Lin et al. | Nov 2012 | A1 |
20120316835 | Maeda et al. | Dec 2012 | A1 |
20120316981 | Hoover | Dec 2012 | A1 |
20130080631 | Lin | Mar 2013 | A1 |
20130117554 | Ylonen | May 2013 | A1 |
20130197998 | Buhrmann et al. | Aug 2013 | A1 |
20130227643 | Mccoog et al. | Aug 2013 | A1 |
20130305357 | Ayyagari et al. | Nov 2013 | A1 |
20130340028 | Rajagopal et al. | Dec 2013 | A1 |
20140315519 | Nielsen | Oct 2014 | A1 |
20150046969 | Abuelsaad et al. | Feb 2015 | A1 |
20150121503 | Xiong | Apr 2015 | A1 |
20150339477 | Abrams et al. | Nov 2015 | A1 |
20150341379 | Lefebvre et al. | Nov 2015 | A1 |
20160005044 | Moss et al. | Jan 2016 | A1 |
20160021117 | Harmon et al. | Jan 2016 | A1 |
20160306965 | Iyer et al. | Oct 2016 | A1 |
20160364427 | Wedgeworth, III | Dec 2016 | A1 |
20170019506 | Lee et al. | Jan 2017 | A1 |
20170024135 | Christodorescu et al. | Jan 2017 | A1 |
20170155652 | Most et al. | Jun 2017 | A1 |
20170161451 | Weinstein et al. | Jun 2017 | A1 |
20170213025 | Srivastav et al. | Jul 2017 | A1 |
20170236081 | Grady Smith et al. | Aug 2017 | A1 |
20170318034 | Holland et al. | Nov 2017 | A1 |
20180004961 | Gil et al. | Jan 2018 | A1 |
20180048530 | Nikitaki et al. | Feb 2018 | A1 |
20180144139 | Cheng et al. | May 2018 | A1 |
20180165554 | Zhang et al. | Jun 2018 | A1 |
20180234443 | Wolkov et al. | Aug 2018 | A1 |
20180248895 | Watson et al. | Aug 2018 | A1 |
20180288063 | Koottayi et al. | Oct 2018 | A1 |
20190034641 | Gil et al. | Jan 2019 | A1 |
20190334784 | Kvernvik et al. | Oct 2019 | A1 |
20200021607 | Muddu et al. | Jan 2020 | A1 |
20200082098 | Gil et al. | Mar 2020 | A1 |
Entry |
---|
Ioannidis, Yannis, “The History of Histograms (abridged)”, Proceedings of the 29th VLDB Conference (2003), pp. 1-12. |
DatumBox Blog, “Machine Learning Tutorial: The Naïve Bayes Text Classifier”, DatumBox Machine Learning Blog and Software Development News, Jan. 2014, pp. 1-11. |
Freeman, David, et al., “Who are you? A Statistical Approach to Measuring User Authenticity”, NDSS, Feb. 2016, pp. 1-15. |
Malik, Hassan, et al., “Automatic Training Data Cleaning for Text Classification”, 11th IEEE International Conference on Data Mining Workshops, 2011, pp. 442-449. |
Wang, Alex Hai, “Don't Follow Me Spam Detection in Twitter”, International Conference on Security and Cryptography, 2010, pp. 1-10. |
Chen, Jinghui, et al., “Outlier Detection with Autoencoder Ensembles”, Proceedings of the 2017 SIAM International Conference on Data Mining, pp. 90-98. |
Number | Date | Country | |
---|---|---|---|
62481778 | Apr 2017 | US |