To gain access to a network, a user may use a credential such as a username and password, a certificate, a security key, and so forth. User credentials can be stolen by an unauthorized entity. For example, a user may disclose the user's credential to the unauthorized entity, which may be masquerading as a legitimate service. Alternatively, the unauthorized entity may include malware that can track a user's inputs to extract a credential entered by the user, or can access stored information to retrieve the credential.
Some implementations of the present disclosure are described with respect to the following figures.
Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements. The figures are not necessarily to scale, and the size of some parts may be exaggerated to more clearly illustrate the example shown. Moreover, the drawings provide examples and/or implementations consistent with the description; however, the description is not limited to the examples and/or implementations provided in the drawings.
In the present disclosure, use of the term “a,” “an,” or “the” is intended to include the plural forms as well, unless the context clearly indicates otherwise. Also, the terms “includes,” “including,” “comprises,” “comprising,” “have,” or “having,” when used in this disclosure, specify the presence of the stated elements but do not preclude the presence or addition of other elements.
Once an unauthorized entity has obtained (stolen) a user's credential, the unauthorized entity can move within a network. The movement of the unauthorized entity within the network using a (stolen) valid credential is referred to as lateral movement. By performing lateral movement, the unauthorized entity seeks to find other vulnerable users (to obtain credentials of such other users or to obtain sensitive information belonging to such other users), vulnerable devices, and sensitive information. With the stolen credential, the unauthorized entity can also access devices in the network to obtain information stored by or accessible by such devices, or to use resources of the devices. Devices in the network may store sensitive information, or can have access to information that ultimately allows the unauthorized entity to access a data repository containing sensitive information. Sensitive information can refer to financial information, technical information, or any other information that an enterprise or individual wishes to protect against disclosure to unauthorized entities (users, programs, or machines).
Because lateral movement involves the access of users or devices by an unauthorized entity based on use of a valid credential, traditional security mechanisms, such as malware detectors, may not be able to detect the unauthorized use of the credential. For example, traditional security mechanisms may not be able to distinguish between a user's authorized use of the user's own credential and an unauthorized entity's use of the same credential after stealing it.
In accordance with some implementations of the present disclosure, a machine-learning based approach is used to distinguish unauthorized authentication events (that use stolen or compromised credentials) from benign authentication events (which are authentication events by authorized entities). For a given authentication event between multiple devices in a network, a system according to some implementations of the present disclosure identifies a set of authentication events at the devices, where the identified set of authentication events are temporally related to the given authentication event.
To detect unauthorized authentication events (also referred to as detecting lateral movement or detecting a stolen credential), a classifier can be trained using a training data set. A classifier can also be referred to as a machine-learning model. A training data set refers to collections of features (sometimes arranged as feature vectors), where each collection of features is assigned a label indicating whether or not the collection of features is indicative of an unauthorized authentication event. A positive label specifies that the collection of features is indicative of an unauthorized authentication event, while a negative label specifies that the collection of features is not indicative of an unauthorized authentication event.
A “feature” can refer to any characteristic that is extracted from event data associated with a given authentication event. The feature can include an attribute retrieved from the event data, or an attribute computed based on the event data. In either case, the feature is considered to be extracted from event data.
Once the classifier is trained, the classifier is applied on a collection of features (e.g., a feature vector) associated with authentication events, where the authentication events can include the given authentication event as well as the set of authentication events that are temporally related to the given authentication event. A classifier applied on a collection of features can refer to any of: (1) one classifier applied on the collection of features, or (2) one classifier applied on multiple collections of features, or (3) multiple classifiers applied on one collection of features, or (4) multiple classifiers applied on multiple collections of features. The system determines, based on an output of the classifier, whether the given authentication event is an unauthorized authentication event associated with lateral movement (or stolen credential).
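As a concrete illustration of this decision step, the following Python sketch applies a trained classifier to a single feature vector and thresholds the resulting probability. The library (scikit-learn), the feature layout, and the 0.5 decision threshold are illustrative assumptions, not part of the disclosure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in for a classifier trained earlier on labeled feature vectors
# (see the training sketch later in this section); the data is synthetic.
rng = np.random.default_rng(0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(rng.random((200, 4)), rng.integers(0, 2, size=200))

# Feature vector for the given authentication event and its context.
feature_vector = np.array([[3.0, 1.0, 0.25, 7.0]])

# predict_proba returns one row of class probabilities; with labels {0, 1},
# column 1 is the probability of the "unauthorized" class.
p_unauthorized = clf.predict_proba(feature_vector)[0, 1]
is_unauthorized = p_unauthorized >= 0.5  # illustrative decision threshold
print(f"P(unauthorized) = {p_unauthorized:.3f}, flagged = {is_unauthorized}")
```

Case (2) above (one classifier applied on multiple collections of features) corresponds to passing several rows to `predict_proba` at once; cases (3) and (4) would aggregate the outputs of several such classifiers.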
An authentication event is generated when a user or program at a first (source) device in a network attempts to log into a second (destination) device in the network by offering a user's credential to the second device. In some examples, a credential can include a combination of a username and a password, a security certificate, a security key, or any other information that can be used to determine whether the user or the program at the first device is authorized to access the second device.
A network can log authentication events by storing information relating to authentication events into a data repository. For a given authentication event between a plurality of devices in a network, techniques or mechanisms according to some implementations of the present disclosure can identify a context that includes a set of authentication events (at the devices) that are temporally related to the given authentication event. The given authentication event can have a given timestamp. Another authentication event is associated with the given authentication event if the other authentication event occurred at a source device or a destination device and has a timestamp that is within a time window that includes the timestamp of the given authentication event.
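A minimal sketch of identifying such a temporal context from logged event data follows. The record fields (`timestamp`, `src`, `dst`) and the helper name `temporal_context` are hypothetical, since the disclosure does not prescribe a log schema.

```python
from datetime import datetime, timedelta

def temporal_context(events, given, w_before, w_after):
    """Return logged events at the given event's source or destination device
    whose timestamps fall within [given - w_before, given + w_after]."""
    lo = given["timestamp"] - w_before
    hi = given["timestamp"] + w_after
    devices = {given["src"], given["dst"]}
    return [
        e for e in events
        if e is not given
        and lo <= e["timestamp"] <= hi
        and (e["src"] in devices or e["dst"] in devices)
    ]

given = {"timestamp": datetime(2019, 4, 1, 12, 0), "src": "dev1", "dst": "dev2"}
log = [
    {"timestamp": datetime(2019, 4, 1, 11, 58), "src": "dev1", "dst": "dev3"},
    {"timestamp": datetime(2019, 4, 1, 9, 0), "src": "dev4", "dst": "dev5"},
]
# Only the 11:58 event is inside the window and touches dev1 or dev2.
print(temporal_context(log, given, timedelta(minutes=10), timedelta(minutes=10)))
```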
For ease of implementation, the context can exclude certain other information, such as Domain Name System (DNS) events, netflow events, and other events. By not relying on other information, such as information of DNS events, netflow events, and other events, a classifier for identifying unauthorized authentication events does not have to rely on information that may not be collected in networks of some enterprises. As a result, to use unauthorized authentication event identification techniques or mechanisms according to some implementations, such enterprises would not have to first invest in additional equipment or programs to enable the logging of event data other than authentication events.
DNS events include events associated with performing a DNS lookup, in which a device issues a DNS query to determine a network address (e.g., an Internet Protocol (IP) address) assigned to a domain name of the device. For example, the source device can issue a DNS lookup to the destination device or another device. Alternatively, the destination device can issue a DNS lookup to the source device or another device. DNS events also include DNS responses to DNS queries, where a DNS response contains the network address identified by a device (e.g., a DNS server) in response to a domain name in a DNS query.
Netflow events can include events related to transfer of data between devices, such as between the source and destination devices, or between a source or destination device and a different device.
In some examples, other types of event data that classifiers according to some implementations do not have to rely upon include security events. A security event can be any event that triggers a security action at a device. For example, a device may include a malware detector that detects suspicious activities at the device caused by a virus or other malware, which can trigger the malware detector to issue a security alert or to take other action, such as to quarantine a process or to stop a process. Examples of other security events include an alert issued by an intrusion detection system (which has detected intrusion into a device or network), a firewall alert issued by a firewall, and so forth.
In further examples, other types of event data that classifiers according to some implementations do not have to rely upon include Hypertext Transfer Protocol (HTTP) events. An HTTP event can include an HTTP request issued by a device. An HTTP request can be issued by a device to obtain information of another device. Thus, for example, the source device can issue an HTTP request to the destination device, or alternatively, the source device or destination device can issue an HTTP request to a different device. The device receiving the HTTP request can issue an HTTP response, which is another HTTP event.
The classifier can be applied on a collection of features associated with the context, and an output of the classifier can be used to determine whether a given authentication event is an unauthorized authentication event.
The devices (e.g., device 1 and device 2) can be part of an enterprise network 102, which is accessible by users of an enterprise (e.g., a company, a government agency, an educational organization, etc.). In other examples, the network 102 (or a portion of the network 102) can be a public network, such as the Internet.
A user 104 or a program 106 at device 1 can initiate an authentication event 108 with device 2. For example, the user 104 can type in the user's credential, or the user can use a security device (e.g., a badge, a smartphone, a wearable device such as a smart watch, smart eyeglasses, a head-mounted device, etc.) that stores a credential that can be communicated from the security device to device 1, such as by using a wireless connection (e.g., a Bluetooth link, a Wi-Fi link, a radio frequency identification (RFID) link, etc.). In another example, the user 104 at device 1 can attempt to authenticate a different user to device 2. The program 106, which includes machine-readable instructions, can include an application program, an operating system, and so forth. The program 106 can similarly provide a user's credential to initiate the authentication event 108.
In some examples, a logging system 110 can log event data of the authentication event 108 in an authentication log 112, which can store various attributes of the authentication event 108. Examples of attributes in event data of an authentication event include any or some combination of the following: a timestamp (which indicates the time at which the authentication event 108 occurred), an identifier of an initiating user that initiated the authentication event 108 (the initiating user is already authenticated on the source device, and the initiating user wants to authenticate to the destination device—the initiating user wants to authenticate himself/herself, or authenticate a different user), an identifier of a destination user to be authenticated on a destination device (the destination user can be the same as the initiating user), an identifier of the source device (e.g., device 1), an identifier of a destination device (e.g., device 2), a type of authentication, a success/failure indication of the authentication event, and so forth. The authentication log 112 can store event data of multiple authentication events among various devices that communicate over the network 102.
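For illustration, the attributes enumerated above might be represented as a structured record along the following lines; the field names are assumptions, since the disclosure does not prescribe a log schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AuthEvent:
    timestamp: datetime       # time at which the authentication event occurred
    initiating_user: str      # user already authenticated on the source device
    destination_user: str     # user to be authenticated on the destination device
    source_device: str        # e.g., device 1
    destination_device: str   # e.g., device 2
    auth_type: str            # type of authentication (e.g., network, interactive)
    success: bool             # success/failure indication

event = AuthEvent(datetime(2019, 4, 1, 12, 0), "alice", "alice",
                  "dev1", "dev2", "network", True)
```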
The authentication log 112 can refer to a data repository (or multiple data repositories) to store event data. The authentication log 112 can be stored on a storage device or a collection of storage devices.
In addition to logging event data of authentication events, the logging system 110 can also store event data of associated authentication events in the authentication log 112. In some examples, an associated authentication event (that is associated with a given authentication event) is an authentication event that is temporally related to the given authentication event. For example, the given authentication event can have a given timestamp. Authentication events are associated with the given authentication event if the authentication events occurred at a source device or a destination device and have timestamps within a time window that includes the timestamp of the given authentication event.
As used here, an “engine” can refer to a hardware processing circuit, which can include any or some combination of the following: a microprocessor, a core of a multi-core microprocessor, a microcontroller, a programmable gate array, a programmable integrated circuit device, or any other hardware processing circuit.
The classifier 118 can be trained by the stolen credential detection system 114 and can be applied on features extracted from a context of a given authentication event to determine whether the given authentication event is an unauthorized authentication event. Although just one classifier 118 is shown in FIG. 1, in other examples multiple classifiers can be used, such as the ensemble of classifiers discussed further below.
If a stolen credential is detected, the stolen credential detection engine 116 can output a stolen credential indication 120 over the network 102 to an action system 122, which includes a stolen credential action engine 124. The stolen credential action engine 124 can automatically take action to address the detected stolen credential, in response to the stolen credential indication 120. For example, the stolen credential action engine 124 can establish a communication with device 1, device 2, or both devices 1 and 2, to cause the device(s) to halt or stop any further activity (e.g., restrict the device(s) from communicating over the network 102 and/or restrict the device(s) from processing events, accessing data, and so forth). As more specific examples, the stolen credential action engine 124 can shut down processes (e.g., processes of a program or multiple programs) at device 1 and/or device 2 to prevent unauthorized access of information or resources at device 1 and/or device 2. In other examples, the stolen credential action engine 124 can take other actions, including sending a notification of the detected stolen credential to an administrator or other user, or triggering other security responses to the detected stolen credential.
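A simplified sketch of how an action engine might dispatch such responses appears below. The indication format and the action functions are hypothetical; real deployments would integrate with network-management and ticketing systems.

```python
def quarantine(device: str) -> None:
    # Restrict the device from communicating over the network and halt
    # further activity (placeholder for an actual management API call).
    print(f"restricting network access and halting processes on {device}")

def notify_admin(message: str) -> None:
    print(f"ALERT: {message}")

def handle_stolen_credential(indication: dict) -> None:
    # Restrict both endpoints of the suspect authentication event, then notify.
    for device in (indication["source_device"], indication["destination_device"]):
        quarantine(device)
    notify_admin(f"stolen credential suspected for user {indication['user']}")

handle_stolen_credential(
    {"source_device": "dev1", "destination_device": "dev2", "user": "alice"})
```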
By using a classifier 118 that is trained, the detection of a stolen credential is based on the intuition that the authentication events surrounding a normal (benign) authentication event differ from the authentication events surrounding an unauthorized authentication event. For example, consider an authentication event AE from a source device S to a destination device D at time T. If the authentication event AE is malicious, then an attacker may have established a foothold on source device S and may have conducted several authentication activities on source device S, most of which may have failed. Then at some point, the authentication event AE happens and the attacker authenticates to destination device D. After the authentication, the attacker may conduct further malicious activities on destination device D.
The time window 204 of FIG. 2 includes the timestamp of the given authentication event, and is made up of a first portion of width W1 before that timestamp and a second portion of width W2 after that timestamp.
Event data associated with the authentication events AE1, AE2, AE3, AE4, and AE5 is logged by the logging system 110 (FIG. 1) in the authentication log 112.
In some examples, the values of W1 and W2 can be preset, such as by an administrator or other user. In further examples, the values of W1 and W2 can be learned by the stolen credential detection engine 116 based on an analysis of past data and on feedback provided regarding classifications of authentication events by the classifier 118. For example, a user can indicate that a classification made by the classifier 118 is correct or incorrect, and the classifier 118 can use this feedback to update itself.
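The disclosure does not specify how W1 and W2 are learned; one plausible approach, sketched below, grid-searches candidate window widths and keeps the pair yielding the best validation F1 score. The data here is synthetic placeholder data, and `features_for_window` is a hypothetical helper.

```python
from itertools import product
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def features_for_window(w1, w2, rng):
    # Placeholder: real code would rebuild feature vectors from the
    # authentication log using the candidate window widths. Synthetic data
    # keeps this sketch runnable.
    X = rng.random((300, 4))
    y = rng.integers(0, 2, size=300)
    return X[:200], y[:200], X[200:], y[200:]

rng = np.random.default_rng(0)
best = None
for w1, w2 in product([5, 15, 30], [5, 15, 30]):  # candidate widths in minutes
    X_tr, y_tr, X_va, y_va = features_for_window(w1, w2, rng)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
    score = f1_score(y_va, clf.predict(X_va))
    if best is None or score > best[0]:
        best = (score, w1, w2)
print(f"best F1={best[0]:.3f} with W1={best[1]} min, W2={best[2]} min")
```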
Examples of features that can be extracted from a context of a given authentication event can include any or some combination of the features listed in Table 1 below.
Although specific features derived based on event data of authentication events are listed above, it is noted that in other examples, other types of features can be derived from event data of authentication events.
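As an illustrative sketch of such count-based features (the specific features below are examples chosen for illustration, not necessarily those of Table 1), the following code computes a small feature vector over a context of authentication events:

```python
def extract_features(context, given):
    """Compute illustrative count-style features over a context of events."""
    total = len(context)
    failures = sum(1 for e in context if not e["success"])
    distinct_dsts = len({e["dst"] for e in context})
    from_source = sum(1 for e in context if e["src"] == given["src"])
    return [
        total,                               # events in the time window
        failures / total if total else 0.0,  # fraction of failed logins
        distinct_dsts,                       # distinct destination devices
        from_source,                         # events initiated at the source
    ]

context = [
    {"src": "dev1", "dst": "dev3", "success": False},
    {"src": "dev1", "dst": "dev4", "success": False},
    {"src": "dev1", "dst": "dev2", "success": True},
]
print(extract_features(context, {"src": "dev1", "dst": "dev2"}))
```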
Each feature vector in the training data set is associated with a classification label, which can be assigned by a user or another classifier. A positive classification label indicates that the respective feature vector is associated with a positive classification for an unauthorized authentication event (i.e., the respective feature vector indicates an unauthorized authentication event), while a negative classification label indicates that the respective feature vector is associated with a negative classification for an unauthorized authentication event (i.e., the respective feature vector indicates a benign authentication event).
The training process 300 then trains (at 304) the classifier using the training data set.
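A minimal training sketch follows, using synthetic feature vectors and labels in place of a real labeled training data set; logistic regression is used here only as one of the example techniques mentioned below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((500, 4))         # feature vectors, one per authentication event
y = (X[:, 1] > 0.7).astype(int)  # synthetic stand-in for assigned labels

# Hold out a portion of the labeled data to estimate generalization.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```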
The classifier can be updated in response to a triggering condition. For example, the classifier can be updated periodically (a time-based triggering condition) or in response to a request of an administrator or other user, or a program. Updating the classifier can involve using feedback provided regarding classifications of the classifier to modify the classifier.
Although reference is made to training just one classifier on the training data set built (at 302), it is noted that in alternative examples, multiple classifiers can be trained on the training data set. These multiple classifiers make up an ensemble of classifiers. The different classifiers can be trained using different machine learning techniques, including, as examples, any of the following: logistic regression, random forests, gradient boosting, neural networks, and so forth.
Gradient boosting and random forests are examples of techniques for producing an ensemble of classifiers. Gradient boosting is an ensemble technique in which a weighted ensemble of weak models (e.g., shallow trees) is combined to produce the prediction for a classification task. In gradient boosting, each successive model (classifier) is trained on the gradient of the loss function from the previous iteration.
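Assuming scikit-learn as one possible implementation, a gradient-boosted ensemble of shallow trees can be trained as follows (synthetic data for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
X = rng.random((400, 4))
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)

gb = GradientBoostingClassifier(
    n_estimators=100,   # number of weak models in the ensemble
    learning_rate=0.1,  # weight applied to each successive model
    max_depth=2,        # shallow trees act as weak learners
).fit(X, y)
print(f"training accuracy: {gb.score(X, y):.3f}")
```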
Random forests include an ensemble of decision trees. The output of the model (classifier) is based on the aggregation of the outputs of all the individual trees. The trees are trained differently, each on a slightly different data set, so their outputs are diverse. Each tree is trained on a bootstrap sample of the training data. A bootstrap sample is a sample of the same size as the original data, obtained by sampling with replacement. Further, during tree construction, at each split a random subset of features is selected, and the split is performed on the best feature among this subset.
The random forests technique is an example of a technique in which different classifiers of an ensemble of classifiers can be trained using respective different samples of the training data set.
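Again assuming scikit-learn, the following sketch trains a random forest whose trees use bootstrap samples and per-split random feature subsets, as described above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.random((400, 6))
y = (X[:, 0] > X[:, 1]).astype(int)

rf = RandomForestClassifier(
    n_estimators=200,     # number of trees whose outputs are aggregated
    bootstrap=True,       # each tree sees a bootstrap sample of the data
    max_features="sqrt",  # random subset of features considered per split
).fit(X, y)
print(f"training accuracy: {rf.score(X, y):.3f}")
```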
The detection process 400 identifies (at 404) a set of authentication events, at the devices, that are temporally related to the detected authentication event. The authentication event and the set of authentication events make up a context of the authentication event.
The detection process 400 executes (at 406) a classifier (such as one trained according to the training process 300 of FIG. 3) on a collection of features associated with the context of the detected authentication event.
The detection process 400 determines (at 408), based on an output of the classifier, whether the detected authentication event is an unauthorized authentication event.
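Tying tasks 404 through 408 together, a hypothetical end-to-end detection function might look as follows; the event schema, the count features, and the 0.5 threshold are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def in_window(t, t0, w_before, w_after):
    return t0 - w_before <= t <= t0 + w_after

def detect(event, log, clf, w_before, w_after, threshold=0.5):
    # Task 404: identify temporally related events at the same devices.
    devices = {event["src"], event["dst"]}
    context = [e for e in log
               if in_window(e["t"], event["t"], w_before, w_after)
               and (e["src"] in devices or e["dst"] in devices)]
    # Illustrative count features computed over the context.
    failures = sum(1 for e in context if not e["ok"])
    fv = np.array([[len(context), failures, len({e["dst"] for e in context})]])
    # Task 406: execute the classifier; task 408: threshold its output.
    return clf.predict_proba(fv)[0, 1] >= threshold

rng = np.random.default_rng(4)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(rng.random((100, 3)), rng.integers(0, 2, size=100))  # synthetic training
log = [{"t": 100, "src": "dev1", "dst": "dev3", "ok": False},
       {"t": 101, "src": "dev1", "dst": "dev4", "ok": False}]
event = {"t": 102, "src": "dev1", "dst": "dev2", "ok": True}
print(detect(event, log, clf, w_before=10, w_after=10))
```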
The machine-readable instructions include authentication event detecting instructions 506 to detect an authentication event that includes a user credential submitted from a source device to a destination device in a network. The machine-readable instructions further include event identifying instructions 508 to identify other authentication events within a time window including a time of the detected authentication event, the identified other authentication events comprising authentication events of the source device and the destination device.
The machine-readable instructions further include stolen credential indicating instructions 510 to indicate occurrence of a stolen credential by an entity in the network based on applying a classifier on information associated with the identified other authentication events.
The storage medium 504 (FIG. 5) can include any machine-readable or computer-readable storage medium, such as a semiconductor memory device, a magnetic disk, an optical disc, or a combination of such devices.
In the foregoing description, numerous details are set forth to provide an understanding of the subject disclosed herein. However, implementations may be practiced without some of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.