Graph-based multi-staged attack detection in the context of an attack framework

Information

  • Patent Number
    12,063,226
  • Date Filed
    Friday, September 24, 2021
  • Date Issued
    Tuesday, August 13, 2024
Abstract
The present disclosure relates to a system, method, and computer program for graph-based multi-stage attack detection in which alerts are displayed in the context of tactics in an attack framework, such as the MITRE ATT&CK framework. The method enables the detection of cybersecurity threats that span multiple users and sessions and provides for the display of threat information in the context of a framework of attack tactics. Alerts spanning an analysis window are grouped into tactic blocks. Each tactic block is associated with an attack tactic and a time window. A graph is created of the tactic blocks, and threat scenarios are identified from independent clusters of directionally connected tactic blocks in the graph. The threat information is presented in the context of a sequence of attack tactics in the attack framework.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention

This invention relates generally to cyberattack security and detection in computer networks, and more specifically to graph-based, multi-staged cybersecurity attack detection in the context of an attack framework.


2. Description of the Background Art

Cybersecurity monitoring systems generate alerts for analysts to investigate. For a large organization, a huge volume of alerts is generated on a daily basis, and it is difficult for analysts at such organizations to investigate alerts individually. As a result, presenting alerts in a meaningful and understandable way is critical for the usability of cybersecurity monitoring systems. A common approach to detecting cybersecurity threats and organizing alerts is on a per-user, per-24-hour-session basis. There are a number of issues with this approach:

    • The 24-hour resolution is arbitrary. A given threat is not necessarily confined within a rigid 24-hour window.
    • Slow-and-low attack detection is difficult to address. Since sessions are essentially scored independently of one another, malicious activity in one session is not connected with other future or past sessions of the same user or those of other users.
    • Most critically, it is challenging for analysts to act on a seemingly random collection of alerts with no immediate story to tell. The burden is on the analysts to evaluate the severity and scope of a session.


Therefore, there is strong demand for a solution that automatically and intelligently connects alerts in a way that detects attacks across users and sessions and that presents the connected alerts with an immediate story to tell.


SUMMARY OF THE DISCLOSURE

The present disclosure relates to a system, method, and computer program for graph-based multi-stage attack detection in which alerts are displayed in the context of a sequence of tactics in an attack framework, such as the MITRE ATT&CK framework. The method enables the detection of cybersecurity threats that span multiple users and sessions and provides for the display of threat information in the context of a framework of attack tactics.


A computer system for detecting cybersecurity attacks obtains a plurality of cybersecurity alerts generated in an analysis window, such as a 30-60 day window. The system classifies each of the alerts with an attack tactic in an attack framework having a sequence of attack tactics.


The system then groups the alerts into tactic blocks, where each tactic block is associated with a start time, an end time, and an attack tactic. The system creates a graph of tactic blocks by directionally connecting blocks based on a time criterion, a tactic criterion, and a matching criterion. Directionally connecting tactic blocks based on the time, tactic, and matching criteria enables multi-stage threat detection.


In certain embodiments, the time criterion for directionally connecting a first tactic block to a second tactic block is satisfied if the first block has an earlier start time than the second block and if the end time of the first block is within P hours of the start time of the second block.


In certain embodiments, the tactic criterion for directionally connecting a first tactic block to a second tactic block is satisfied if the tactic associated with the first block is the same as, or precedes, the tactic associated with the second block in the attack framework.


In certain embodiments, the matching criterion for directionally connecting a first tactic block to a second tactic block is satisfied if one or more of the following is true:

    • (a) the first and second blocks are associated with the same user name;
    • (b) the first and second blocks share the same source host computer; or
    • (c) any of the first block's destination host computers matches the second block's source host computer.


After creating the graph, the system identifies one or more independent clusters of interconnected components in the graph of tactic blocks. For example, the system may identify connected components in the graph using a connected components algorithm in graph theory. For each of the clusters, the system identifies a threat scenario comprising a sequence of attack tactics in the attack framework. In certain embodiments, identifying a threat scenario includes identifying a path of tactic blocks in the cluster that represents the highest-risk sequence of events in the cluster.


The system ranks the threat scenarios and displays information for the n highest ranked threat scenarios, wherein n is an integer greater than or equal to 1. The information displayed for a threat scenario includes a sequence of attack tactics associated with the threat scenario. In this way, the threat is presented as a “story” told by the sequence of attack tactics.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 is a flowchart that illustrates a method, according to one embodiment, for graph-based multi-stage attack detection in which alerts are displayed in the context of tactics in an attack framework.



FIG. 2 illustrates an example graph of tactic blocks.



FIG. 3 illustrates an example display of a threat scenario.



FIG. 4 illustrates an example username-asset graph for a threat scenario.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present disclosure relates to a system, method, and computer program for graph-based multi-stage attack detection in which alerts are displayed in the context of tactics in an attack framework, such as the MITRE ATT&CK framework. The method is implemented in software and performed by a computer system that detects and assesses cyber threats in a network. The computer system may be a user behavior analytics (UBA) system or a user-and-entity behavior analytics (UEBA) system. An example of a UBA/UEBA cybersecurity monitoring system is described in U.S. Pat. No. 9,798,883 issued on Oct. 24, 2017 and titled “System, Method, and Computer Program for Detecting and Assessing Security Risks in a Network,” the contents of which are incorporated by reference herein.


An embodiment of the invention is described below with respect to FIG. 1.


1. Grouping Alerts into Tactic Blocks

The system receives and/or generates security alerts on an ongoing basis. The alerts may be in the form of triggered fact- or anomaly-based rules, or they may be in the form of events with non-zero risks in the case of a non-rule-based system that assesses event risks based on probability calculations. The system may generate the alerts itself and/or receive alerts from other cyber-monitoring systems.


The input to the method is a collection of alerts over a time frame, such as a day or several months. The time frame is referred to as the analysis window herein. In one embodiment, the method is performed on a daily basis, such as a batch job at the end of each day, using alert data from the past 30-60 days (i.e., an analysis window of the past 30-60 days). In certain embodiments, the system may filter out certain rules, such as rules that trigger frequently, from the analysis window.


As shown in FIG. 1, the system obtains the alerts generated or received in an analysis window (step 110). The system classifies each of the alerts with one or more attack tactics in an attack framework (step 120). An attack framework categorizes attack techniques into a number of attack tactics. An example of an attack framework is the MITRE ATT&CK framework, which has the following twelve attack tactics: Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration, and Impact. A number of attack techniques are mapped to each of these attack tactics. In one embodiment, each rule or event that can be the basis of an alert is pre-tagged with one or more attack techniques in the attack framework. In step 120, the system classifies an alert with one or more attack tactics by mapping the attack technique(s) associated with the alert to the applicable tactic(s) in the framework.
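As a rough sketch of step 120, the technique-to-tactic mapping can be a static lookup applied to each alert's pre-tagged techniques. The field names and the (illustrative, partial) map below are assumptions for illustration, not part of the disclosure:

```python
# Hypothetical technique-to-tactic map: a small, illustrative subset of
# MITRE ATT&CK technique IDs and the tactic(s) each maps to.
TECHNIQUE_TO_TACTIC = {
    "T1078": ["Initial Access", "Persistence", "Privilege Escalation", "Defense Evasion"],
    "T1059": ["Execution"],
    "T1021": ["Lateral Movement"],
    "T1041": ["Exfiltration"],
}

def classify_alert(alert: dict) -> list[str]:
    """Return the tactic(s) for an alert by mapping its pre-tagged
    attack techniques to tactics, preserving order and de-duplicating."""
    tactics = []
    for technique in alert.get("techniques", []):
        for tactic in TECHNIQUE_TO_TACTIC.get(technique, []):
            if tactic not in tactics:
                tactics.append(tactic)
    return tactics
```

An alert tagged with an unknown technique simply maps to no tactic and would be left out of the tactic blocks.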


The system organizes alerts in the analysis window into groups referred to herein as “tactic blocks” (step 130). A tactic block is a group of alerts that satisfy an alert grouping criteria, including having the same tactic and falling within a certain time window. In one embodiment, alerts are grouped into tactic blocks based on tactic, time, user name, and source host. Each tactic block is associated with a start and end time based on the start and end timestamps of the first and last alert in the tactic block. In one embodiment, alerts are first grouped based on tactic, user name, and source host. If there are gaps of more than X amount of time (e.g., X=24 hours) between alerts, then the tactic block is split into smaller blocks.
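The grouping and gap-splitting in step 130 can be sketched as follows, assuming alerts are dicts with hypothetical `tactic`, `user`, `src_host`, and `ts` (timestamp in hours) fields, and using the X = 24-hour gap from the embodiment above:

```python
from itertools import groupby

def build_tactic_blocks(alerts, max_gap_hours=24):
    """Group alerts by (tactic, user, source host), then split a group
    wherever consecutive alerts are more than max_gap_hours apart."""
    key = lambda a: (a["tactic"], a["user"], a["src_host"])
    blocks = []
    for k, group in groupby(sorted(alerts, key=lambda a: (key(a), a["ts"])), key=key):
        current = []
        for alert in group:
            # A gap larger than the threshold splits the block in two.
            if current and alert["ts"] - current[-1]["ts"] > max_gap_hours:
                blocks.append(make_block(k, current))
                current = []
            current.append(alert)
        if current:
            blocks.append(make_block(k, current))
    return blocks

def make_block(key, alerts):
    """Start/end times come from the first and last alert in the block."""
    tactic, user, src = key
    return {"tactic": tactic, "user": user, "src_host": src,
            "start": alerts[0]["ts"], "end": alerts[-1]["ts"], "alerts": alerts}
```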


An alert may appear in more than one tactic block. An alert associated with n tactics will be part of n tactic blocks, where n is an integer greater than or equal to 1. As a result, there may be multiple tactic blocks that are identical except for the tactic associated with the tactic block.


2. Graphing Tactic Blocks

A graph-based approach is used to ascertain “attack stories” from the tactic blocks, where the tactic blocks are the nodes of the graph. The system constructs a graph of tactic blocks by sorting tactic blocks by their start times and directionally connecting blocks that appear to be part of the same attack based on time, tactic, and matching criteria (e.g., same user name or source host) (step 140). The matching criteria may be based on attributes of the tactic blocks that are in addition to time and tactic. For example, if the alerts are grouped into tactic blocks based on time, tactic, user name, and source host, then the tactic blocks may be matched using the user name and source host attributes of the blocks. Directionally connecting tactic blocks based on time, tactic, and matching criteria enables threats to be identified across multiple stages of an attack.


In one embodiment, tactic blocks are sorted by their start times and a tactic block C (“C”) is directionally connected to a next tactic block N (“N”) in time if the following time, tactic, and matching criteria are met:

    • Time criteria: C's end time is within P hours from N's start time (e.g., P=24 or 48 hours) and N's end time is after C's start time; AND
    • Tactic criteria: C's tactic is before or the same as N's tactic in the sequence of tactics in the attack framework; AND
    • Matching Criteria: The condition of:
      • The nodes share the same username; OR
      • The nodes share the same source host computer; OR
      • Any of C's destination host computers matches N's source host computer; OR
      • Other matching criteria, such as, for example, shared hash, email subject, or filename.


In the example above, the time criteria ensures that connected tactic blocks are sufficiently close in time, and the matching criteria helps to further ensure that connected tactic blocks are part of the same attack. As indicated above, the MITRE ATT&CK framework consists of twelve tactics that have a sequential order. Although cyber attacks do not necessarily follow the exact sequence of tactics in the MITRE ATT&CK sequence, the tactic sequence generally reflects the most common order in which the tactics appear. The tactic criteria ensures that the story told by connected blocks is consistent with the sequence of tactics in the attack framework.
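The three edge criteria above can be sketched as a single predicate over a candidate pair of blocks. The block fields are assumptions carried over from the earlier sketches, and the tactic list follows the MITRE ATT&CK Enterprise ordering:

```python
TACTIC_ORDER = [
    "Initial Access", "Execution", "Persistence", "Privilege Escalation",
    "Defense Evasion", "Credential Access", "Discovery", "Lateral Movement",
    "Collection", "Command and Control", "Exfiltration", "Impact",
]

def should_connect(c, n, p_hours=24):
    """Decide whether block C is directionally connected to a later block N,
    per the time, tactic, and matching criteria (C sorted before N by start)."""
    # Time: N starts within P hours of C's end, and N ends after C starts.
    time_ok = (n["start"] - c["end"] <= p_hours) and (n["end"] > c["start"])
    # Tactic: C's tactic is the same as, or precedes, N's in the framework.
    tactic_ok = TACTIC_ORDER.index(c["tactic"]) <= TACTIC_ORDER.index(n["tactic"])
    # Matching: same user, same source host, or C's destination feeds N's source.
    match_ok = (c["user"] == n["user"]
                or c["src_host"] == n["src_host"]
                or n["src_host"] in c.get("dst_hosts", []))
    return time_ok and tactic_ok and match_ok
```

Additional matching attributes (shared hash, email subject, filename) would simply extend the `match_ok` disjunction.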


3. Identify Threat Scenarios from Clusters of Interconnected Tactic Blocks in the Graph

Once the graph is constructed, the system identifies one or more independent clusters of interconnected tactic blocks in the graph (step 150). Each cluster is a collection of tactic blocks that are directionally connected. There is no overlap between any pair of clusters; each cluster captures a group of connected tactic blocks and stands alone. In one embodiment, identifying clusters comprises identifying connected components in the graph, wherein each connected component is an independent cluster. FIG. 2 illustrates an example of graphed tactic blocks with two connected components, namely connected component 210 and connected component 220. The system may use a known connected components algorithm from graph theory to identify connected components in the tactic blocks graph. An example of a connected component algorithm is set forth in the following reference, which is incorporated herein by reference:

  • Hopcroft, J.; Tarjan, R. (1973), “Algorithm 447: Efficient algorithms for graph manipulation”, Communications of the ACM, 16(6): 372-378, doi: 10.1145/362248.362272.
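A minimal stand-in for step 150: treat the directed edges as undirected and collect components with breadth-first search. This is a simple equivalent for small graphs, not the Hopcroft-Tarjan algorithm cited above:

```python
from collections import defaultdict, deque

def connected_components(num_nodes, edges):
    """Return independent clusters of a graph given as (u, v) edge pairs,
    ignoring edge direction, via breadth-first search."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, components = set(), []
    for start in range(num_nodes):
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            comp.append(node)
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        components.append(sorted(comp))
    return components
```

Each returned component corresponds to one candidate threat scenario's cluster of tactic blocks.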


For each of the clusters, the system identifies a threat scenario comprising a sequence of attack tactics (step 160). Each cluster has one or more paths of tactic blocks. A path of tactic blocks is a sequence of directionally connected tactic blocks that respects the sequence of tactics in the attack framework. In one embodiment, identifying a threat scenario for a cluster comprises identifying the path within the cluster that represents the highest-risk sequence of events in the cluster. Each cluster is associated with one threat scenario. In one embodiment, the system identifies the path associated with the highest-risk sequence of events in a cluster as follows:

    • The system identifies the start nodes in the cluster. The start nodes are the tactic blocks with only outgoing edges and no incoming edges (i.e., they are directionally connected to only other tactic block(s) that have a later start time).
    • Each of the start nodes serves as a starting point of a path within the cluster. Starting from a start node, a path follows the edges to nodes (i.e., tactic blocks) in time.
    • When a path encounters a fork, new paths are instantiated, one for each branch.
    • Each alert is associated with a risk score or a risk probability based on the underlying rules or events that caused the alert to trigger. Each path is scored by summing the risk scores or risk probabilities associated with the alerts in each node in the path. In certain embodiments, paths may be filtered based on thresholds on the number of users involved, the number of security-vendor alerts involved, the time duration, etc.
    • The highest-scoring path is selected as the threat scenario for the cluster, as it represents the highest-risk sequence of events in the cluster.
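The path-selection steps above can be sketched with a depth-first enumeration from the start nodes, assuming each node's risk score has already been aggregated from its block's alerts. The graph is acyclic because edges always point from earlier to later start times:

```python
def highest_risk_path(nodes, edges, scores):
    """Enumerate all paths from start nodes (no incoming edges), score each
    path as the sum of its nodes' risk scores, and return the best one."""
    outgoing = {n: [] for n in nodes}
    has_incoming = set()
    for u, v in edges:
        outgoing[u].append(v)
        has_incoming.add(v)
    # Start nodes have only outgoing edges.
    start_nodes = [n for n in nodes if n not in has_incoming]

    best_path, best_score = [], float("-inf")

    def dfs(node, path, score):
        nonlocal best_path, best_score
        path, score = path + [node], score + scores[node]
        if not outgoing[node]:        # leaf: a complete path
            if score > best_score:
                best_path, best_score = path, score
        for nxt in outgoing[node]:    # fork: one new path per branch
            dfs(nxt, path, score)

    for s in start_nodes:
        dfs(s, [], 0.0)
    return best_path, best_score
```

The filtering mentioned above (by user count, vendor-alert count, duration) would prune candidate paths before the final comparison.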


By identifying a sequence of attack tactics as a threat scenario, the system is able to detect threats across multiple stages of attack. The system ranks the threat scenarios based on the score associated with each threat scenario (e.g., the sum of the scores of all triggered rules in the threat scenario path) (step 170).


4. Displaying Threat Scenarios in Context of the Attack Framework

The system displays information related to the highest-ranked threat scenarios, including the sequence of attack tactics associated with the threat scenario (step 180). In one embodiment, the n highest-ranked threat scenarios are displayed, wherein n is a positive integer. The sequence of attack tactics displayed for the threat scenario is based on the sequence of tactic blocks and associated tactics in the threat scenario. FIG. 3 illustrates an example of how information for a threat scenario may be displayed. In this example, the information is displayed in a table, where there is a row corresponding to each tactic block in the path that makes up the threat scenario. The table includes a “tactic” column that enables the user to see the progression of attack tactics for the threat scenario. Thus, the displayed information effectively tells a story about the threat scenario in terms of the attack framework. Each row also includes the user name, start time, end time, source host (if any), rules triggered, and risk scores for the tactic block corresponding to the row.


There are also other ways in which information for a threat scenario may be displayed. For example, the threat scenario may be presented in the form of a graphical timeline illustrating the progression of attack tactics for the threat scenario. It is often useful for a user to see the user names and assets (e.g., source hosts) associated with a threat scenario. FIG. 4 illustrates an example of a graph that shows the user names and assets associated with a threat scenario. The graph in FIG. 4 shows that there are eight user names and four assets in the threat scenario.


5. Alternate Embodiment

In an alternate embodiment, the system does not identify the highest-risk path in a cluster. Instead, it sums the scores of the triggered rules/events in each of the tactic blocks of the cluster, if applicable, and ranks the clusters accordingly. In this embodiment, each cluster in its entirety is considered the threat scenario (as opposed to the highest-risk path in the cluster). In the display step, the system may show a visual timeline of all paths in the threat scenario, including showing how paths merge and bifurcate from beginning to end.
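Under the same assumption that per-block scores have already been computed, the alternate whole-cluster ranking might look like:

```python
def rank_clusters(clusters, scores):
    """Score each cluster as the sum of the risk scores of every tactic
    block it contains, and rank clusters by that total, highest first."""
    totals = [(sum(scores[n] for n in cluster), cluster) for cluster in clusters]
    return sorted(totals, key=lambda t: t[0], reverse=True)
```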


6. General

The methods described herein are embodied in software and performed by a computer system (comprising one or more computing devices) executing the software. A person skilled in the art would understand that a computer system has one or more memory units, disks, or other physical, computer-readable storage media for storing software instructions, as well as one or more processors for executing the software instructions.


As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the above disclosure is intended to be illustrative, but not limiting, of the scope of the invention.

Claims
  • 1. A method, performed by a computer system, for graph-based, multi-stage attack detection in which cybersecurity alerts are displayed based on attack tactics in an attack framework, the method comprising: obtaining a plurality of cybersecurity alerts (“alerts”) generated in an analysis window; classifying each of the alerts with an attack tactic based on the attack framework having a sequence of attack tactics; grouping the alerts into tactic blocks, wherein each tactic block satisfies an alert grouping criteria including having a same attack tactic and falling within a time window, wherein the time window is within the analysis window, and wherein each tactic block is associated with a start time based on a start timestamp of a first alert in the tactic block and an end time based on an end timestamp of a last alert in the tactic block; constructing a graph of tactic blocks by directionally connecting blocks based on a time criteria, a tactic criteria, and a matching criteria, wherein the time criteria for directionally connecting a first tactic block to a second tactic block is satisfied in response to the first tactic block having an earlier start time than a second tactic block and in response to the end time of the first tactic block being within P hours of the start time of the second tactic block, and wherein the tactic criteria is satisfied for directionally connecting the first tactic block to the second tactic block in response to the tactic associated with the first tactic block being the same or preceding the tactic associated with the second tactic block in the attack framework; identifying one or more clusters of interconnected components in the graph of tactic blocks; for each of the clusters, identifying a threat scenario comprising a sequence of attack tactics in the attack framework; ranking the threat scenarios; and displaying information for n highest ranked threat scenarios, wherein n is a positive integer, and wherein the information displayed for said threat scenarios includes a sequence of attack tactics associated with the threat scenario.
  • 2. The method of claim 1, wherein the matching criteria for directionally connecting the first tactic block to the second tactic block is satisfied in response to the first and second tactic blocks satisfying one or more of the following: (a) the first and second tactic blocks are associated with the same user name; (b) the first and second tactic blocks share the same source host computer; or (c) any of the first tactic block destination host computers matches the second tactic block's source host computer.
  • 3. The method of claim 1, wherein identifying the one or more clusters comprises identifying one or more connected components using a connected component algorithm in graph theory.
  • 4. The method of claim 1, wherein, for each of the clusters, identifying the threat scenario comprises identifying a path that represents the highest-risk sequence of events in the cluster.
  • 5. The method of claim 4, wherein each alert is associated with a risk score, and the path representing the highest-risk sequence of events is identified based on risk scores associated with each type of alert in the path.
  • 6. The method of claim 5, wherein the threat scenarios are ranked as a function of the risk scores associated with the threat scenarios.
  • 7. The method of claim 1, wherein the sequence of attack tactics displayed is based on the sequence of tactic blocks in the threat scenario.
  • 8. A non-transitory computer-readable medium comprising a computer program, that, when executed by a computer system, enables the computer system to perform the following method for graph-based, multi-stage attack detection in which cybersecurity alerts are displayed based on attack tactics in an attack framework, the method comprising: obtaining a plurality of cybersecurity alerts (“alerts”) generated in an analysis window; classifying each of the alerts with an attack tactic based on the attack framework having a sequence of attack tactics; grouping the alerts into tactic blocks, wherein each tactic block satisfies an alert grouping criteria including having a same attack tactic and falling within a time window, wherein the time window is within the analysis window, and wherein each tactic block is associated with a start time based on a start timestamp of a first alert in the tactic block and an end time based on an end timestamp of a last alert in the tactic block; constructing a graph of tactic blocks by directionally connecting blocks based on a time criteria, a tactic criteria, and a matching criteria, wherein the time criteria for directionally connecting a first tactic block to a second tactic block is satisfied in response to the first tactic block having an earlier start time than a second tactic block and in response to the end time of the first tactic block being within P hours of the start time of the second tactic block, and wherein the tactic criteria is satisfied for directionally connecting the first tactic block to the second tactic block in response to the tactic associated with the first tactic block being the same or preceding the tactic associated with the second tactic block in the attack framework; identifying one or more clusters of interconnected components in the graph of tactic blocks; for each of the clusters, identifying a threat scenario comprising a sequence of attack tactics in the attack framework; ranking the threat scenarios; and displaying information for n highest ranked threat scenarios, wherein n is a positive integer, wherein the information displayed for said threat scenarios includes a sequence of attack tactics associated with the threat scenario.
  • 9. The non-transitory computer-readable medium of claim 8, wherein the matching criteria for directionally connecting the first tactic block to the second tactic block is satisfied in response to the first and second tactic blocks satisfying one or more of the following: (a) the first and second tactic blocks are associated with the same user name; (b) the first and second tactic blocks share the same source host computer; or (c) any of the first tactic block destination host computers matches the second tactic block's source host computer.
  • 10. The non-transitory computer-readable medium of claim 8, wherein identifying the one or more clusters comprises identifying one or more connected components using a connected component algorithm in graph theory.
  • 11. The non-transitory computer-readable medium of claim 8, wherein, for each of the clusters, identifying the threat scenario comprises identifying a path that represents the highest-risk sequence of events in the cluster.
  • 12. The non-transitory computer-readable medium of claim 11, wherein each alert is associated with a risk score, and the path representing the highest-risk sequence of events is identified based on risk scores associated with each type of alert in the path.
  • 13. The non-transitory computer-readable medium of claim 12, wherein the threat scenarios are ranked as a function of the risk scores associated with the threat scenarios.
  • 14. The non-transitory computer-readable medium of claim 8, wherein the sequence of attack tactics displayed is based on the sequence of tactic blocks in the threat scenario.
  • 15. A computer system for graph-based, multi-stage attack detection in which cybersecurity alerts are displayed based on attack tactics in an attack framework, the system comprising: one or more processors; one or more memory units coupled to the one or more processors, wherein the one or more memory units store instructions that, when executed by the one or more processors, cause the system to perform the operations of: obtaining a plurality of cybersecurity alerts (“alerts”) generated in an analysis window; classifying each of the alerts with an attack tactic based on the attack framework having a sequence of attack tactics; grouping the alerts into tactic blocks, wherein each tactic block satisfies an alert grouping criteria including having a same attack tactic and falling within a time window, wherein the time window is within the analysis window, and wherein each tactic block is associated with a start time based on a start timestamp of a first alert in the tactic block and an end time based on an end timestamp of a last alert in the tactic block; constructing a graph of tactic blocks by directionally connecting blocks based on a time criteria, a tactic criteria, and a matching criteria, wherein the time criteria for directionally connecting a first tactic block to a second tactic block is satisfied in response to the first tactic block having an earlier start time than a second tactic block and in response to the end time of the first tactic block being within P hours of the start time of the second tactic block, and wherein the tactic criteria is satisfied for directionally connecting the first tactic block to the second tactic block in response to the tactic associated with the first tactic block being the same or preceding the tactic associated with the second tactic block in the attack framework; identifying one or more clusters of interconnected components in the graph of tactic blocks; for each of the clusters, identifying a threat scenario comprising a sequence of attack tactics in the attack framework; ranking the threat scenarios; and displaying information for n highest ranked threat scenarios, wherein n is a positive integer, wherein the information displayed for said threat scenarios includes a sequence of attack tactics associated with the threat scenario.
  • 16. The system of claim 15, wherein, for each of the clusters, identifying the threat scenario comprises identifying a path that represents the highest-risk sequence of events in the cluster.
RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 63/084,899, filed on Sep. 29, 2020, and titled “Graph-Based Multi-Staged Attack Detection,” the contents of which are incorporated by reference herein as if fully disclosed herein.

US Referenced Citations (179)
Number Name Date Kind
5941947 Brown et al. Aug 1999 A
6223985 DeLude May 2001 B1
6594481 Johnson et al. Jul 2003 B1
7181768 Ghosh et al. Feb 2007 B1
7624277 Simard et al. Nov 2009 B1
7668776 Ahles Feb 2010 B1
8326788 Allen et al. Dec 2012 B2
8443443 Nordstrom et al. May 2013 B2
8479302 Lin Jul 2013 B1
8484230 Harnett et al. Jul 2013 B2
8539088 Zheng Sep 2013 B2
8583781 Raleigh Nov 2013 B2
8606913 Lin Dec 2013 B2
8676273 Fujisake Mar 2014 B1
8850570 Ramzan Sep 2014 B1
8881289 Basavapatna et al. Nov 2014 B2
9055093 Borders Jun 2015 B2
9081958 Ramzan et al. Jul 2015 B2
9129110 Mason et al. Sep 2015 B1
9185095 Moritz et al. Nov 2015 B1
9189623 Lin et al. Nov 2015 B1
9202052 Fang et al. Dec 2015 B1
9680938 Gil et al. Jun 2017 B1
9690938 Saxe et al. Jun 2017 B1
9692765 Choi et al. Jun 2017 B2
9760240 Maheshwari et al. Sep 2017 B2
9779253 Mahaffey et al. Oct 2017 B2
9798883 Gil et al. Oct 2017 B1
9843596 Averbuch et al. Dec 2017 B1
9898604 Fang et al. Feb 2018 B2
10063582 Feng et al. Aug 2018 B1
10095871 Gil et al. Oct 2018 B2
10178108 Lin et al. Jan 2019 B1
10354015 Kalchbrenner et al. Jul 2019 B2
10360387 Jou et al. Jul 2019 B2
10397272 Bruss et al. Aug 2019 B1
10419470 Segev et al. Sep 2019 B1
10445311 Saurabh et al. Oct 2019 B1
10467631 Dhurandhar et al. Nov 2019 B2
10474828 Gil et al. Nov 2019 B2
10496815 Steiman et al. Dec 2019 B1
10621343 Maciejak et al. Apr 2020 B1
10645109 Lin et al. May 2020 B1
10685293 Heimann et al. Jun 2020 B1
10803183 Gil et al. Oct 2020 B2
10819724 Amiri et al. Oct 2020 B2
10841338 Lin et al. Nov 2020 B1
10887325 Lin et al. Jan 2021 B1
10944777 Lin et al. Mar 2021 B2
11017173 Lu et al. May 2021 B1
11080483 Islam et al. Aug 2021 B1
11080591 van den Oord et al. Aug 2021 B2
11140167 Lin et al. Oct 2021 B1
11151471 Niininen et al. Oct 2021 B2
11178168 Lin et al. Nov 2021 B1
11245716 Roelofs et al. Feb 2022 B2
11423143 Lin et al. Aug 2022 B1
11431741 Lin et al. Aug 2022 B1
11625366 Steiman et al. Apr 2023 B1
11956253 Lin et al. Apr 2024 B1
20020107926 Lee Aug 2002 A1
20030065926 Schultz et al. Apr 2003 A1
20030147512 Abburi Aug 2003 A1
20040073569 Knott et al. Apr 2004 A1
20060090198 Aaron Apr 2006 A1
20070156771 Hurley et al. Jul 2007 A1
20070282778 Chan et al. Dec 2007 A1
20080028467 Kommareddy et al. Jan 2008 A1
20080040802 Pierson et al. Feb 2008 A1
20080170690 Tysowski Jul 2008 A1
20080262990 Kapoor et al. Oct 2008 A1
20080301780 Ellison et al. Dec 2008 A1
20090144095 Shahi et al. Jun 2009 A1
20090171752 Galvin et al. Jul 2009 A1
20090292954 Jiang et al. Nov 2009 A1
20090293121 Bigus et al. Nov 2009 A1
20100125911 Bhaskaran May 2010 A1
20100191763 Wu Jul 2010 A1
20100269175 Stolfo et al. Oct 2010 A1
20100284282 Golic Nov 2010 A1
20110167495 Antonakakis et al. Jul 2011 A1
20120278021 Lin et al. Nov 2012 A1
20120316835 Maeda et al. Dec 2012 A1
20120316981 Hoover et al. Dec 2012 A1
20130080631 Lin Mar 2013 A1
20130117554 Ylonen May 2013 A1
20130197998 Buhrmann et al. Aug 2013 A1
20130227643 Mccoog et al. Aug 2013 A1
20130268260 Lundberg et al. Oct 2013 A1
20130305357 Ayyagari et al. Nov 2013 A1
20130340028 Rajagopal et al. Dec 2013 A1
20140007238 Magee Jan 2014 A1
20140090058 Ward et al. Mar 2014 A1
20140101759 Antonakakis et al. Apr 2014 A1
20140315519 Nielsen Oct 2014 A1
20150026027 Priess et al. Jan 2015 A1
20150039543 Athmanathan et al. Feb 2015 A1
20150046969 Abuelsaad et al. Feb 2015 A1
20150100558 Fan Apr 2015 A1
20150121503 Xiong Apr 2015 A1
20150205944 Turgeman Jul 2015 A1
20150215325 Ogawa Jul 2015 A1
20150339477 Abrams et al. Nov 2015 A1
20150341379 Lefebvre et al. Nov 2015 A1
20150363691 Gocek et al. Dec 2015 A1
20160005044 Moss et al. Jan 2016 A1
20160021117 Harmon et al. Jan 2016 A1
20160063397 Ylipaavalniemi et al. Mar 2016 A1
20160292592 Patthak et al. Oct 2016 A1
20160306965 Iyer et al. Oct 2016 A1
20160364427 Wedgeworth, III Dec 2016 A1
20170019506 Lee et al. Jan 2017 A1
20170024135 Christodorescu et al. Jan 2017 A1
20170127016 Yu et al. May 2017 A1
20170155652 Most et al. Jun 2017 A1
20170161451 Weinstein et al. Jun 2017 A1
20170178026 Thomas et al. Jun 2017 A1
20170213025 Srivastav et al. Jul 2017 A1
20170236081 Grady Smith et al. Aug 2017 A1
20170264679 Chen et al. Sep 2017 A1
20170318034 Holland et al. Nov 2017 A1
20170323636 Xiao et al. Nov 2017 A1
20180004961 Gil et al. Jan 2018 A1
20180048530 Nikitaki et al. Feb 2018 A1
20180063168 Sofka Mar 2018 A1
20180069893 Amit et al. Mar 2018 A1
20180075343 van den Oord et al. Mar 2018 A1
20180089304 Vizer et al. Mar 2018 A1
20180097822 Huang et al. Apr 2018 A1
20180144139 Cheng et al. May 2018 A1
20180157963 Salti et al. Jun 2018 A1
20180165554 Zhang et al. Jun 2018 A1
20180181883 Ikeda Jun 2018 A1
20180190280 Cui et al. Jul 2018 A1
20180234443 Wolkov et al. Aug 2018 A1
20180248895 Watson et al. Aug 2018 A1
20180285340 Murphy et al. Oct 2018 A1
20180288063 Koottayi et al. Oct 2018 A1
20180288086 Amiri et al. Oct 2018 A1
20180307994 Cheng et al. Oct 2018 A1
20180316701 Holzhauer et al. Nov 2018 A1
20180322368 Zhang et al. Nov 2018 A1
20190014149 Cleveland et al. Jan 2019 A1
20190028496 Fenoglio et al. Jan 2019 A1
20190034641 Gil et al. Jan 2019 A1
20190066185 More et al. Feb 2019 A1
20190080225 Agarwal Mar 2019 A1
20190089721 Pereira et al. Mar 2019 A1
20190103091 Chen Apr 2019 A1
20190114419 Chistyakov et al. Apr 2019 A1
20190124045 Zong et al. Apr 2019 A1
20190132629 Kendrick May 2019 A1
20190149565 Hagi et al. May 2019 A1
20190171655 Psota et al. Jun 2019 A1
20190182280 La Marca et al. Jun 2019 A1
20190205750 Zheng et al. Jul 2019 A1
20190207969 Brown Jul 2019 A1
20190213247 Pala et al. Jul 2019 A1
20190244603 Angkititrakul et al. Aug 2019 A1
20190303703 Kumar et al. Oct 2019 A1
20190318100 Bhatia et al. Oct 2019 A1
20190334784 Kvernvik et al. Oct 2019 A1
20190349400 Bruss et al. Nov 2019 A1
20190378051 Widmann et al. Dec 2019 A1
20200021607 Muddu et al. Jan 2020 A1
20200021620 Purathepparambil et al. Jan 2020 A1
20200082098 Gil et al. Mar 2020 A1
20200137104 Hassanzadeh Apr 2020 A1
20200177618 Hassanzadeh Jun 2020 A1
20200228557 Lin et al. Jul 2020 A1
20200302118 Cheng et al. Sep 2020 A1
20200327886 Shalaby et al. Oct 2020 A1
20210089884 Macready et al. Mar 2021 A1
20210125050 Wang Apr 2021 A1
20210126938 Trost Apr 2021 A1
20210182612 Zeng et al. Jun 2021 A1
20210232768 Ling et al. Jul 2021 A1
20220006814 Lin et al. Jan 2022 A1
20220147622 Chesla May 2022 A1
Non-Patent Literature Citations (19)
Bahnsen, Alejandro Correa “Classifying Phishing URLs Using Recurrent Neural Networks”, IEEE 2017.
Chen, Jinghui, et al., “Outlier Detection with Autoencoder Ensembles”, Proceedings of the 2017 SIAM International Conference on Data Mining, pp. 90-98.
Cooley, R., et al., “Web Mining: Information and Pattern Discovery on the World Wide Web”, Proceedings Ninth IEEE International Conference on Tools with Artificial Intelligence, Nov. 3-8, 1997, pp. 558-567.
DatumBox Blog, “Machine Learning Tutorial: The Naïve Bayes Text Classifier”, DatumBox Machine Learning Blog and Software Development News, Jan. 2014, pp. 1-11.
Fargo, Farah “Resilient Cloud Computing and Services”, PhD Thesis, Department of Electrical and Computer Engineering, University of Arizona, 2015, pp. 1-115.
Freeman, David, et al., “Who are you? A Statistical Approach to Measuring User Authenticity”, NDSS, Feb. 2016, pp. 1-15.
Goh, Jonathan et al., “Anomaly Detection in Cyber Physical Systems using Recurrent Neural Networks”, IEEE 2017.
Guo, Diansheng et al., “Detecting Non-personal and Spam Users on Geo-tagged Twitter Network”, Transactions in GIS, 2014, pp. 370-384.
Ioannidis, Yannis, “The History of Histograms (abridged)”, Proceedings of the 29th VLDB Conference (2003), pp. 1-12.
Kim, Jihyun et al., “Long Short Term Memory Recurrent Neural Network Classifier for Intrusion Detection”, IEEE 2016.
Malik, Hassan, et al., “Automatic Training Data Cleaning for Text Classification”, 11th IEEE International Conference on Data Mining Workshops, 2011, pp. 442-449.
Miettinen, Markus et al., "ConXsense: Automated Context Classification for Context-Aware Access Control", ASIA CCS'14, 2014, pp. 293-304.
Poh, Norman, et al., “EER of Fixed and Trainable Fusion Classifiers: A Theoretical Study with Application to Biometric Authentication Tasks”, Multiple Classifier Systems, MCS 2005, Lecture Notes in Computer Science, vol. 3541, pp. 1-11.
Shi, Yue et al., “Cloudlet Mesh for Securing Mobile Clouds from Intrusions and Network Attacks”, 2015 3rd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering, pp. 109-118.
Taylor, Adrian et al., “Anomaly Detection in Automobile Control Network Data with Long Short-Term Memory Networks”, IEEE 2016.
Taylor, Adrian “Anomaly-Based Detection of Malicious Activity in In-Vehicle Networks”, Ph.D. Thesis, University of Ottawa 2017.
Wang, Alex Hai, “Don't Follow Me Spam Detection in Twitter”, International Conference on Security and Cryptography, 2010, pp. 1-10.
Wang, Shuhao et al., “Session-Based Fraud Detection in Online E-Commerce Transactions Using Recurrent Neural Networks”, 2017.
Zhang, Ke et al., “Automated IT System Failure Prediction: A Deep Learning Approach”, IEEE 2016.
Provisional Applications (1)
Number Date Country
63084899 Sep 2020 US