The present invention relates generally to computer systems and networks, and particularly to detecting a bind shell attack on a computer in a network.
In many computers and network systems, multiple layers of security apparatus and software are deployed in order to detect and repel the ever-growing range of security threats. At the most basic level, computers use anti-virus software to prevent malicious software from running on the computer. At the network level, intrusion detection and prevention systems analyze and control network traffic to detect and prevent malware from spreading through the network.
Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
The description above is presented as a general overview of related art in this field and should not be construed as an admission that any of the information it contains constitutes prior art against the present patent application.
There is provided, in accordance with an embodiment of the present invention a method, including collecting data packets transmitted between multiple entities over a network, grouping the packets at least according to their source and destination entities and their times, into connections to which the packets belong, identifying pairs of the connections having identical source and destination entities and times that are together within a specified time window, generating sets of features for the identified pairs of the connections, evaluating, by a processor, the features in the pairs in order to detect a given pair of connections indicating malicious activity, and generating an alert for the malicious activity.
In one embodiment, the malicious activity includes a bind shell attack. In some embodiments, evaluating the features includes determining a baseline of the features, and comparing the features in the pairs of connections to the baseline of the features.
In additional embodiments, each given pair of connections includes first and second connections, and wherein each of the features is selected from a list consisting of respective ports used during the first and the second connections, respective start times of the first and the second connections, respective end times of the first and the second connections, respective durations of the first and the second connections, respective volumes of the first and the second connections, respective reverse volumes of the first and the second connections, a source IP address for the first and the second connections, a destination IP address for the first and the second connections and a protocol for the first and the second connections. In one embodiment, detecting the malicious activity includes detecting that the first and the second ports are different for the given pair of connections.
In further embodiments, each given pair of connections includes first and second connections, wherein a given feature includes a difference between respective start times of the first and the second connections. In supplemental embodiments, each given pair of connections includes first and second connections, wherein a given feature includes a difference between an end time of the first connection and a start time of the second connection. In another embodiment, each given pair of connections includes first and second connections, wherein a given feature includes a volume of data transmitted from the source entity to the destination entity during the first connection divided by a volume of data transmitted from the destination entity to the source entity during the first connection.
In some embodiments, each given pair of connections includes first and second connections, wherein evaluating the features includes applying a plurality of rules to the features, and wherein detecting the given pair of connections indicating malicious activity includes detecting that at least a predetermined number of the rules vote true. In a first embodiment, a given rule votes false if a duration of the second connection is less than a small value, or if a volume of data transmitted in the second connection is less than a negligible value, and wherein the given rule votes true otherwise. In a second embodiment, a given rule votes true if a volume of data transmitted in the first connection is less than a small value, and wherein the given rule votes false otherwise.
In a third embodiment, a given rule votes true if a difference between a start time of the first connection and a start time of the second connection is greater than a negligible value and less than a minimal value, and wherein the given rule votes false otherwise. In a fourth embodiment, a given rule votes true if a difference between an end time of the first connection and a start time of the second connection is a negligible value that can be positive or negative, and wherein the given rule votes false otherwise. In a fifth embodiment, a given rule votes true if a protocol used for the first connection is in a specified set of protocols, and wherein the given rule votes false otherwise.
In a sixth embodiment, a given rule votes true if a protocol used for the second connection is either unknown or is in a specified set of protocols, and wherein the given rule votes false otherwise. In a seventh embodiment, a given rule votes false if a count of distinct IP addresses of the entities that communicated with ports used during the first and the second connections is greater than a small value, and wherein the given rule votes true otherwise. In an eighth embodiment, a given rule votes false if, for a given pair of connections including a given destination entity, a count of unique source entities that accessed the given destination entity using a first given port during the first connection and a second given port during the second connection is greater than a high value, and wherein the given rule votes true otherwise.
In some embodiments, each given pair of connections includes first and second connections, wherein evaluating the features includes applying, to the features, a plurality of noise detectors including respective entries, wherein a given noise detector votes false if the features from the given pair of connections are in accordance with one of the entries, wherein the given noise detector votes true otherwise, and wherein detecting the given pair of connections indicating malicious activity includes detecting that at least a predetermined number of the noise detectors vote true.
In one embodiment, each of the entries includes a specified internet protocol (IP) address for the destination entity, and a specified port number on the destination entity used by the first connection. In another embodiment, each of the entries also includes a second specified port number on the destination entity used by the second connection. In a further embodiment, each of the entries includes a specified internet protocol (IP) address for the source entity and a specified port on the destination entity used by the first connection.
There is also provided, in accordance with an embodiment of the present invention an apparatus, including a probe configured to collect data packets transmitted between multiple entities over a network, and at least one processor configured to group the collected packets at least according to their source and destination entities and their times, into connections to which the packets belong, to identify pairs of the connections having identical source and destination entities and times that are together within a specified time window, to generate sets of features for the identified pairs of the connections, to evaluate the features of the pairs in order to detect a given pair of connections indicating malicious activity, and to generate an alert for the malicious activity.
There is further provided, in accordance with an embodiment of the present invention a computer software product, the product including a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer, cause the computer to collect data packets transmitted between multiple entities over a network, to group the packets at least according to their source and destination entities and their times, into connections to which the packets belong, to identify pairs of the connections having identical source and destination entities and times that are together within a specified time window, to generate sets of features for the identified pairs of the connections, to evaluate the features in the pairs in order to detect a given pair of connections indicating malicious activity, and to generate an alert for the malicious activity.
The disclosure is herein described, by way of example only, with reference to the accompanying drawings, wherein:
To attack and gain unauthorized access to data in a computer facility, some attackers use computer instructions (e.g., a software application or a script), known as shells, that can be used to remotely control a computer in the facility. The shell can be used either to execute a malicious application on the compromised computer or to provide a user interface that an attacker can use to control the compromised computer.
One example of an attack is a bind shell attack which moves laterally over a network by opening an interactive command shell on a target computer and connecting to the target computer from a previously compromised computer. In a bind shell attack, an initial connection between two computers is used to either exploit a vulnerability on a first port or to use credentials to access the first port, and a follow-up connection on a second (different) port is used for the interactive shell.
Embodiments of the present invention provide methods and systems for detecting bind shell attacks that can compromise confidential data stored on a corporate network. As described hereinbelow, data packets transmitted between multiple entities over a network are collected, and the packets are grouped at least according to their source and destination entities and their times, into connections to which the packets belong. Pairs of the connections having identical source and destination entities and times that are together within a specified time window are identified, and sets of features are generated for the identified pairs of the connections. The features in the pairs are evaluated in order to detect a given pair of connections indicating malicious activity (e.g., a bind shell attack), and an alert is generated for the malicious activity.
Each office computer 26 comprises an office computer identifier (ID) 30 that can be used to uniquely identify each of the office computers, and each server 28 comprises a server ID 32 that can be used to uniquely identify each of the servers. Examples of IDs 30 and 32 include, but are not limited to, MAC addresses and IP addresses.
Office computers 26 are coupled to an office local area network (LAN) 34, and servers 28 are coupled to a data center LAN 36. LANs 34 and 36 are coupled to each other via bridges 38 and 40, thereby enabling transmission of data packets between office computers 26 and servers 28. In operation, servers 28 typically store sensitive (e.g., corporate) data.
Computing facility 20 also comprises an internet gateway 44, which couples computing facility 20 to public networks 46 such as the Internet. In embodiments described herein, attack detection system 22 is configured to detect a bind shell attack initiated by a networked entity such as a given office computer 26. In some embodiments, the networked entity may be infected, via the Internet, by an attacking computer 48.
To protect the sensitive data, computing facility 20 may comprise a firewall 42 that controls traffic (i.e., the flow of data packets 24) between LANs 34 and 36 and Internet 46 based on predetermined security rules. For example, the firewall can be configured to allow office computers 26 to convey data requests to servers 28, and to block data requests from the servers to the office computers.
While the configuration in
Embodiments of the present invention describe methods and systems for detecting malicious activity between networked entities that comprise respective central processing units. Examples of the entities include office computers 26, servers 28, bridges 38 and 40, firewall 42 and gateway 44, as shown in
Additionally, while embodiments here describe attack detection system 22 detecting malicious content transmitted between a given office computer 26 and a given server 28, detecting malicious content transmitted between any pair of the networked entities (e.g., between two office computers 26 or between a given office computer 26 and firewall 42) is considered to be within the spirit and scope of the present invention. Furthermore, while
In the configuration shown in
In operation, processor 50 analyzes data packets 24, groups the data packets into connections 60, and stores the connections to memory 52. In embodiments described hereinbelow, processor 50 performs an additional analysis on the data packets in the connections to detect malicious activity in computing facility 20. In alternative embodiments, the tasks of collecting the data packets, grouping the data packets into connections 60, and analyzing the connections to detect the malicious activity may be split among multiple devices within computing facility 20 (e.g., a given office computer 26) or external to the computing facility (e.g., a data cloud based application).
Each connection 60 comprises one or more data packets 24 that are sequentially transmitted from a source entity to a given destination entity. In embodiments described herein, the source entity is also referred to as a given office computer 26, and the destination entity is also referred to herein as a given server 28. In some embodiments, processor 50 may store connections 60 in memory 52 as a first-in-first-out queue. In other embodiments, each connection 60 may comprise one or more data packets that are transmitted (a) from a first given office computer 26 to a second given office computer 26, (b) from a first given server 28 to a second given server 28, or (c) from a given server 28 to a given office computer 26.
In embodiments of the present invention, processor 50 generates a set of features from the data packets in each connection 60. Each given connection 60 comprises the following features 80:
In embodiments described herein, processor 50 groups multiple data packets 24 into a given connection 60. In some embodiments, processor 50 can identify each connection 60 via a 5-tuple comprising a given source IP address 66, a given source port 70, a given destination IP address 68, a given destination port 72 and a given protocol 74.
In one example where the protocol is TCP, the connection starts with a three-way handshake, and ends with either a FIN, an RST or a time-out. Processor 50 can track data packets 24, and construct the entire connection, since all the data packets in the connection are tied to each other with sequence numbers. In another example where the protocol is UDP, then there is no handshake. In this case, processor 50 can group the messages whose data packets 24 have the same 4-tuple comprising a given source IP address 66, a given source port 70, a given destination IP address 68 and a given destination port 72 (i.e., from the first data packet until there is a specified “silence time”).
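The tuple-based grouping described above can be sketched as follows; this is a minimal illustration, not the invention's implementation, and the packet dict fields (`src_ip`, `src_port`, `dst_ip`, `dst_port`, `proto`) are assumed names:

```python
from collections import defaultdict

def group_by_tuple(packets):
    """Group packets into connections keyed by the 5-tuple for TCP
    and the 4-tuple for UDP (which has no protocol handshake)."""
    flows = defaultdict(list)
    for p in packets:
        key = (p["src_ip"], p["src_port"], p["dst_ip"], p["dst_port"])
        if p["proto"] == "TCP":
            key += ("TCP",)  # 5-tuple for TCP connections
        flows[key].append(p)
    return flows

pkts = [
    {"src_ip": "10.0.0.5", "src_port": 40000, "dst_ip": "10.0.1.7",
     "dst_port": 100, "proto": "TCP"},
    {"src_ip": "10.0.0.5", "src_port": 40000, "dst_ip": "10.0.1.7",
     "dst_port": 100, "proto": "TCP"},
]
assert len(group_by_tuple(pkts)) == 1  # same 5-tuple -> one connection
```

A fuller implementation would also split UDP flows on the specified "silence time" and track TCP sequence numbers, as the text notes.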
Memory 52 also stores features 80, a classifier 82, rules 84 and noise detectors 86. In operation, processor 50 can generate features 80 from single connections 60 or pairs of the connections (i.e., connections 60 that have identical source IP addresses 66, identical destination addresses 68, and that are transmitted within a specified time window). In one example, a given feature 80 for a single connection 60 may comprise the total data volume (i.e., adding the volumes for all the data packets in the given connection) in the given connection. In another example, a given feature 80 for a pair of connections 60 may comprise a time period between the end of the first connection in the pair and the start of the second connection in the pair. Additional examples of features 80 are described hereinbelow.
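The two example features above (total volume of a single connection, and the gap between the connections in a pair) might be computed as in this sketch; the connection dict fields are illustrative assumptions:

```python
def total_volume(connection):
    """Total data volume of one connection: sum of its packet sizes."""
    return sum(p["size"] for p in connection["packets"])

def inter_connection_gap(first, second):
    """Time between the end of the first connection in a pair
    and the start of the second connection."""
    return second["start"] - first["end"]

c1 = {"packets": [{"size": 100}, {"size": 250}], "start": 0, "end": 12}
c2 = {"packets": [{"size": 40}], "start": 15, "end": 20}
assert total_volume(c1) == 350
assert inter_connection_gap(c1, c2) == 3
```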
In embodiments of the present invention, processor 50 can use features 80, noise detectors 86 and rules 84 for classifier 82 to identify malicious activity (e.g., a bind shell attack) between a given office computer 26 and a given server 28. Examples of noise detectors 86 and rules 84 are described hereinbelow.
Processor 50 comprises a general-purpose central processing unit (CPU) or special-purpose embedded processors, which are programmed in software or firmware to carry out the functions described herein. This software may be downloaded to the computer in electronic form, over a network, for example. Additionally or alternatively, the software may be stored on tangible, non-transitory computer-readable media, such as optical, magnetic, or electronic memory media. Further additionally or alternatively, at least some of the functions of processor 50 may be carried out by hard-wired or programmable digital logic circuits.
In the configuration shown in
To infect the given office computer, compromised software 100 (and thereby payloads 102 and 104) can be loaded into memory 94 by a user (not shown) or via the Internet. For example, processor 92 can retrieve compromised software 100 from attacking computer 48 and load the compromised software into memory 94 in response to a user (not shown) pressing on a malicious link in an email.
In
In response to executing first payload 102, processor 96 opens, on the given server, second port 72B (i.e., the destination port used during the second connection in a given pair of connections 60) for inbound connections. In a typical configuration, firewall 42 allows outbound connections from the infected office computer to the attacked server via port 72A (e.g., port “100”), but does not allow outbound connections from the attacked server to either the infected office computer or the Internet.
Opening port 72B enables the attacked server to receive larger amounts of data on that port. This enables compromised software 100 executing on processor 92 to complete the attack on the given server by conveying, during the subsequent second connection 60B, second payload 104 to the attacked server. Upon completing the attack, compromised software 100 can interact, via the second port, with the attacked server (i.e., via payload 104 executing on processor 96) in order to retrieve (i.e., "steal") sensitive data 90 from the attacked server. In some embodiments, payload 104 can be configured to retrieve data 90 and transmit the retrieved sensitive data to the infected office computer via port 72B.
In a collection step 120, processor 50 uses NIC 54 and probe 58 to collect data packets 24 transmitted between the entities coupled to networks 34 and 36 (e.g., office computers 26 and servers 28), and in a grouping step 122, the detection processor groups the collected data packets into connections 60. The following is an example of (a partial list) of raw data collected for connections 60A and 60B:
In a first identification step 124, processor 50 identifies pairs of connections 60 that comprise identical source computers (e.g., office computers 26), identical destination computers (e.g., servers 28), and are within a specified time window.
Bind shell attacks typically comprise two consecutive connections 60 between a source (i.e., a given office computer 26) and a destination (i.e., a given server 28), each of the connections using different ports 72 on the destination. In step 124, given a list L of connections 60 between the source and the destination, processor 50 can create a list (not shown) of connection pairs that can be candidates for a bind shell procedure. Processor 50 can then use the following algorithm to identify the connection pairs:
init pairs_list=[ ];
for each connection c1 in L:
    possible_phase2 ← all connections c2 in L that have the same source and destination as c1 and a start time within the specified time window after c1;
    for each connection c2 in possible_phase2:
        append the pair (c1, c2) to pairs_list;
The following is an example table showing a connection pair processed from the raw data described supra:
where phase1 indicates the first connection in the pair, and phase2 indicates the second connection in the pair.
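The pairing step can be sketched in Python as follows, assuming each connection is a dict with `src`, `dst`, and `start` fields (illustrative names, not the invention's data model):

```python
def candidate_pairs(connections, window):
    """Return (phase1, phase2) pairs: connections with identical source
    and destination where phase2 starts within `window` seconds after
    phase1, i.e. candidates for a bind shell procedure."""
    pairs = []
    for c1 in connections:
        for c2 in connections:
            if (c2 is not c1
                    and c1["src"] == c2["src"]
                    and c1["dst"] == c2["dst"]
                    and 0 <= c2["start"] - c1["start"] <= window):
                pairs.append((c1, c2))
    return pairs

conns = [
    {"src": "10.0.0.5", "dst": "10.0.1.7", "start": 0},   # phase1 candidate
    {"src": "10.0.0.5", "dst": "10.0.1.7", "start": 30},  # phase2 candidate
    {"src": "10.0.0.9", "dst": "10.0.1.7", "start": 40},  # different source
]
assert len(candidate_pairs(conns, window=600)) == 1
```

The quadratic scan mirrors the pseudocode above; a production system would presumably index connections by (source, destination) first.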
In a generation step 126, the detection processor generates features 80 from the pairs of connections. Then, in an application step 128, processor 50 applies a set of noise detectors 86 to the features in the identified pairs of connections 60, and in an application step 130, processor 50 applies a set of rules 84 to those features.
While monitoring data packets 24, processor 50 may identify large numbers of pairs of connections 60. Noise detectors 86 comprise, for the pairs of connections, sets of features 80 (e.g., destinations 68, ports 72 and protocols 74) that may be common and/or may have a low probability of being malicious. Examples of noise detectors 86 include, but are not limited to:
In the examples of the noise detectors described supra, each of the noise detectors can vote false (i.e., not suspicious) for any pairs of connections 60 that were flagged. Likewise, each of the noise detectors can vote true (i.e., may be suspicious) for any pairs of the connections that were not flagged.
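A noise detector of this kind can be modeled as a list of known-benign entries against which a pair's features are matched; the entry format below (destination IP plus first-connection port) follows one of the entry types described above, and the field names are illustrative:

```python
def noise_detector_vote(pair_features, entries):
    """Vote False (not suspicious) when the pair matches a known-benign
    entry; vote True (may be suspicious) otherwise."""
    for entry in entries:
        if (pair_features["dst_ip"] == entry["dst_ip"]
                and pair_features["port1"] == entry["port1"]):
            return False  # flagged as common / low-probability-malicious
    return True

benign = [{"dst_ip": "10.0.1.7", "port1": 80}]  # e.g., a busy web server
assert noise_detector_vote({"dst_ip": "10.0.1.7", "port1": 80}, benign) is False
assert noise_detector_vote({"dst_ip": "10.0.1.7", "port1": 100}, benign) is True
```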
In addition to generating features from the data packets in each connection 60, as described supra, processor 50 can compute additional features 80 for each pair of connections 60 based on the information in the connections (e.g., start time, source IP address 66, etc.). Examples of computed features 80 include, but are not limited to:
Examples of rules 84 include, but are not limited to:
As described supra, processor 50 extracts respective sets of attributes for identified pairs of connections 60, and compares the extracted sets of attributes to previously identified sets of attributes found to categorize any pairs of the connections as suspicious. In some embodiments, processor 50 can extract and compare the attributes by calculating the noise detectors, extracting the features, calculating the rules, and applying the model, as described respectively in steps 126, 128 and 130.
In some embodiments, rules 84 can be categorized into groups based on one or more subjects covered by the rules. As described hereinbelow, processor 50 can use a number of rules that are true for a given group as a parameter for detecting bind shell attacks. Examples of groups include:
In some embodiments, a given feature 80 may be based on a given group of rules 84. For example, a given feature 80 may comprise a number of rules 84 in the timing group that vote “true”. Another example of a given feature 80 based on multiple rules 84 comprises a number of all rules 84 that vote true.
In a second identification step 132, processor 50 identifies, based on features 80, noise detectors 86 and results from rules 84, malicious activity in a given pair of connections 60 that indicates a bind shell attack on a given network entity such as a given server 28. In addition to using the rules and the noise detectors as described hereinbelow, processor 50 can evaluate features 80 by determining a baseline of the features for normally observed traffic, comparing the features in the pairs of connections to the baseline of the features and suspecting malicious activity if the features in a given pair of connections 60 deviate from the baseline.
In one embodiment, processor 50 can evaluate features by analyzing combinations of features 80. In another embodiment, processor 50 can evaluate features 80 by applying rules 84 and noise detectors 86, and comparing respective numbers of the rules and the noise detectors that vote true against a predetermined threshold. In an additional embodiment, a given rule 84 voting true or a given feature having a specific value (e.g., a specific destination port 72) may indicate malicious activity. In a further embodiment, a number of rules 84 in a given category voting true can be used as a parameter in identifying malicious activity.
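The threshold-based evaluation can be as simple as counting true votes; the three rules below are simplified stand-ins for the rules described above (different destination ports, small first-connection volume, second connection starting shortly after the first), and the threshold is illustrative:

```python
def is_suspicious(features, rules, threshold):
    """Flag a pair of connections when at least `threshold` rules vote True."""
    votes = sum(1 for rule in rules if rule(features))
    return votes >= threshold

rules = [
    lambda f: f["port1"] != f["port2"],   # different destination ports
    lambda f: f["volume1"] < 1024,        # small first-connection volume
    lambda f: 0 < f["start_gap"] < 60,    # second connection follows soon
]
feats = {"port1": 100, "port2": 4444, "volume1": 300, "start_gap": 5}
assert is_suspicious(feats, rules, threshold=2) is True
```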
In one specific embodiment, processor 50 can identify a set of destination ports 72 that are commonly seen in connections 60, and the detection processor can suspect malicious activity if it detects a pair of connections 60 that use a "new" (or rarely used) destination port 72. In another embodiment, processor 50 can flag a given pair of connections 60 as suspicious if the destination ports in the first and the second connections in the pair are different.
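One way to realize this embodiment is to build a baseline of commonly observed destination ports from historical connections and flag pairs whose second connection uses a port outside it; the counts and cutoff here are illustrative assumptions:

```python
from collections import Counter

def common_ports(history, min_count=100):
    """Baseline: destination ports seen at least `min_count` times."""
    counts = Counter(c["dst_port"] for c in history)
    return {port for port, n in counts.items() if n >= min_count}

def uses_rare_port(pair, baseline):
    """Suspect a pair whose second connection uses a 'new' or
    rarely used destination port."""
    return pair["port2"] not in baseline

history = [{"dst_port": 443}] * 500 + [{"dst_port": 4444}] * 2
baseline = common_ports(history)
assert 443 in baseline and 4444 not in baseline
assert uses_rare_port({"port1": 443, "port2": 4444}, baseline) is True
```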
Finally in an alert step 134, processor 50 generates an alert (e.g., on user interface device 56) indicating the bind shell attack on the given networked entity, and the method ends. For example, processor 50 can generate the alert by presenting, on user interface device 56, a message indicating an attack on a given server 28 via a given office computer 26, and a type of the attack (e.g., bind shell).
While embodiments herein describe processor 50 performing steps 120-134 described supra, other configurations are considered to be within the spirit and scope of the present invention. For example, probe 58 may comprise a standalone unit that collects data packets 24, as described in step 120, and remaining steps 122-134 can be performed by any combination of processor 50, any other processors in computing facility 20, or a data cloud (not shown).
It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
Number | Name | Date | Kind |
---|---|---|---|
5991881 | Conklin et al. | Nov 1999 | A |
6347374 | Drake et al. | Feb 2002 | B1 |
7178164 | Bonnes | Feb 2007 | B1 |
7181769 | Keanini et al. | Feb 2007 | B1 |
7523016 | Surdulescu et al. | Apr 2009 | B1 |
7694150 | Kirby | Apr 2010 | B1 |
7703138 | Desai et al. | Apr 2010 | B2 |
7712134 | Nucci et al. | May 2010 | B1 |
7752665 | Robertson et al. | Jul 2010 | B1 |
7870439 | Fujiyama et al. | Jan 2011 | B2 |
7908655 | Bhattacharyya et al. | Mar 2011 | B1 |
8245298 | Pletka et al. | Aug 2012 | B2 |
8397284 | Kommareddy et al. | Mar 2013 | B2 |
8429180 | Sobel et al. | Apr 2013 | B1 |
8490190 | Hernacki et al. | Jul 2013 | B1 |
8516573 | Brown et al. | Aug 2013 | B1 |
8555388 | Wang et al. | Oct 2013 | B1 |
8607353 | Rippert, Jr. et al. | Dec 2013 | B2 |
8620942 | Hoffman et al. | Dec 2013 | B1 |
8677487 | Balupari et al. | Mar 2014 | B2 |
8762288 | Dill | Jun 2014 | B2 |
8769681 | Michels | Jul 2014 | B1 |
8925095 | Herz et al. | Dec 2014 | B2 |
8966625 | Zuk et al. | Feb 2015 | B1 |
9038178 | Lin | May 2015 | B1 |
9117075 | Yeh | Aug 2015 | B1 |
9147071 | Sallam | Sep 2015 | B2 |
9231962 | Yen et al. | Jan 2016 | B1 |
9342691 | Maestas | May 2016 | B2 |
9378361 | Yen et al. | Jun 2016 | B1 |
9386028 | Altman | Jul 2016 | B2 |
9531614 | Nataraj et al. | Dec 2016 | B1 |
9736251 | Samant et al. | Aug 2017 | B1 |
9979739 | Mumcuoglu et al. | May 2018 | B2 |
9979742 | Mumcuoglu et al. | May 2018 | B2 |
10027694 | Gupta et al. | Jul 2018 | B1 |
10237875 | Romanov | Mar 2019 | B1 |
10728281 | Kurakami | Jul 2020 | B2 |
10904277 | Sharifi Mehr | Jan 2021 | B1 |
20020133586 | Shanklin | Sep 2002 | A1 |
20030110396 | Lewis et al. | Jun 2003 | A1 |
20030133443 | Klinker | Jul 2003 | A1 |
20040003286 | Kaler et al. | Jan 2004 | A1 |
20040015728 | Cole et al. | Jan 2004 | A1 |
20040117658 | Klaes | Jun 2004 | A1 |
20040199793 | Wilken et al. | Oct 2004 | A1 |
20040210769 | Radatti et al. | Oct 2004 | A1 |
20040250169 | Takemori et al. | Dec 2004 | A1 |
20040260733 | Adelstein et al. | Dec 2004 | A1 |
20050060295 | Gould | Mar 2005 | A1 |
20050128989 | Bhagwat et al. | Jun 2005 | A1 |
20050198269 | Champagne et al. | Sep 2005 | A1 |
20050216749 | Brent | Sep 2005 | A1 |
20050262560 | Gassoway | Nov 2005 | A1 |
20050268112 | Wang et al. | Dec 2005 | A1 |
20050286423 | Poletto | Dec 2005 | A1 |
20060018466 | Adelstein et al. | Jan 2006 | A1 |
20060031673 | Beck et al. | Feb 2006 | A1 |
20060075462 | Golan | Apr 2006 | A1 |
20060075492 | Golan et al. | Apr 2006 | A1 |
20060075500 | Beaman et al. | Apr 2006 | A1 |
20060107321 | Tzadikario | May 2006 | A1 |
20060126522 | Oh | Jun 2006 | A1 |
20060136720 | Armstrong et al. | Jun 2006 | A1 |
20060137009 | Chesla | Jun 2006 | A1 |
20060149848 | Shay | Jul 2006 | A1 |
20060161984 | Phillips et al. | Jul 2006 | A1 |
20060191010 | Benjamin | Aug 2006 | A1 |
20060215627 | Waxman | Sep 2006 | A1 |
20060242694 | Gold et al. | Oct 2006 | A1 |
20060259967 | Thomas et al. | Nov 2006 | A1 |
20060282893 | Wu et al. | Dec 2006 | A1 |
20070072661 | Lototski | Mar 2007 | A1 |
20070198603 | Tsioutsiouliklis et al. | Aug 2007 | A1 |
20070218874 | Sinha et al. | Sep 2007 | A1 |
20070226796 | Gilbert et al. | Sep 2007 | A1 |
20070226802 | Gopalan et al. | Sep 2007 | A1 |
20070245420 | Yong et al. | Oct 2007 | A1 |
20070255724 | Jung et al. | Nov 2007 | A1 |
20070283166 | Yami et al. | Dec 2007 | A1 |
20080005782 | Aziz | Jan 2008 | A1 |
20080016339 | Shukla | Jan 2008 | A1 |
20080016570 | Capalik | Jan 2008 | A1 |
20080028048 | Shekar et al. | Jan 2008 | A1 |
20080104046 | Singla et al. | May 2008 | A1 |
20080104703 | Rihn et al. | May 2008 | A1 |
20080134296 | Amitai et al. | Jun 2008 | A1 |
20080148381 | Aaron | Jun 2008 | A1 |
20080198005 | Schulak et al. | Aug 2008 | A1 |
20080262991 | Kapoor | Oct 2008 | A1 |
20080271143 | Stephens et al. | Oct 2008 | A1 |
20080285464 | Katzir | Nov 2008 | A1 |
20090007100 | Field et al. | Jan 2009 | A1 |
20090007220 | Ormazabal et al. | Jan 2009 | A1 |
20090115570 | Cusack, Jr. | May 2009 | A1 |
20090157574 | Lee | Jun 2009 | A1 |
20090164522 | Fahey | Jun 2009 | A1 |
20090193103 | Small et al. | Jul 2009 | A1 |
20090320136 | Lambert et al. | Dec 2009 | A1 |
20100054241 | Shah et al. | Mar 2010 | A1 |
20100071063 | Wang et al. | Mar 2010 | A1 |
20100107257 | Ollmann | Apr 2010 | A1 |
20100162400 | Feeney et al. | Jun 2010 | A1 |
20100197318 | Petersen et al. | Aug 2010 | A1 |
20100212013 | Kim et al. | Aug 2010 | A1 |
20100217861 | Wu | Aug 2010 | A1 |
20100235915 | Memon et al. | Sep 2010 | A1 |
20100268818 | Richmond et al. | Oct 2010 | A1 |
20100278054 | Dighe | Nov 2010 | A1 |
20100280978 | Shimada et al. | Nov 2010 | A1 |
20100284282 | Golic | Nov 2010 | A1 |
20100299430 | Powers et al. | Nov 2010 | A1 |
20110026521 | Gamage et al. | Feb 2011 | A1 |
20110035795 | Shi | Feb 2011 | A1 |
20110087779 | Martin et al. | Apr 2011 | A1 |
20110125770 | Battestini et al. | May 2011 | A1 |
20110153748 | Lee et al. | Jun 2011 | A1 |
20110185055 | Nappier et al. | Jul 2011 | A1 |
20110185421 | Wittenstein et al. | Jul 2011 | A1 |
20110214187 | Wittenstein et al. | Sep 2011 | A1 |
20110247071 | Hooks et al. | Oct 2011 | A1 |
20110265011 | Taylor et al. | Oct 2011 | A1 |
20110270957 | Phan et al. | Nov 2011 | A1 |
20110302653 | Frantz et al. | Dec 2011 | A1 |
20120042060 | Jackowski et al. | Feb 2012 | A1 |
20120079596 | Thomas et al. | Mar 2012 | A1 |
20120102359 | Hooks | Apr 2012 | A1 |
20120136802 | McQuade et al. | May 2012 | A1 |
20120137342 | Hartrell et al. | May 2012 | A1 |
20120143650 | Crowley et al. | Jun 2012 | A1 |
20120191660 | Hoog | Jul 2012 | A1 |
20120222120 | Rim et al. | Aug 2012 | A1 |
20120233311 | Parker et al. | Sep 2012 | A1 |
20120240185 | Kapoor | Sep 2012 | A1 |
20120275505 | Tzannes et al. | Nov 2012 | A1 |
20120331553 | Aziz et al. | Dec 2012 | A1 |
20130031600 | Luna et al. | Jan 2013 | A1 |
20130083700 | Sandhu et al. | Apr 2013 | A1 |
20130097706 | Titonis et al. | Apr 2013 | A1 |
20130111211 | Winslow et al. | May 2013 | A1 |
20130031037 | Brandt et al. | Jul 2013 | A1 |
20130196549 | Sorani | Aug 2013 | A1 |
20130298237 | Smith | Nov 2013 | A1 |
20130298243 | Kumar et al. | Nov 2013 | A1 |
20130333041 | Christodorescu et al. | Dec 2013 | A1 |
20140013434 | Ranum et al. | Jan 2014 | A1 |
20140143538 | Lindteigen | May 2014 | A1 |
20140165207 | Engel et al. | Jun 2014 | A1 |
20140230059 | Wang | Aug 2014 | A1 |
20140325643 | Bart et al. | Oct 2014 | A1 |
20150040219 | Garraway et al. | Feb 2015 | A1 |
20150047032 | Hannis et al. | Feb 2015 | A1 |
20150071308 | Webb, III et al. | Mar 2015 | A1 |
20150121461 | Dulkin et al. | Apr 2015 | A1 |
20150195300 | Adjaoute | Jul 2015 | A1 |
20150264069 | Beauchesne et al. | Sep 2015 | A1 |
20150304346 | Kim | Oct 2015 | A1 |
20150341380 | Heo | Nov 2015 | A1 |
20150341389 | Kurakami | Nov 2015 | A1 |
20160021141 | Liu et al. | Jan 2016 | A1 |
20160127390 | Lai et al. | May 2016 | A1 |
20160191918 | Lai et al. | Jun 2016 | A1 |
20160234167 | Engel et al. | Aug 2016 | A1 |
20160315954 | Peterson et al. | Oct 2016 | A1 |
20160323299 | Huston, III | Nov 2016 | A1 |
20170026395 | Mumcuoglu et al. | Jan 2017 | A1 |
20170054744 | Mumcuoglu et al. | Feb 2017 | A1 |
20170063921 | Fridman et al. | Mar 2017 | A1 |
20170078312 | Yamada | Mar 2017 | A1 |
20170111376 | Friedlander et al. | Apr 2017 | A1 |
20170262633 | Miserendino et al. | Sep 2017 | A1 |
20180004948 | Martin et al. | Jan 2018 | A1 |
20180332064 | Harris et al. | Nov 2018 | A1 |
20190044963 | Rajasekharan et al. | Feb 2019 | A1 |
20190334931 | Arlitt et al. | Oct 2019 | A1 |
20200007566 | Wu | Jan 2020 | A1 |
20200145435 | Chiu et al. | May 2020 | A1 |
Number | Date | Country |
---|---|---|
0952521 | Oct 1999 | EP |
2056559 | May 2009 | EP |
03083660 | Oct 2003 | WO |
Entry |
---|
Asrigo K, Litty L, Lie D. Using VMM-based sensors to monitor honeypots. In Proceedings of the 2nd international conference on Virtual execution environments. Jun. 14, 2006 (pp. 13-23). (Year: 2006). |
Skormin VA. Anomaly-Based Intrusion Detection Systems Utilizing System Call Data. State Univ of New York at Binghamton Dept of Electrical and Computer Engineering. Mar. 1, 2012. (Year: 2012). |
Barford, Paul, and David Plonka. “Characteristics of network traffic flow anomalies.” in Proceedings of the 1st ACM Sigcomm Workshop on Internet Measurement, pp. 69-73. 2001. (Year: 2001). |
U.S. Appl. No. 15/955,712 office action dated Aug. 13, 2019. |
Light Cyber Ltd, “LightCyber Magna”, 3 pages, year 2011. |
Tier-3 Pty Ltd, “Huntsman Protector 360”, Brochure, 2 pages, Apr. 1, 2010. |
Tier-3 Pty Ltd, “Huntsman 5.7 the Power of 2”, Brochure, 2 pages, Oct. 8, 2012. |
Bilge et al., "Disclosure: Detecting Botnet Command and Control Servers Through Large-Scale NetFlow Analysis", ACSAC, 10 pages, Dec. 3-7, 2012. |
Blum., “Combining Labeled and Unlabeled Data with Co-Training”, Carnegie Mellon University, Research Showcase @ CMU, Computer Science Department, 11 pages, Jul. 1998. |
Felegyhazi et al., “On the Potential of Proactive Domain Blacklisting”, LEET'10 Proceedings of the 3rd USENIX Conference on Large-scale exploits and emergent threats, 8 pages, San Jose, USA, Apr. 27, 2010. |
Frosch, "Mining DNS-related Data for Suspicious Features", Ruhr-Universität Bochum, Master's Thesis, 88 pages, Dec. 23, 2011. |
Bilge et al., "Exposure: Finding Malicious Domains Using Passive DNS Analysis", NDSS Symposium, 17 pages, Feb. 6-9, 2011. |
Gross et al., “Fire: Finding Rogue nEtworks”, Annual Conference on Computer Security Applications (ACSAC'09), 10 pages, Dec. 7-11, 2009. |
Markowitz, N., “Bullet Proof Hosting: A Theoretical Model”, Security Week, 5 pages, Jun. 29, 2010, downloaded from http://www.infosecisland.com/blogview/4487-Bullet-Proof-Hosting-A-Theoretical-Model.html. |
Konte et al., "ASwatch: An AS Reputation System to Expose Bulletproof Hosting ASes", SIGCOMM, pp. 625-638, Aug. 17-21, 2015. |
Markowitz, N., “Patterns of Use and Abuse with IP Addresses”, Security Week, 4 pages, Jul. 10, 2010, downloaded from http://infosecisland.com/blogview/5068-Patterns-of-Use-and-Abuse-with-IP-Addresses.html. |
Wei et al., “Identifying New Spam Domains by Hosting IPs: Improving Domain Blacklisting”, Department of Computer and Information Sciences, University of Alabama at Birmingham, USA, 8 pages, Dec. 8, 2010. |
Goncharov, M., "Criminal Hideouts for Lease: Bulletproof Hosting Services", Forward-Looking Threat Research (FTR) Team, a TrendLabs Research Paper, 28 pages, Jul. 3, 2015. |
U.S. Appl. No. 15/075,343 office action dated Jul. 13, 2018. |
Niksun, “Network Intrusion Forensic System (NIFS) for Intrusion Detection and Advanced Post Incident Forensics”, Whitepaper, 12 pages, Feb. 15, 2010. |
Shulman, A., "Top Ten Database Security Threats: How to Mitigate the Most Significant Database Vulnerabilities", White Paper, 14 pages, year 2006. |
U.S. Appl. No. 15/955,712 office action dated Dec. 31, 2019. |
International Application # PCT/IB2019/060538 search report dated Feb. 28, 2020. |
Bhuyan et al., "Surveying Port Scans and Their Detection Methodologies", Computer Journal, vol. 54, no. 10, pp. 1565-1581, Apr. 20, 2011. |
U.S. Appl. No. 16/261,606 Office Action dated Feb. 5, 2021. |
U.S. Appl. No. 16/261,608 Office Action dated Feb. 5, 2021. |
U.S. Appl. No. 16/261,655 Office Action dated Jan. 29, 2021. |
U.S. Appl. No. 16/261,634 Office Action dated Mar. 8, 2021. |
Number | Date | Country | |
---|---|---|---|
20190319981 A1 | Oct 2019 | US |