A device may be placed (e.g., below the local recursive domain name server (RDNS) or at the edge of a network) that may monitor DNS query/response messages from/to the computers within the network. For example, queries to domain names that result in non-existent domain (NXDOMAIN) response code (RCODE) messages (e.g., domain names for which no mapping to an IP address exists) may be analyzed. These types of unsuccessful domain name resolutions may be referred to as NX domains. To identify automatically generated domain names, NX application 105 may search for relatively large clusters of NX domains that have similar syntactic features and are queried by multiple (e.g., potentially compromised by DGA-enabled malware) computers during a given time period (e.g., one day). Because multiple computers may be compromised with the same DGA-based bots, these computers may generate many DNS queries that result in NX domains, and many of these NX domain names may be queried by more than one computer. In addition, NX application 105 may identify and filter out naturally occurring, user-generated NX domain names (e.g., due to typos). Once NX application 105 finds a cluster of NX domain names, this cluster may represent a sample of domain names generated by a DGA. At this point, NX application 105 may apply statistical data mining techniques to build a model of the DGA that may be used to track compromised assets running the same DGA-bot (e.g., in real-time or near real-time).
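For illustration only, the following is a minimal sketch of collecting NX domains per client from a DNS response log during one epoch. It assumes a hypothetical CSV log with timestamp, client_ip, qname and rcode columns (RCODE 3 indicating NXDOMAIN); the log format and field names are illustrative assumptions, not part of the system described above.

    import csv
    from collections import defaultdict

    def collect_nx_domains(log_path, epoch_start, epoch_end):
        # Return {client_ip: [nx_domain, ...]} for NXDOMAIN responses in one epoch.
        nx_by_client = defaultdict(list)
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                ts = float(row["timestamp"])
                if epoch_start <= ts < epoch_end and int(row["rcode"]) == 3:
                    nx_by_client[row["client_ip"]].append(row["qname"].lower())
        return nx_by_client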
The DGA discovery module 205 may analyze streams of NX domains generated by assets 115 located within a monitored network 125 to identify, in an unsupervised way, clusters of NX domains that are being automatically generated by a DGA. The NX domains generated by network users querying non-existent domain names may be collected during a given epoch E. Then, the collected NX domains may be clustered. The DGA discovery module 205 may comprise: a name-based features clustering module 215, a graph clustering module 220, a daily clustering correlation module 225, a temporal clustering correlation module 226 or a DGA filtering module 230, or any combination thereof.
The name-based features clustering module 215 may cluster NX domains if the domain name strings have similar statistical features (e.g., similar length, a similar level of randomness (e.g., Table 1), a similar number of domain levels).
After obtaining the correlated clusters of NX domains via the daily cluster correlation module 225, the temporal clustering correlation module 226 may utilize a rolling window of two consecutive epochs to correlate temporal clusters that may include assets that consistently create NX domains that tend to cluster together (e.g., according to the name-based features clustering and the graph clustering). The resulting temporally correlated clusters may be forwarded to the DGA filtering module 230. The DGA filtering module 230 may prune away clusters of NX domains that are already known, and therefore reduce the output to clusters that are composed of previously unseen and not yet modeled DGA NX domain traffic. In this way, modules 215, 220, 225, 226 and 230 may group together domain names that were likely automatically generated by the same algorithm running on multiple machines within the monitored network.
The DGA bot identification module 210 may analyze the assets collecting the NX domains. While the DGA discovery module 205 may consider collections of assets, the DGA bot identification module 210 may consider each asset separately. The DGA bot identification module 210 may comprise: a DGA classifier module 235 and/or a DGA modeling module 240. The DGA classifier module 235 may model DGA traffic and may use machine learning on NX domain data present in a knowledge base as well as reports of unknown DGAs arriving from the DGA discovery module 205. The clustering done by modules 215, 220, 225, 226 and 230 may contain groups of domain names that happen to be similar for reasons other than being generated by the same algorithm (e.g., NX domains due to common typos or to misconfigured applications). Thus, the DGA classifier module 235 may be used as a filter that may help eliminate noisy clusters. The DGA classifier module 235 may prune NX domain clusters that appear to be generated by DGAs that have been previously discovered and modeled, or that contain domain names that are similar to popular legitimate domains. The DGA modeling module 240 may be responsible for two operations: the asset classification and the DGA modeling processes. Each asset may be classified on a daily basis using the stream of NX domains that it generates. These NX domains may be grouped in sets of α and classified against models of known DGAs. The DGA classifier module 235 may be trained using the following sources of labeled data: a large list of popular legitimate domain names; NX domains collected by running known DGA-bot samples in a controlled environment; or clusters of NX domains output by the DGA discovery module 205 that undergo manual inspection and are confirmed to be related to yet-to-be-named botnets; or any combination thereof. For example, the DGA modeling module 240 may receive different sets of labeled domains (e.g., Legitimate, DGA-Bobax, DGA-Torpig/Sinowal, DGA-Conficker.C, DGA-UnknownBot-1, DGA-UnknownBot-2, etc.) and may output a new trained DGA classifier module 235.
The elements of the DGA discovery module 205 and the DGA bot identification module 210 have been set forth above. These are now described in more detail.
DGA Discovery Module 205.
The DGA discovery module 205 may comprise: a name-based features clustering module 215, a graph clustering module 220, a daily clustering correlation module 225, a temporal clustering correlation module 226 or a DGA filtering module 230, or any combination thereof. These modules are described in detail below.
Name-Based Features Clustering Module 215.
Using all the NX domains observed in an epoch, and using groups of α sequential NX domains per asset, multiple name-based features may be computed. For example, in one embodiment, multiple (e.g., 33) features from the following families may be computed: n-gram features (NGF); entropy-based features (EBF); or structural domain features (SDF); or any combination thereof. (These feature families are described in more detail below.) After computing all vectors in the epoch, X-means clustering (XMC) may be used to cluster vectors composed of α NX domains that share similar name-based statistical properties. In some embodiments, this name-based features clustering does not need to take into consideration the individual assets that generated the NX domains.
For example, assume that for epoch ti, XMC is run to obtain a set of clusters CIDS_XMC(ti), where CID may be a cluster identification, CIDS_XMC(ti) may be the set of all cluster IDs, and each cid_XMC in CIDS_XMC(ti) may identify a cluster of NX domain vectors produced by the XMC process during epoch ti.
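As an illustration of this step, the following is a minimal sketch (in Python) that groups each asset's NX domains into sets of α, computes one feature vector per set, and clusters the vectors. X-means is approximated here with scikit-learn KMeans plus a silhouette-score model selection, and compute_features() stands in for the NGF/EBF/SDF features described below; these substitutions are assumptions, not the exact method above.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    def vectors_for_epoch(nx_by_asset, alpha, compute_features):
        # One feature vector per group of alpha sequential NX domains per asset.
        vectors, groups = [], []
        for asset, domains in nx_by_asset.items():
            for i in range(0, len(domains) - alpha + 1, alpha):
                group = domains[i:i + alpha]
                vectors.append(compute_features(group))
                groups.append((asset, group))
        return np.array(vectors), groups

    def xmeans_like_clustering(vectors, k_candidates=range(2, 10)):
        # Crude stand-in for X-means: try several k and keep the best silhouette.
        best_labels, best_score = None, -1.0
        for k in k_candidates:
            if k >= len(vectors):
                break
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)
            score = silhouette_score(vectors, km.labels_)
            if score > best_score:
                best_labels, best_score = km.labels_, score
        return best_labels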
Statistical Features.
As a reference, the definitions and notation used for various example statistical features are described below. The statistical features may be referred to as name-based features. Name-based features may aim to model the statistical properties of domain name strings. The motivation behind these features may include the fact that DGAs typically take an input seed and generate domain name strings by following a deterministic algorithm. These domains may often have a number of similarities, such as similar length, similar levels of randomness, and a similar number of domain levels. Name-based statistical features may comprise: n-gram features (NGF); entropy-based features (EBF); or structural domain features (SDF); or any combination thereof.
Notation.
A domain name d comprises a set of labels separated by dots (e.g., www.example.com). The rightmost label may be called the top-level domain (TLD or TLD(d)) (e.g., com). The second-level domain (2LD or 2LD(d)) may represent the two rightmost labels separated by a period (e.g., example.com). The third-level domain (3LD or 3LD(d)) may contain the three rightmost labels (e.g., www.example.com). In some embodiments, only NX domains that have a valid TLD appearing in the ICANN TLD listing (e.g., http://www.icann.org/en/tlds/.) may be considered. A domain name d may be called an NX domain if it does not own any resource record (RR) of Type-A. In other words, d may be called an NX domain if it does not resolve to any IP address.
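As a concrete illustration of the level notation above, the following minimal sketch extracts TLD(d), 2LD(d) and 3LD(d) from a domain name string by splitting on dots; it does not perform the ICANN TLD validation mentioned above.

    def domain_levels(d):
        # Split a domain name into labels and return (TLD, 2LD, 3LD).
        labels = d.lower().rstrip(".").split(".")
        tld = labels[-1]
        two_ld = ".".join(labels[-2:]) if len(labels) >= 2 else tld
        three_ld = ".".join(labels[-3:]) if len(labels) >= 3 else two_ld
        return tld, two_ld, three_ld

    # e.g. domain_levels("www.example.com") -> ("com", "example.com", "www.example.com")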
The IP address of an asset in a monitored network that relies on the network's local recursive DNS server (RDNS) to resolve domain names may be defined as IPj. (Note that the approach may be generalized to machines that use an external resolver (e.g., Google's Public DNS). This IP address could represent a single machine, a NAT device, or a proxy device.) The sequence of NX domains generated by IP address IPj (e.g., the IP address of the requester) during an epoch ti may be defined as N_IPj(ti).
Given a sub-sequence of α NX domains NX_IPj(ti) drawn from N_IPj(ti), the following groups of name-based statistical features may be computed.
n-Gram Features (NGF).
Given a sequence NX_IPj(ti), the frequency distribution of character n-grams across the domain name strings may be computed.
This group of features may try to capture whether the n-grams are uniformly distributed, or whether some n-grams tend to have a much higher frequency than others. While this group of features by itself may not be enough to discriminate domains generated by DGAs from domains generated by human typos, when combined with other features it may yield high classification accuracy.
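A hedged sketch of one plausible realization of these features follows: for a set of NX domains, it builds the frequency distribution of character n-grams and summarizes it with mean, median and standard deviation. The choice of n = 1..4 and of these particular summary statistics is an assumption rather than a detail stated above.

    from collections import Counter
    import numpy as np

    def ngram_features(domains, ns=(1, 2, 3, 4)):
        # Summarize the n-gram frequency distribution over a set of domain strings.
        feats = []
        for n in ns:
            counts = Counter()
            for d in domains:
                s = d.lower()
                counts.update(s[i:i + n] for i in range(len(s) - n + 1))
            freqs = np.array(list(counts.values()), dtype=float)
            if freqs.size == 0:
                feats.extend([0.0, 0.0, 0.0])
            else:
                feats.extend([freqs.mean(), np.median(freqs), freqs.std()])
        return feats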
Entropy-Based Features (EBF).
This group of features may compute the entropy of the character distribution for separate domain levels. For example, the character entropy for the 2LDs and 3LDs extracted from the domains in NX_IPj(ti) may be computed.
This group of features may determine the level of randomness for the domains in NX_IPj(ti).
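The following hedged sketch computes the character entropy of the 2LD and 3LD strings pooled over a set of NX domains, reusing domain_levels() from the notation sketch above; pooling the strings before computing entropy is an assumed detail.

    import math
    from collections import Counter

    def char_entropy(s):
        # Shannon entropy (bits) of the character distribution of a string.
        if not s:
            return 0.0
        counts = Counter(s)
        total = len(s)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def entropy_features(domains):
        two_lds, three_lds = [], []
        for d in domains:
            _, two_ld, three_ld = domain_levels(d)
            two_lds.append(two_ld)
            three_lds.append(three_ld)
        return [char_entropy("".join(two_lds)), char_entropy("".join(three_lds))]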
Structural Domain Features (SDF).
This group of features may be used to summarize information about the structure of the NX domains in NX_IPj(ti).
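As a hedged illustration, the sketch below summarizes structure with the mean and standard deviation of domain name length, the mean and standard deviation of the number of domain levels, and the number of distinct TLDs in the set; these specific features are assumptions chosen to illustrate the kind of structural summary described above.

    import numpy as np

    def structural_features(domains):
        lengths = np.array([len(d) for d in domains], dtype=float)
        levels = np.array([len(d.rstrip(".").split(".")) for d in domains], dtype=float)
        tlds = {d.rstrip(".").split(".")[-1] for d in domains}
        return [lengths.mean(), lengths.std(),
                levels.mean(), levels.std(),
                float(len(tlds))]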
Graph Clustering (GC) Module 220.
As discussed above, NX domains may be clustered based on the relationships of the assets that query them. A sparse association matrix M may be created, where columns represent NX domains, and rows represent assets that query more than two NX domains over the course of an epoch T. M(i,j)=0 if asset i did not query NX domain j during epoch T. Conversely, M(i,j)=1×w if i did query j, where w may be a weight.
To assign weights to M to influence clustering, an approach analogous to tfc-weighting in text classification tasks may be used. (For more information on tfc-weighting, see K. Aas and L. Eikvil, Text Categorization: A Survey, 1999; and R. Feldman and J. Sanger, The Text Mining Handbook: Advanced Approaches in Analyzing Unstructured Data, Cambridge Univ. Press, 2007, which are both herein incorporated by reference.) The basic idea may revolve around a common property of document collections: the more documents that contain a given term, the less likely the term is representative of any one document. For example, in a representative sample of English documents, the definite article “the” likely appears in most of them and poorly describes document content. Conversely, the term “NASA” may appear in fewer documents, and probably only in documents whose content is related to the term. To adjust for this, the inverse document frequency, or idf, may be used to give a lower weight to terms that appear frequently across the document collection, and a higher weight to terms that are uncommon. In the case of NX domains, the NX domains may be considered the “documents” to be clustered, and the assets may be the “terms” used to generate weights. As such, the more NX domains an asset queries, the less likely the asset is representative of the NX domains it queries. Therefore, a weight w may be assigned to all assets such that w ~ 1/k, where k may be the number of NX domains the asset queries.
Once the weights are computed, a spectral clustering graph partitioning strategy may be followed on the association matrix M. This technique may require the computation of the first p=15 eigenvectors of M. This may create a p-dimensional vector for each of the NX domains in M. We may use the derived vectors and X-means to cluster the NX domains based on their asset association. We may obtain a set of clusters CIDS_GC(ti).
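For illustration, the sketch below builds the weighted asset-by-NX-domain matrix with w ~ 1/k weights, derives a low-dimensional vector per NX domain, and clusters those vectors. As assumptions not stated above, it uses the right singular vectors from a sparse SVD in place of the eigenvector computation, plain KMeans in place of X-means, and a fixed number of clusters.

    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.linalg import svds
    from sklearn.cluster import KMeans

    def graph_cluster(nx_by_asset, p=15, n_clusters=8):
        domains = sorted({d for ds in nx_by_asset.values() for d in ds})
        assets = [a for a, ds in nx_by_asset.items() if len(set(ds)) > 2]
        d_index = {d: j for j, d in enumerate(domains)}

        # Association matrix: rows = assets, columns = NX domains, entries = w ~ 1/k.
        M = lil_matrix((len(assets), len(domains)))
        for i, a in enumerate(assets):
            queried = set(nx_by_asset[a])
            w = 1.0 / len(queried)
            for d in queried:
                M[i, d_index[d]] = w

        # Low-rank embedding of the columns (NX domains); SVD right singular
        # vectors are used here as a stand-in for the eigenvector computation.
        k = min(p, min(M.shape) - 1)
        _, _, vt = svds(M.tocsr(), k=k)
        domain_vectors = vt.T  # one p-dimensional vector per NX domain

        labels = KMeans(n_clusters=min(n_clusters, len(domains)),
                        n_init=10, random_state=0).fit_predict(domain_vectors)
        return dict(zip(domains, labels))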
In some embodiments, the presented graph clustering technique may become less effective if a DGA generates a high volume of domain names daily while any DGA-infected asset queries only a very small subset of them, because this may significantly decrease the probability that an NX domain is looked up by two DGA-infected assets during the same epoch. On the other hand, a botnet based on such a DGA may come with a heavy toll for the botmaster, because it may significantly decrease the probability of establishing a C&C connection.
Daily Cluster Correlation Module 225.
After the completion of the XMC (by 215) and graph clustering (GC) (by 220), the daily cluster correlation module 225 may associate NX domains between the sub-clusters produced from both the XMC and graph clustering processes. This daily cluster correlation may be achieved by the intersection of tuples (e.g., NX domain names and the assets that generated them) between clusters derived from the name-based features clustering module 215 and clusters derived from the graph clustering module 220, during the same epoch. All possible daily correlation clusters that have at least one tuple in common may be created. The correlation process may then be defined as follows. Let DC_k(ti) be the k-th daily correlated cluster for epoch ti, formed from a cluster cid_XMC in CIDS_XMC(ti) and a cluster cid_GC in CIDS_GC(ti).
If a tuple m belongs to cid_XMC and also belongs to cid_GC, then m may be placed in DC_k(ti).
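A hedged sketch of this intersection step follows; it assumes each clustering output is represented as a mapping from cluster id to a set of (asset, NX domain) tuples, which is an illustrative data layout rather than the exact representation above.

    def daily_correlation(xmc_clusters, gc_clusters):
        # Form a correlated cluster from every (XMC cluster, GC cluster) pair
        # that shares at least one (asset, NX domain) tuple.
        correlated = {}
        for xid, x_tuples in xmc_clusters.items():
            for gid, g_tuples in gc_clusters.items():
                common = x_tuples & g_tuples
                if common:
                    correlated[(xid, gid)] = common
        return correlated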
Temporal Cluster Correlation Module 226.
After obtaining the correlated clusters of NX domains via the daily cluster correlation module 225, a rolling window of two consecutive epochs may be used to correlate temporal clusters that may include assets that consistently create NX domains that may tend to cluster together (according to the XMC (done by 215) and GC (done by 220)). The resulting temporally correlated clusters may be forwarded to the DGA filtering module 230. The process that gives us the temporally correlated clusters is defined below.
Let TC_k(ti) be the k-th temporal cluster for the rolling window ending at epoch ti. A temporal cluster TC_k(ti) may be formed from daily correlated clusters of two consecutive epochs, ti-1 and ti, that share assets whose NX domains cluster together in both epochs.
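The sketch below gives one hedged interpretation of this step: daily clusters from two consecutive epochs are linked when their asset sets overlap, and the linked tuples form a temporal cluster. Linking on shared assets (rather than shared domain names) is an assumption consistent with the description above, since a DGA's NX domains typically change from epoch to epoch.

    def temporal_correlation(clusters_prev, clusters_curr):
        # Each input maps cluster id -> set of (asset, nx_domain) tuples.
        temporal = {}
        for pid, p_tuples in clusters_prev.items():
            p_assets = {asset for asset, _ in p_tuples}
            for cid, c_tuples in clusters_curr.items():
                c_assets = {asset for asset, _ in c_tuples}
                if p_assets & c_assets:
                    temporal[(pid, cid)] = p_tuples | c_tuples
        return temporal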
DGA Filtering Module 230.
When the DGA filtering module 230 receives the new temporal clusters, the DGA filtering module 230 may break the NX domains in each cluster into random sets of cardinality α. Then, in some embodiments, using the same three feature families (e.g., NGF, EBF and SDF), the DGA filtering module 230 may compute vectors and classify them against the models built from the known DGA models. Each temporal cluster may be characterized using the label with the highest occurrence frequency among its vectors. The NX domains used to compile vectors that do not agree with the label with the highest occurrence may be discarded. The temporal cluster confidence may be computed as the average confidence over all vectors of the highest-occurrence label in the temporal cluster.
Clusters that are very similar (e.g., more than 98% similarity) to the already known DGA models may be excluded. Any temporal clusters that were characterized as benign may also be discarded. All remaining temporal clusters that do not fit any known model may be candidates for compiling new DGA models (see 335 of
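As a hedged sketch of the filtering logic, the code below breaks a temporal cluster into random sets of α domains, classifies the resulting vectors, labels the cluster by majority vote, and prunes clusters that are benign or whose average confidence for a known label exceeds 0.98. Treating the 98% similarity test as a classifier-confidence threshold, and the class name "benign", are assumptions; the classifier is assumed to expose the scikit-learn predict/predict_proba interface.

    import random
    from collections import Counter
    import numpy as np

    def filter_temporal_cluster(nx_domains, alpha, compute_features, classifier):
        domains = list(nx_domains)
        random.shuffle(domains)
        groups = [domains[i:i + alpha]
                  for i in range(0, len(domains) - alpha + 1, alpha)]
        if not groups:
            return None
        X = np.array([compute_features(g) for g in groups])
        labels = classifier.predict(X)
        confidences = classifier.predict_proba(X).max(axis=1)

        majority = Counter(labels).most_common(1)[0][0]
        keep = labels == majority
        cluster_conf = confidences[keep].mean()

        # Prune benign clusters and near matches of already modeled DGAs.
        if majority == "benign" or cluster_conf > 0.98:
            return None
        kept_domains = [d for g, k in zip(groups, keep) if k for d in g]
        return majority, cluster_conf, kept_domains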
DGA Bot Identification Module 210.
As set forth above, the DGA bot identification module 210 may comprise a DGA modeling module 240 and/or a DGA classifier module 235. These modules are described in detail below.
DGA Classifier Module 235.
The DGA classifier module 235 may utilize a Random Forest meta-classification system. (The following reference, which is herein incorporated by reference, provides more information on the Random Forest meta-classification system: http://en.wikipedia.org/wiki/Random_forest.) This DGA classifier module 235 may be able to model statistical properties of sets of NX domains that share some properties. The models may be derived from training datasets, which may be composed using sets of size α of domain names to compile vectors using the NGF, EBF and SDF statistical feature families. The DGA classifier module 235 may successfully model all known malware DGAs (e.g., Conficker, Murofet, etc.). Furthermore, the DGA classifier module 235 may clearly differentiate them from domain names in the top Alexa rankings. In some embodiments, the DGA classifier module 235 may operate as follows: First, vectors from each DGA family (or class) may need to be populated. To do so, NX domains from each class may be grouped into sets of α sequential domains. Using the statistical features from the families NGF, EBF and SDF, a single vector may be computed for each set of α domain names. Then, a tree-based statistical classifier may be trained against all available classes of DGAs and against the benign class from the Alexa domain names. Once the DGA classifier module 235 is trained with a balanced number of vectors for each class, we may proceed with classification of α sequential NX domains (per asset) as they arrive from the ISP sensor.
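A minimal sketch of this training procedure follows, using scikit-learn's RandomForestClassifier as a stand-in for the Random Forest meta-classification system referenced above; compute_features() again stands in for the NGF/EBF/SDF vector computation, and the class_weight setting is an assumed way of approximating balanced classes.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def build_training_set(labeled_domains, alpha, compute_features):
        # labeled_domains: {class_name: [domain, ...]}, e.g. "benign", "DGA-Conficker.C".
        X, y = [], []
        for cls, domains in labeled_domains.items():
            for i in range(0, len(domains) - alpha + 1, alpha):
                X.append(compute_features(domains[i:i + alpha]))
                y.append(cls)
        return np.array(X), np.array(y)

    def train_dga_classifier(labeled_domains, alpha, compute_features):
        X, y = build_training_set(labeled_domains, alpha, compute_features)
        clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                     random_state=0)
        return clf.fit(X, y)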
DGA Modeling Module 240.
The temporal clusters that do not fit any known model, according to the confidences that the DGA classifier module 235 assigns to them, may be candidates for the DGA modeling module 240. One goal of the DGA modeling module 240 may be to harvest as many NX domains as possible per temporal cluster. Another goal may be to use the collected NX domains to model the new DGAs. To achieve this, the NX domains from the assets that comprise the temporal cluster that is a candidate for DGA modeling may be aggregated. The set of NX domains used to build the model may be all the NX domains in the tuples within the temporal cluster. Using the statistical features from the NGF, EBF and SDF families, and, as a seed, the NX domains derived from the assets in the temporal cluster, the training dataset for each new DGA variant may be compiled. A new model may then be trained from this dataset using the same Random Forest meta-classification system used in the DGA classifier module 235. Once the new models are built, they may be pushed to the DGA classifier module 235, and network assets may be classified against the newly discovered DGAs. A classification report may be provided per asset as its sets of α NX domains gradually get classified.
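For illustration, the short sketch below harvests the NX domains from a temporal cluster's (asset, NX domain) tuples, labels them as a new DGA class, and retrains the classifier with the train_dga_classifier() sketch above; the new class name is purely illustrative.

    def model_new_dga(temporal_cluster_tuples, known_labeled_domains,
                      alpha, compute_features, new_class="DGA-UnknownBot-new"):
        # Aggregate the NX domains in the cluster's (asset, nx_domain) tuples,
        # add them as a new class, and retrain the Random Forest classifier.
        harvested = [d for _, d in temporal_cluster_tuples]
        labeled = dict(known_labeled_domains)
        labeled[new_class] = harvested
        return train_dga_classifier(labeled, alpha, compute_features)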
Once the clustering and cluster correlation is done, the results may include groups of NX domains, where each group may contain domains that are likely to have been automatically generated by the same DGA. Thus, for each of the NX domain clusters, it may be determined whether its domains are related to a newly discovered (e.g., never seen before) DGA or to a previously seen DGA. In 330, the DGA filtering module 230 may be used to distinguish between sets of NX domains generated by different known DGAs. If an NX domain cluster is found to match one of the previously modeled DGAs, it may be discarded (although it might also be used to further refine the supervised classifier), because at this stage the goal may be discovering new DGAs. After it has been determined in 335 that a cluster of NX domains is related to a new (unknown) DGA, in 340 this information may be sent to the DGA modeling module 240 of the DGA bot identification module 210, which may attempt to model the new (unknown) DGA(s). In 345, the newly modeled DGA may be pushed to the DGA classifier module 235, which may be retrained to account for the newly discovered domain generation pattern(s). In 350, the network assets may be classified against the previously known and new DGAs in the DGA classifier module 235. In 355, a classification report may be provided.
While various embodiments have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein without departing from the spirit and scope. In fact, after reading the above description, it will be apparent to one skilled in the relevant art(s) how to implement alternative embodiments. Thus, the present embodiments should not be limited by any of the above-described embodiments.
In addition, it should be understood that any figures which highlight the functionality and advantages are presented for example purposes only. The disclosed methodology and system are each sufficiently flexible and configurable such that they may be utilized in ways other than that shown.
Further, the purpose of any Abstract of the Disclosure is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. Any Abstract of the Disclosure is not intended to be limiting as to the scope of the present invention in any way.
It should also be noted that the terms “a”, “an”, “the”, “said”, etc. signify “at least one” or “the at least one” in the specification, claims and drawings. In addition, the term “comprises” signifies “including, but not limited to”.
Finally, it is the applicant's intent that only claims that include the express language “means for” or “step for” be interpreted under 35 U.S.C. 112, paragraph 6. Claims that do not expressly include the phrase “means for” or “step for” are not to be interpreted under 35 U.S.C. 112, paragraph 6.
This application claims the benefit of U.S. Patent Application No. 61/590,633, filed Jan. 25, 2012, which is incorporated by reference in its entirety. This application is related to U.S. patent application Ser. No. 12/985,140, filed Jan. 5, 2011, which is incorporated by reference in its entirety.