A device may be placed (e.g., below the local recursive domain name server (RDNS) or at the edge of a network) that may monitor DNS query/response messages from/to the computers within the network. For example, queries to domain names that result in non-existent domain (NX DOMAIN) response code (RCODE) messages (e.g., domain names for which no mapping to an IP address exists) may be analyzed. These types of unsuccessful domain name resolutions may be referred to as NX domains. To identify automatically generated domain names, NX application 105 may search for relatively large clusters of NX domains that have similar syntactic features and are queried by multiple (e.g., potentially compromised by DGA-enabled malware) computers during a given time period (e.g., one day). As multiple computers may be compromised with the same DGA-based bots, these computers may generate several DNS queries resulting in NX domain names, and many of these NX domain names may be queried by more than one computer. In addition, NX application 105 may identify and filter out naturally user-generated NX domain names (e.g., due to typos, etc.). Once NX application 105 finds a cluster of NX domain names, this cluster may represent a sample of domain names generated by a DGA. At this point, NX application 105 may apply statistical data mining techniques to build a model of the DGA that may be used to track compromised assets running the same DGA-bot (e.g., in real-time or near real-time).
The DGA discovery module 205 may analyze streams of NX domains generated by assets 115 located within a monitored network 125 to identify, in an unsupervised way, clusters of NX domains that are being automatically generated by a DGA. The NX domains generated by network users querying non-existent domain names may be collected during a given epoch E. Then, the collected NX domains may be clustered. The DGA discovery module 205 may comprise: a name-based features clustering module 215, a graph clustering module 220, a daily clustering correlation module 225, a temporal clustering correlation module 226 or a DGA filtering module 230, or any combination thereof.
The name-based features clustering module 215 may cluster NX domains if the domain name strings have similar statistical features (e.g., similar length, similar level of randomness (e.g., Table 1 of
After obtaining the correlated clusters of NX domains via the daily cluster correlation module 225, the temporal clustering correlation module 226 may utilize a rolling window of two consecutive epochs to correlate temporal clusters that may include assets that consistently create NX domains that tend to cluster together (e.g., according to the name-based features clustering and the graph clustering). The resulting temporally correlated clusters may be forwarded to the DGA filtering module 230. The DGA filtering module 230 may prune away clusters of NX domains that are already known, and therefore reduce the output to clusters that are composed of previously unseen and not yet modeled DGA NX domain traffic. In this way, modules 215, 220, 225, 226 and 230 may group together domain names that were likely automatically generated by the same algorithm running on multiple machines within the monitored network.
The DGA bot identification module 210 may analyze the assets collecting the NX domains. While the DGA discovery module 205 may consider collections of assets, the DGA bot identification module 210 may consider each asset separately. The DGA bot identification module 210 may comprise: a DGA classifier module 235 and/or a DGA modeling module 240. The DGA classifier module 235 may model DGA traffic and may use machine learning on NX domain data present in a knowledge base as well as reports of unknown DGAs arriving from the DGA discovery module 205. The clustering done by modules 215, 220, 225, 226 and 230 may contain groups of domain names that happen to be similar due to reasons other than that they were generated by the same algorithm (e.g., NX domains due to common typos or to misconfigured applications). Thus, the DGA classifier module 235 may be used as a filter that may help eliminate noisy clusters. The DGA classifier module 235 may prune NX domain clusters that appear to be generated by DGAs that have been previously discovered and modeled, or that contain domain names that are similar to popular legitimate domains. The DGA modeling module 240 may be responsible for two operations: the asset classification and the DGA modeling processes. Each asset may be classified on a daily basis using the stream of NX domains that it generates. These NX domains may be grouped in sets of α, and classified against models of known DGAs. The DGA classifier module 235 may be trained using the following sources of labeled data: a large list of popular legitimate domain names; NX domains collected by running known DGA-bot samples in a controlled environment; or clusters of NX domains output by the DGA discovery module 205 that undergo manual inspection and are confirmed to be related to yet-to-be-named botnets; or any combination thereof. For example, the DGA modeling module 240 may receive different sets of labeled domains (e.g., Legitimate, DGA-Bobax, DGA-Torpig/Sinowal, DGA-Conficker.C, DGA-UnknownBot-1, DGA-UnknownBot-2, etc.) and may output a new trained DGA classifier module 235.
The elements of the DGA discovery module 205 and the DGA bot identification module 210 have been set forth above. These are now described in more detail.
DGA Discovery Module 205.
The DGA discovery module 205 may comprise: a name-based features clustering module 215, a graph clustering module 220, a daily clustering correlation module 225, a temporal clustering correlation module 226 or a DGA filtering module 230, or any combination thereof. These modules are described in detail below.
Name-Based Features Clustering Module 215.
Using all the NX domains observed in an epoch, and using groups of α sequential NX domains per asset, multiple name-based features may be computed. For example, in one embodiment, multiple (e.g., 33) features from the following families may be computed: n-gram features (NGF); entropy-based features (EBF); or structural domain features (SDF); or any combination thereof. (These feature families are described in more detail below.) After computing all vectors in the epoch, X-means clustering (XMC) may be used to cluster vectors composed of α NX domains that share similar name-based statistical properties. In some embodiments, this name-based features clustering does not need to take into consideration the individual assets that generated the NX domains.
For example, assume that for epoch ti, XMC is run to obtain a set of clusters:

CIDS_ti^XMC = {cid_ti,1^XMC, . . . , cid_ti,β^XMC}

where CID may be a cluster identification; CIDS may be the set of all cluster IDs; cid_ti,k^XMC = {tup1, . . . , tupm} may be a set of m tuples tupp = <d, ip>, where d may be an NX domain and ip may be the IP address of the asset that generated d during epoch ti and ended up in the kth cluster; and β, where 1≤k≤β, may be defined as the final number of clusters that the XMC process produced after clustering the NX domains from the epoch ti.
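As a rough illustration of this clustering step, the following sketch groups each asset's NX domains into α-sized sets, computes a feature vector per set, and clusters the vectors. It assumes a compute_features(domains) helper (sketched in the statistical-features subsections below); because X-means is not part of scikit-learn, KMeans with a simple silhouette-based search over the number of clusters is used as a stand-in, so the cluster-count selection is illustrative rather than the actual XMC procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def xmc_clusters(nx_per_asset, compute_features, alpha=10, k_max=15):
    """Cluster alpha-sized groups of NX domains by name-based features (KMeans stand-in for X-means)."""
    groups = []                                   # (asset_ip, [alpha NX domains]) pairs for the epoch
    for ip, domains in nx_per_asset.items():
        for start in range(0, len(domains) - alpha + 1, alpha):
            groups.append((ip, domains[start:start + alpha]))
    X = np.array([compute_features(ds) for _, ds in groups])
    if len(X) < 3:                                # too few groups to search over the cluster count
        best_labels = np.zeros(len(X), dtype=int)
    else:
        best_labels, best_score = None, -1.0
        for k in range(2, min(k_max, len(X) - 1) + 1):
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
            score = silhouette_score(X, labels)
            if score > best_score:
                best_labels, best_score = labels, score
    clusters = {}                                 # cid_k: the (nx_domain, asset_ip) tuples in cluster k
    for (ip, ds), label in zip(groups, best_labels):
        clusters.setdefault(int(label), set()).update((d, ip) for d in ds)
    return clusters
```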
Statistical Features.
As a reference, the definitions and notation used for various example statistical features are described below. The statistical features may be referred to as name-based features. Name-based features may aim to model the statistical properties of domain name strings. The motivation behind these features may include the fact that DGAs typically take an input seed and generate domain name strings by following a deterministic algorithm. These domains may often have a number of similarities, such as similar length, similar levels of randomness, and a similar number of domain levels. Name-based statistical features may comprise: n-gram features (NGF); entropy-based features (EBF); or structural domain features (SDF); or any combination thereof.
Notation.
A domain name d comprises a set of labels separated by dots (e.g., www.example.com). The rightmost label may be called the top-level domain (TLD or TLD(d)) (e.g., com). The second-level domain (2LD or 2LD(d)) may represent the two rightmost labels separated by a period (e.g., example.com). The third-level domain (3LD or 3LD(d)) may contain the three rightmost labels (e.g., www.example.com). In some embodiments, only NX domains that have a valid TLD appearing in the ICANN TLD listing (e.g., http://www.icann.org/en/tlds/.) may be considered. A domain name d may be called an NX domain if it does not own any resource record (RR) of Type-A. In other words, d may be called an NX domain if it does not resolve to any IP address.
The IP address of an asset in a monitored network that relies on the network's local recursive DNS server (RDNS) to resolve domain names may be defined as IPj. (Note that the approach may be generalized to machines that use an external resolver (e.g., Google's Public DNS). This IP address could represent a single machine, a NAT or a proxy device.) The sequence of NX domains generated by IP address IPj (e.g., the IP address of the requester) during an epoch ti may be defined as N_ti,IPj = {d1, d2, . . . , dm}. In one embodiment, N_ti,IPj may be split into a number of subsequences NXk = {dr, dr+1, . . . , dr+α−1} of length α, where r = α(k−1)+1 and k = 1, 2, . . . , (m/α). Subscript k may indicate the kth subsequence of length α in the sequence of m NX domains N_ti,IPj.
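As a small, hedged illustration of this notation, the sketch below extracts the TLD, 2LD, and 3LD of a domain name and splits an asset's NX-domain sequence into subsequences NXk of length α; the helper names are illustrative, and a real implementation would also validate TLDs against the ICANN listing.

```python
def tld(d):
    """Rightmost label of a domain name, e.g. 'com' for www.example.com."""
    return d.lower().rstrip(".").split(".")[-1]

def two_ld(d):
    """Two rightmost labels, e.g. 'example.com'."""
    return ".".join(d.lower().rstrip(".").split(".")[-2:])

def three_ld(d):
    """Three rightmost labels, e.g. 'www.example.com'."""
    return ".".join(d.lower().rstrip(".").split(".")[-3:])

def split_subsequences(nx_domains, alpha):
    """Split one asset's NX-domain sequence into subsequences NX_k of length alpha."""
    return [nx_domains[r:r + alpha]
            for r in range(0, len(nx_domains) - alpha + 1, alpha)]
```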
Given a subsequence of NX domains NXk of length α generated during epoch ti, in some embodiments, three groups of name-based features may be extracted.
n-Gram Features (NGF).
Given a subsequence NXk of α NX domains, this group of features measures the frequency distribution of n-grams across the domain name strings, for a given value of n=1, . . . , 4. For example, for n=2, the frequency of each 2-gram may be computed. At this point, the median, average, and standard deviation of the obtained distribution of 2-gram frequency values may be computed, and thus three features may be obtained. In some embodiments, this may be done for each value of n=1, . . . , 4, thus producing 12 statistical features.
This group of features may try to capture whether the n-grams are uniformly distributed, or if some n-grams tend to have a much higher frequency than others. While this group of features by itself may not be enough to discriminate domains generated by DGAs from domains generated by human typos, when combined with other features, they may yield high classification accuracy.
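A minimal sketch of how such n-gram features might be computed for one group of NX domains follows; whether dots are stripped before extracting n-grams is not specified in the text and is an assumption here.

```python
from collections import Counter
import numpy as np

def ngram_features(domains, n_values=(1, 2, 3, 4)):
    """12 features: median, mean, and std of n-gram frequencies across the group, for n = 1..4."""
    feats = []
    for n in n_values:
        counts = Counter()
        for d in domains:
            s = d.lower().replace(".", "")          # assumption: dots removed before n-gram extraction
            counts.update(s[i:i + n] for i in range(len(s) - n + 1))
        freqs = np.array(list(counts.values()), dtype=float)
        feats.extend([float(np.median(freqs)), float(freqs.mean()), float(freqs.std())])
    return feats
```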
Entropy-Based Features (EBF).
This group of features may compute the entropy of the character distribution for separate domain levels. For example, the character entropy for the 2LDs and 3LDs extracted from the domains in NXk may be separately computed. For example, consider a set NXk of α domains. First, the 2LD of each domain di∈NXk may be extracted, and for each domain the entropy H(2LD(di)) of the characters of its 2LD may be computed. Then, the average and standard deviation of the set of values {H(2LD(di))}i=1 . . . α may be computed. This may be repeated for the 3LDs and for the overall domain name strings. At each level, the period characters separating the levels may be removed. Overall, six features based on character entropy may be created.
This group of features may determine the level of randomness for the domains in NXk, and may be motivated by the fact that a number of DGAs used by botnets may produce random-looking domain name strings. These features may carry a signal that contributes to distinguishing between NX domains due to certain DGAs and legitimate NX domains related to typos.
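A sketch of these entropy-based features follows, reusing the two_ld and three_ld helpers from the Notation sketch above; the exact handling of the period characters is taken from the text, while the function names are illustrative.

```python
import math
from collections import Counter
import numpy as np

def char_entropy(s):
    """Shannon entropy (bits) of the character distribution of a string."""
    if not s:
        return 0.0
    counts = Counter(s)
    n = float(len(s))
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_features(domains):
    """6 features: mean and std of character entropy at the 2LD, 3LD, and full-name levels."""
    feats = []
    for level in (two_ld, three_ld, lambda d: d):    # helpers from the Notation sketch
        values = [char_entropy(level(d).replace(".", "")) for d in domains]  # dots removed per level
        feats.extend([float(np.mean(values)), float(np.std(values))])
    return feats
```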
Structural Domain Features (SDF).
This group of features may be used to summarize information about the structure of the NX domains in NXk, such as their length, the number of unique TLDs, and the number of domain levels. In some embodiments, 14 features may be computed. Given the set of domains NXk, the average, median, standard deviation, and variance may be computed for: the length of the domain names (adding up to four features) and the number of domain levels (adding up to four features). In addition, the number of distinct characters that appear in these NX domains may be computed (one feature). The number of distinct TLDs (one feature) and the ratio between the number of domains under the .com TLD and the number of NX domains that use other TLDs (one feature) may also be computed. In addition, the average, median and standard deviation of the occurrence frequency distribution for the different TLDs (adding up to three features) may also be computed. Note that some or all of these features may be computed in some embodiments. Additional features may also be computed with none, some or all of these features in other embodiments.
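The following sketch computes one plausible arrangement of these 14 structural features, reusing the tld helper from the Notation sketch; the feature ordering and the handling of the .com ratio when no other TLDs are present are assumptions.

```python
from collections import Counter
import numpy as np

def structural_features(domains):
    """14 features: statistics of name length, domain levels, characters, and TLD usage."""
    lengths = np.array([len(d) for d in domains], dtype=float)
    levels = np.array([len(d.split(".")) for d in domains], dtype=float)
    tlds = [tld(d) for d in domains]                          # helper from the Notation sketch
    tld_counts = np.array(list(Counter(tlds).values()), dtype=float)
    com_ratio = sum(t == "com" for t in tlds) / max(1, sum(t != "com" for t in tlds))
    distinct_chars = len(set("".join(domains).replace(".", "")))
    return [
        lengths.mean(), float(np.median(lengths)), lengths.std(), lengths.var(),  # 4: name length
        levels.mean(), float(np.median(levels)), levels.std(), levels.var(),      # 4: domain levels
        float(distinct_chars),                                                    # 1: distinct characters
        float(len(set(tlds))),                                                    # 1: distinct TLDs
        float(com_ratio),                                                         # 1: .com vs other TLDs
        tld_counts.mean(), float(np.median(tld_counts)), tld_counts.std(),        # 3: TLD frequencies
    ]
```

A complete name-based vector for a group NXk could then be the concatenation of ngram_features, entropy_features, and structural_features; this combined function is what the other sketches in this section refer to as compute_features.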
Graph Clustering (GC) Module 220.
As discussed above, NX domains may be clustered based on the relationships of the assets that query them. A sparse association matrix M may be created, where columns represent NX domains, and rows represent assets that query more than two NX domains over the course of an epoch T. Mi,j=0 if asset i did not query NX domain j during epoch T. Conversely, Mi,j=1×w if i did query j, where w may be a weight.
To assign weights to M to influence clustering, an approach analogous to tfc-weighting in text classification tasks may be used. (For more information on tfc-weighting, see K. Aas and L. Eikvil, Text Categorization: A Survey, 1999; and R. Feldman and J. Sanger, The Text Mining Handbook: Advanced Approaches in Analyzing Unstructured Data, Cambridge Univ. Pr. 2007, which are both herein incorporated by reference.) The basic idea may revolve around a common property of document collections: the more documents contain a given term, the less likely the term is representative of a document. For example, in a representative sample of English documents, the definite article “the” likely appears in most of them and poorly describes document content. Conversely, the term “NASA” may appear in fewer documents and probably only in documents whose content is related to the term. To adjust for this, the inverse document frequency, or idf, may be used to give a lower weight to terms that appear frequently across the document collection, and a higher weight to terms that are uncommon. In the example of NX domains, the NX domains may be considered the “documents” to be clustered, and the assets may be the “terms” used to generate weights. As such, the more NX domains are queried by an asset, the less likely the asset is representative of the NX domains it queries. Therefore, a weight w may be assigned to all assets such that w˜1/k, where k may be the number of NX domains the asset queries.
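A hedged sketch of building the weighted association matrix M follows, assuming the epoch's DNS observations are available as (asset_ip, nx_domain) pairs; the text only states w˜1/k, so the plain reciprocal used here is an assumption.

```python
from collections import defaultdict
from scipy.sparse import lil_matrix

def build_association_matrix(query_pairs):
    """Rows = assets querying more than two NX domains, columns = NX domains, entries = 1 x w, w ~ 1/k."""
    domains_by_asset = defaultdict(set)
    for ip, d in query_pairs:
        domains_by_asset[ip].add(d)
    assets = [ip for ip, ds in domains_by_asset.items() if len(ds) > 2]
    domains = sorted({d for ip in assets for d in domains_by_asset[ip]})
    col = {d: j for j, d in enumerate(domains)}
    M = lil_matrix((len(assets), len(domains)))
    for i, ip in enumerate(assets):
        w = 1.0 / len(domains_by_asset[ip])   # idf-like weight: more queries -> less representative asset
        for d in domains_by_asset[ip]:
            M[i, col[d]] = w
    return M.tocsr(), assets, domains
```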
Once the weights are computed, a spectral clustering graph partitioning strategy may be followed on the association matrix M. This technique may require the computation of the first p=15 eigenvectors of M. This may create a p-dimensional vector for each of the NX domains in M. The derived vectors and X-means may be used to cluster the NX domains based on their asset association. A set of clusters CIDS_ti^GC = {cid_ti,1^GC, . . . , cid_ti,γ^GC} may be obtained, where cid_ti,k^GC = {tup1, . . . , tupm} may be a set of m tuples, tupp = <d, ip>, d may be an NX domain name, and ip may be the IP address of the asset that generated d during epoch ti and ended up in the kth cluster. In addition, γ, where 1≤k≤γ, may be the final number of clusters produced by the XMC process.
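A minimal sketch of this spectral step under the same assumptions: since M is generally rectangular, the top-p right singular vectors are used here in place of "the first p eigenvectors of M", and KMeans with a fixed cluster count stands in for X-means, so both choices are illustrative rather than the described procedure.

```python
import numpy as np
from scipy.sparse.linalg import svds
from sklearn.cluster import KMeans

def graph_cluster_domains(M, domains, p=15, n_clusters=10):
    """Embed each NX domain (column of M) with the top-p singular vectors, then cluster the embeddings."""
    k = min(p, min(M.shape) - 1)
    _, _, vt = svds(M, k=k)                    # vt has shape (k, number of NX domains)
    embedding = vt.T                           # one k-dimensional vector per NX domain
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embedding)
    clusters = {}
    for d, label in zip(domains, labels):
        clusters.setdefault(int(label), set()).add(d)
    return clusters
```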
In some embodiments, the presented graph clustering technique may become less effective if a DGA generates a high volume of domain names daily while any DGA-infected asset only queries a very small subset of them, because this may significantly decrease the probability that a NX domain is looked up by two DGA-infected assets under the same epoch. On the other hand, a botnet based on such a DGA may come with a heavy toll for the botmaster because this botnet may significantly decrease the probability of establishing a C&C connection.
Daily Cluster Correlation Module 225.
After the completion of the XMC (by 215) and graph clustering (GC) (by 220), the daily cluster correlation module 225 may associate NX domains between the sub-clusters produced from both the XMC and graph clustering processes. This daily cluster correlation may be achieved by the intersection of tuples (e.g., NX domain names and the assets that generated them) between clusters derived from the name-based features clustering module 215 and clusters derived from the graph clustering module 220, during the same epoch. All possible daily correlation clusters that have at least one tuple in common may be created. The correlation process may then be defined. Let DC_ti,z = {tup1, tup2, . . . , tupm} be the daily correlation cluster where the Jaccard similarity index between the tuples of cid_ti,k^XMC and cid_ti,j^GC may be greater than θ=0, during the epoch ti, for the kth cluster in the set CIDS_ti^XMC and the jth cluster in the set CIDS_ti^GC.
If m tuples are present in both cid_ti,k^XMC and cid_ti,j^GC, then cid_ti,k^XMC and cid_ti,j^GC may be correlated in epoch ti with m tuples in common, and may produce a new correlation cluster referred to as DC_ti,z. This process may be repeated between all clusters in CIDS_ti^XMC and CIDS_ti^GC. The set with all daily correlated clusters may be DCC_ti = ∪z DC_ti,z. The value of m may be bounded by the number of tuples in each of the cid_ti,k^XMC and cid_ti,j^GC clusters. However, the value of z may have an upper bound μ. This bound may be at most the maximum possible number of daily correlated clusters, μ≤β×γ, where β may be the size of the set CIDS_ti^XMC and γ the size of the set CIDS_ti^GC. In practice, the value of μ may be significantly lower than β×γ because not all clusters may be correlated to each other.
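A sketch of the daily correlation step, assuming each cid cluster is represented as a set of (nx_domain, asset_ip) tuples; with θ=0 as in the text, any non-empty overlap correlates two clusters, and the resulting DC cluster is taken here to be the shared tuples.

```python
def jaccard(a, b):
    """Jaccard similarity index between two sets."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def daily_correlation(cids_xmc, cids_gc, theta=0.0):
    """Correlate name-based (XMC) clusters with graph (GC) clusters that share tuples in one epoch."""
    dcc = []
    for cid_x in cids_xmc:                     # each cid is a set of (nx_domain, asset_ip) tuples
        for cid_g in cids_gc:
            common = cid_x & cid_g
            if common and jaccard(cid_x, cid_g) > theta:
                dcc.append(common)             # DC_ti,z: the tuples the two clusters have in common
    return dcc
```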
Temporal Cluster Correlation Module 226.
After obtaining the correlated clusters of NX domains via the daily cluster correlation module 225, a rolling window of two consecutive epochs may be used to correlate temporal clusters that may include assets that consistently create NX domains that may tend to cluster together (according to the XMC (done by 215) and GC (done by 220)). The resulting temporally correlated clusters may be forwarded to the DGA filtering module 230. The process that gives us the temporally correlated clusters is defined below.
Let TC_ti,w = {tup1, tup2, . . . , tupn} be the temporal correlation cluster, which may be derived in the epoch ti after the correlation of the following two clusters: DC_ti,z and DC_ti−1,y, where DC_ti,z∈DCC_ti and DC_ti−1,y∈DCC_ti−1. The n tuples within the TC_ti,w cluster may already be found in daily correlated clusters created in the last two consecutive epochs, DCC_ti and DCC_ti−1. To derive the tuples within TC_ti,w, a set A may be selected that contains the IP addresses (or assets) in the tuples from the cluster DC_ti,z. Using the same logic, a set B may be collected with all the IP addresses in the tuples from the cluster DC_ti−1,y. The Jaccard similarity index may then be computed between the sets A and B. (The following reference, which is herein incorporated by reference, provides more information on the Jaccard similarity index: http://en.wikipedia.org/wiki/Jaccard_index.) In some embodiments, the Jaccard similarity index must exceed the temporal correlation threshold τ (in our case τ=0) to form the temporal cluster TC_ti,w. To assemble the tuples in the temporal cluster, the set of IP addresses C=A∩B may be extracted. Using the IP addresses in the set C, the set of tuples may be aggregated by collecting all tuples in DC_ti,z and DC_ti−1,y that contain IP addresses in the set C. Finally, the temporal cluster may be formed if more than α distinct NX domains are derived from the tuples collected. At least α unique NX domains may be needed to characterize the temporal cluster. This process may be repeated between all daily correlated clusters in DCC_ti and DCC_ti−1. All temporally correlated clusters may be forwarded to the DGA filtering module 230.
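A sketch of the temporal correlation over two consecutive epochs, under the same tuple representation and reusing the jaccard helper above; τ and α keep the roles named in the text, but the exact tuple-aggregation details are assumptions.

```python
def temporal_correlation(dcc_curr, dcc_prev, tau=0.0, alpha=10):
    """Correlate daily clusters of epoch t_i with those of epoch t_i-1 through shared assets."""
    temporal_clusters = []
    for dc_curr in dcc_curr:
        a = {ip for _, ip in dc_curr}                    # assets in the current epoch's DC cluster
        for dc_prev in dcc_prev:
            b = {ip for _, ip in dc_prev}                # assets in the previous epoch's DC cluster
            if jaccard(a, b) <= tau:                     # must exceed the temporal threshold tau
                continue
            shared = a & b
            tuples = {t for t in dc_curr | dc_prev if t[1] in shared}
            if len({d for d, _ in tuples}) > alpha:      # need more than alpha distinct NX domains
                temporal_clusters.append(tuples)
    return temporal_clusters
```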
DGA Filtering Module 230.
When the DGA filtering module 230 receives the new temporal clusters, the DGA filtering module 230 breaks the NX domains in each cluster into random sets of cardinality α. Then, in some embodiments, using the same three feature families (e.g., NGF, EBF and SDF), the DGA filtering module 230 may compute vectors and classify them against the models built for the known DGAs. Each temporal cluster may be characterized using the label with the highest occurrence frequency after the characterization process. The NX domains used to compile vectors that do not agree with the label with the highest occurrence may be discarded. The temporal cluster confidence may be computed as the average confidence over all vectors of the highest occurrence label in the temporal cluster.
Clusters that are very similar (more than 98% similarity) to the already known DGA models may be excluded. Any temporal clusters that were characterized as benign may also be discarded. All remaining temporal clusters that do not fit any known model may be candidates for compiling new DGA models (see 335 of
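A hedged sketch of this filtering step, assuming a trained scikit-learn-style classifier (predict_proba and classes_) and the compute_features helper from the feature sketches; treating a high-confidence known-DGA label as the "more than 98% similarity" exclusion is an approximation of the described behavior, and the "benign" label name is a placeholder.

```python
import random
import numpy as np

def filter_temporal_cluster(cluster_tuples, classifier, compute_features, alpha=10):
    """Label a temporal cluster by majority vote over its alpha-sized vectors; None means 'discard'."""
    domains = list({d for d, _ in cluster_tuples})
    random.shuffle(domains)                                     # break into random sets of cardinality alpha
    groups = [domains[i:i + alpha] for i in range(0, len(domains) - alpha + 1, alpha)]
    if not groups:
        return None
    probs = classifier.predict_proba(np.array([compute_features(g) for g in groups]))
    preds = probs.argmax(axis=1)
    majority = int(np.bincount(preds).argmax())                 # label with the highest occurrence
    confidence = float(probs[preds == majority, majority].mean())
    label = classifier.classes_[majority]
    if label == "benign" or confidence > 0.98:                  # benign, or very close to a known DGA model
        return None
    return label, confidence                                    # candidate for new DGA modeling
```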
DGA Bot Identification Module 210.
As set forth above, the DGA bot identification module 210 may comprise a DGA modeling module 240 and/or a DGA classifier module 235. These modules are described in detail below.
DGA Classifier Module 235.
The DGA classifier module 235 may utilize a Random Forest meta-classification system. (The following reference, which is herein incorporated by reference, provides more information on the random forest meta-classification system: http://en.wikipedia.org/wiki/Random_forest.) This DGA classifier module 235 may be able to model statistical properties of sets of NX domains that share some properties. The models may be derived from training datasets, which may be composed using sets of size α of domain names to compile vectors using the NGF, EBF and SDF statistical feature families. The DGA classifier module 235 may successfully model all known malware DGAs (e.g., Conficker, Murofet, etc.). Furthermore, the DGA classifier module 235 may clearly differentiate them from domain names in the top Alexa rankings. In some embodiments, the DGA classifier module 235 may operate as follows: First, vectors from each DGA family (or class) may need to be populated. To do so, NX domains from each class may be grouped into sets of α sequential domains. Using the statistical features from the families NGF, EBF and SDF, a single vector for the set of α domain names may be computed. Then, a tree-based statistical classifier may be trained against all available classes of DGAs and against the benign class from the Alexa domain names. Once the DGA classifier module 235 is trained with a balanced number of vectors for each class, classification of α sequential NX domains (per asset) may proceed as they arrive from the ISP sensor.
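A minimal training sketch using scikit-learn's RandomForestClassifier, assuming each class (e.g., benign Alexa domains and each known DGA) is provided as a list of domain names and reusing the feature helpers sketched above; the class labels, α, and forest size are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def vectors_for_class(domains, compute_features, alpha=10):
    """Group a class's domain names into alpha-sized sets and compute one feature vector per set."""
    return [compute_features(domains[i:i + alpha])
            for i in range(0, len(domains) - alpha + 1, alpha)]

def train_dga_classifier(labeled_domains, compute_features, alpha=10):
    """labeled_domains: dict mapping class label (e.g. 'benign', 'conficker') -> list of domain names."""
    X, y = [], []
    for label, domains in labeled_domains.items():
        vectors = vectors_for_class(domains, compute_features, alpha)
        X.extend(vectors)
        y.extend([label] * len(vectors))
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(np.array(X), np.array(y))
    return clf
```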
DGA Modeling Module 240.
The temporal clusters that do not fit any known model, according to the confidences that the DGA classifier module 235 assigns to them, may be candidates for the DGA modeling module 240. One goal of the DGA modeling module 240 may be to harvest as many NX domains as possible per temporal cluster. Another goal may be to use the collected NX domains to model the new DGAs. To achieve this, the NX domains from the assets that comprise the temporal cluster that may be a candidate for DGA modeling may be aggregated. The set of NX domains used to build the model may be all the NX domains in the tuples within the temporal cluster. Using the statistical features from the NGF, EBF and SDF families, and, as a seed, the NX domains derived from the assets in the temporal cluster, the training dataset for each new DGA variant may be compiled. The new training dataset may be compiled again using the same Random Forest meta-classification system used in the DGA classifier module 235. Once the new models are built, the models may be pushed to the DGA classifier module 235 and network assets may be classified against the newly discovered DGAs. A classification report per asset may be provided as the sets of α NX domains per asset gradually get classified.
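Continuing the same sketch, modeling a newly discovered DGA could amount to adding its harvested NX domains as a new class and retraining the classifier; the class name below is a placeholder.

```python
def model_new_dga(labeled_domains, new_cluster_tuples, compute_features, name="DGA-UnknownBot-1"):
    """Add the NX domains of a new temporal cluster as a new class and retrain the classifier."""
    labeled = dict(labeled_domains)
    labeled[name] = sorted({d for d, _ in new_cluster_tuples})
    return train_dga_classifier(labeled, compute_features)      # helper from the classifier sketch above
```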
Once the clustering and cluster correlation are done, the results may include groups of NX domains, where each group of NX domains may contain domains that are likely to be automatically generated by the same DGA. Thus, for each of the NX domain clusters, it may be determined whether its domains are related to a newly discovered (e.g., never seen before) DGA, or a previously seen DGA. In 330, the DGA filtering module 230 may be used to distinguish between sets of NX domains generated by different known DGAs. If an NX domain cluster is found to match one of the previously modeled DGAs, it may be discarded (although it might also be used to further refine the supervised classifier) because at this stage the focus may be on discovering new DGAs. After it has been determined in 335 that a cluster of NX domains is related to a new (unknown) DGA, in 340 this information may be sent to the DGA modeling module 240 of the DGA bot identification module 210, which may attempt to model the new (unknown) DGA(s). In 345, the newly modeled DGA may be pushed to the DGA classifier module 235, which may be retrained to account for the newly discovered domain generation pattern(s). In 350, the network assets may be classified against the previously known and new DGAs in the DGA classifier module 235. In 355, a classification report may be provided.
While various embodiments have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein without departing from the spirit and scope. In fact, after reading the above description, it will be apparent to one skilled in the relevant art(s) how to implement alternative embodiments. Thus, the present embodiments should not be limited by any of the above-described embodiments.
In addition, it should be understood that any figures which highlight the functionality and advantages, are presented for example purposes only. The disclosed methodology and system are each sufficiently flexible and configurable, such that it may be utilized in ways other than that shown.
Further, the purpose of any Abstract of the Disclosure is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. Any Abstract of the Disclosure is not intended to be limiting as to the scope of the present invention in any way.
It should also be noted that the terms “a”, “an”, “the”, “said”, etc. signify “at least one” or “the at least one” in the specification, claims and drawings. In addition, the term “comprises” signifies “including, but not limited to”.
Finally, it is the applicant's intent that only claims that include the express language “means for” or “step for” be interpreted under 35 U.S.C. 112, paragraph 6. Claims that do not expressly include the phrase “means for” or “step for” are not to be interpreted under 35 U.S.C. 112, paragraph 6.
This application claims the benefit of U.S. patent application Ser. No. 61/590,633, filed Jan. 25, 2012, which is incorporated by reference in its entirety. This application is related to U.S. patent application Ser. No. 12/985,140, filed Jan. 5, 2011, which is incorporated by reference in its entirety.