This disclosure relates in general to the field of network security, and more particularly, to a system and a method for botnet detection by comprehensive behavioral analysis of electronic mail.
The field of network security has become increasingly important in today's society. The Internet has enabled interconnection of different computer networks all over the world. The ability to effectively protect and maintain stable computers and systems, however, presents a significant obstacle for component manufacturers, system designers, and network operators. This obstacle is made even more complicated due to the continually evolving array of tactics exploited by malicious operators. Of particular concern more recently are botnets, which may be used for a wide variety of malicious purposes. Once malicious software (e.g., a bot) has infected a host computer, a malicious operator may issue commands from a “command and control server” to control the bot. Bots can be instructed to perform any number of malicious actions such as, for example, sending out spam or malicious emails from the host computer, stealing sensitive information from a business or individual associated with the host computer, propagating the botnet to other host computers, and/or assisting with distributed denial of service attacks. In addition, a malicious operator can sell or otherwise give other malicious operators access to a botnet through the command and control servers, thereby escalating the exploitation of the host computers. Consequently, botnets provide a powerful way for malicious operators to access other computers and to manipulate those computers for any number of malicious purposes. Security professionals need to develop innovative tools to combat such tactics that allow malicious operators to exploit computers.
To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:
A method is provided in one example embodiment that includes receiving message sender traits associated with email senders, and receiving a dataset of known malware identifiers and network addresses from a spamtrap. The message sender traits may include behavior features and/or content resemblance factors in various embodiments. The method further includes classifying the email senders as malicious or benign based on the behavior features, and further classifying the malicious senders by malware identifiers based on similarity of content resemblance factors and the dataset of known malware identifiers and network addresses. In certain specific embodiments, a supervised classifier, such as a support vector machine, may be used to classify the malicious senders by malware identifiers. In yet other particular embodiments, the content resemblance factors may be message fingerprints, and the behavior features may indicate the message distribution and delivery speed of each email sender. Noisy feature elements and feature elements originating from a relatively small number of email senders may also be pruned from content resemblance factors in some embodiments.
Turning to
Each of the elements of
Before detailing the operations and the infrastructure of
Botnets have become a serious Internet security problem. In many cases they employ sophisticated attack schemes that include a combination of well-known and new vulnerabilities. Usually, a botnet is composed of a large number of bots that are controlled through various channels, including Internet Relay Chat (IRC) and peer-to-peer (P2P) communication, by a particular botmaster using a C&C protocol. Once machines are exploited and become bots, they are often used to commit Internet crimes such as sending spam, launching DDoS attacks, conducting phishing attacks, etc.
Botnet attacks generally follow the same lifecycle. First, desktop computers are compromised by malware, often through drive-by downloads, Trojans, or unpatched vulnerabilities. The term “malware” generally includes any software designed to access and/or control a computer without the informed consent of the computer owner, and is most commonly used as a label for any hostile, intrusive, or annoying software such as a computer virus, spyware, adware, etc. Once compromised, the computers may then be subverted into bots, giving a botmaster control over them. The botmaster may then use these computers for malicious activity, such as spamming.
A real-time botnet tracking system can prevent attacks originating from botnets, or at least reduce the risk of exploits from malicious contact. It can also provide researchers with a valuable behavioral history of botnet IPs.
Under certain circumstances, internal activities of botnets may be observed to understand how they operate. For example, a botnet may be observed by taking over C&C channels and intercepting communications between bots and their C&C server. Such approaches, however, often require botnet related malware binaries to be installed and run in a sandboxed environment so that analysis can be performed securely. Moreover, active botnets can be very difficult to infiltrate and their protocols can change frequently. Thus, this approach can be very complex and time consuming, and generally is not able to provide comprehensive information on the numerous botnets that are active globally at any given time.
Much can also be learned from observing and analyzing the external behavior of botnets. This approach may be used to study different kinds of attack patterns. For example, it can be used to discover spam email sending patterns, correlation between inbound and outbound email, clustering of both TCP level traffic and application level traffic, etc.
These approaches are often confined to a local network, because building a distributed environment and minimizing the liability of potential harm to the rest of the Internet can require tremendous resources. Thus, at least within a short term, it is difficult to achieve a global visibility of botnet behavior using these approaches.
In accordance with one embodiment, network environment 10 can overcome these shortcomings (and others) by providing comprehensive behavioral analysis of email. A host's botnet membership may be inferred based on the host's behavior as observed from its email traffic patterns. The email traffic is observed from a network of email sensors, which may be deployed in EAs or other network elements throughout the Internet. The email traffic information may be aggregated and correlated to indicate the existence and the territory of various botnets.
Message sender traits, including behavioral features and content resemblance, can be captured in email traffic traces for effective email sender and botnet classification. To capture email sender behavior, EAs can record email SIPs, DIPs, time stamps, and other data when email arrives. Based on the recorded information, behavior features can be extracted. The types of behavior features that can be extracted may vary based on data available from external network infrastructure, but may include, for example, the number of DIPs to which a SIP sends messages, the number of messages that one SIP sends, the message sizes from a SIP, etc. With an appropriate classifier, behavioral analysis of this traffic may be used to classify each bot into specific botnets without detailed information about the botnet or any prior knowledge of any C&C communication, based on a comparison of sending behavior. For example, sending behavior of bot host 30a and bot host 30b may be compared based on data collected by different EAs, such as EAs 20a and 20c. If bot host 30b exhibits sending behavior similar to bot host 30a, then both may be attributed to the same botnet. Classifiers may include, for example, support vector machines (SVMs), decision trees, decision forests, or neural networks.
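By way of illustration only, the aggregation of per-SIP behavior features from raw email traces can be sketched as follows. The trace tuple layout and feature names are hypothetical and not tied to any particular EA implementation.

```python
# Illustrative sketch: aggregate raw email traces into per-SIP behavior
# features. Traces are assumed to be (sip, dip, timestamp, size) tuples;
# these field names are hypothetical.
from collections import defaultdict

def extract_behavior_features(traces):
    """Return, for each SIP, the number of distinct DIPs contacted,
    the total message count, and the mean message size."""
    dips = defaultdict(set)
    counts = defaultdict(int)
    sizes = defaultdict(int)
    for sip, dip, ts, size in traces:
        dips[sip].add(dip)
        counts[sip] += 1
        sizes[sip] += size
    return {
        sip: {
            "num_dips": len(dips[sip]),
            "num_messages": counts[sip],
            "mean_size": sizes[sip] / counts[sip],
        }
        for sip in counts
    }
```

Feature vectors of this kind could then be fed to any of the classifiers mentioned above.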
Behavioral analysis may be extended further to include a resemblance factor of message content with a message transformation algorithm. A content resemblance factor may be used to infer similarity between two messages originating from the same botnet while protecting the privacy of legitimate messages. Message fingerprints are one example of a content resemblance factor. Message content analysis can then be performed based on resemblance factors, such as fingerprints, rather than original content. The fingerprint is sufficiently resilient to the obfuscation that spammers usually apply to the content in order to circumvent spam filters. This technique can ensure that if the message content of two email messages differs by only a small amount, then the fingerprints will also differ by only a small amount, and it can be inferred that two SIPs that send similar spam messages belong to the same botnet.
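The resemblance concept can be illustrated with a simple character k-gram similarity measure. This is only a stand-in for an actual fingerprinting scheme, which would compare compact fingerprints rather than raw k-gram sets, but it shows the desired property: small content changes produce small similarity changes.

```python
# Illustrative content-resemblance sketch using character k-grams; the
# disclosed fingerprint algorithm may differ from this simple measure.
def kgrams(text, k=4):
    """Set of overlapping character k-grams of the text."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def resemblance(msg_a, msg_b, k=4):
    """Jaccard similarity of k-gram sets: near 1.0 for lightly
    obfuscated copies of the same template, near 0.0 otherwise."""
    a, b = kgrams(msg_a, k), kgrams(msg_b, k)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

Because a one-character obfuscation perturbs only a handful of k-grams, two variants of the same spam template score high, while unrelated legitimate messages score low.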
Rule-based elements may also be combined with classification of behavioral features to achieve global visibility into different kinds of botnets. For example, a spamtrap may be used in certain embodiments to correlate spam messages with particular botnets. By applying known heuristics (e.g., the presence of certain text in email headers, the presence of certain text in email bodies, the order of email headers, certain non-standard compliant behavior when interacting with a spamtrap mail server, etc.) on spam received in the spamtrap, a dataset with known botnet membership can be obtained. Since the spam messages originate from a known IP address, a relationship between the address and a botnet can be established.
In one embodiment of network environment 10, a two-level supervised behavioral classifier may be used to compare behavior features and message content fingerprints from email traffic traces with spamtrap samples. This method does not require any knowledge of C&C communications between bots.
In such an embodiment, the first-level classifier may be a binary classifier that discriminates benign SIPs from malicious ones, based solely on email sender behavior. The outcome of this first-level classification generally includes a group of IP addresses that are identified as malicious. The second-level classifier performs multi-objective prediction, classifying malicious SIPs into several individual botnets if a SIP's behavior is substantially similar to that of a particular known bot. The second-level classifier can use email sender behavior, but may also use message content fingerprints collected from email traces and IP addresses with associated labels collected from a spamtrap. Once a classification model is generated, the second-level classifier can classify the malicious IP addresses obtained from the first-level classifier to group IP addresses into botnets.
Turning to
In one example implementation, EAs 20a-b, analyzer element 25, bot host 30, and/or spamtrap 40 are network elements, which are meant to encompass network appliances, servers, routers, switches, gateways, bridges, load balancers, firewalls, processors, modules, or any other suitable device, component, element, or object operable to exchange information in a network environment. Moreover, the network elements may include any suitable hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information.
In regards to the internal structure associated with network environment 10, each of EAs 20a-b, analyzer element 25, bot host 30, and/or spamtrap 40 can include memory elements (as shown in
In one example implementation, EAs 20a-b, analyzer element 25, bot host 30, and/or spamtrap 40 include software (e.g., as part of analyzer module 65, etc.) to achieve, or to foster, botnet detection and analysis operations, as outlined herein. In other embodiments, this feature may be provided externally to these elements, or included in some other network device to achieve this intended functionality. Alternatively, these elements may include software (or reciprocating software) that can coordinate in order to achieve the operations, as outlined herein. In still other embodiments, one or all of these devices may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof.
Note that in certain example implementations, botnet detection and analysis functions outlined herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an application specific integrated circuit [ASIC], digital signal processor [DSP] instructions, software [potentially inclusive of object code and source code] to be executed by a processor, or other similar machine, etc.). In some of these instances, memory elements [as shown in
Additionally, two hashes may be calculated for each kgram. The first hash can be used to determine the smallest FEs and the second hash may be used as the actual FEs. This approach may provide several advantages. First, FEs may be more evenly distributed throughout the space of possible values. Second, in the rare case of a collision of FEs, it is unlikely that both colliding kgrams are picked with the same probability, since their first hashes are likely to differ.
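A minimal sketch of this two-hash selection follows, using salted SHA-256 as a stand-in for whatever pair of hash functions an implementation would choose; the selection count is illustrative.

```python
# Two-hash FE selection sketch: the first hash ranks kgrams and picks the
# n smallest; the second hash is emitted as the actual feature element.
# SHA-256 with distinct salts stands in for the implementation's hashes.
import hashlib

def _h(data, salt):
    """64-bit integer hash of salt + data (illustrative)."""
    return int.from_bytes(hashlib.sha256(salt + data).digest()[:8], "big")

def select_fes(kgram_list, n=4):
    """Pick the n kgrams with the smallest first hash, then emit
    their second hash values as the feature elements."""
    ranked = sorted(kgram_list, key=lambda g: _h(g.encode(), b"select"))
    return {_h(g.encode(), b"emit") for g in ranked[:n]}
```

Because selection depends only on hash values, the same message content yields the same FEs regardless of the order in which kgrams are enumerated.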
A quick classification of botnets and other threats is highly desirable since many threats on the Internet are ephemeral and fast-moving. One significant challenge for quick classification of bot-based message content is the large number of features generated from email content. Millions of messages may need to be processed at the same time, and each FE can increase the dimensionality of the feature space, which can easily create a classification problem that cannot be computed in a reasonable time. Noisy FEs can also decrease classification performance. In accordance with one embodiment, network environment 10 can overcome this challenge by pruning FEs that are unnecessary for classification, as may be done at 330 in
First, a threshold may be defined such that FEs are pruned unless they are seen from a number of SIPs that exceeds the threshold value. Botmasters typically employ a large number of bots in spam campaigns to assure that they can achieve high throughput and delivery rates even if parts of the botnet are blacklisted, which implies that the FEs associated with spam campaigns are typically seen from a large number of SIPs. Thus, FEs from a relatively small number of SIPs can be pruned with a high degree of confidence that they are not associated with a spam campaign.
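This threshold-based pruning can be expressed as a simple filter over a map from each FE to the set of SIPs it was observed from; the threshold value here is illustrative.

```python
# Pruning sketch: discard FEs observed from fewer SIPs than a threshold,
# on the premise that spam-campaign FEs are seen from many senders.
def prune_rare_fes(fe_to_sips, min_sips=3):
    """fe_to_sips maps each FE to the set of SIPs it was seen from;
    FEs seen from fewer than min_sips senders are dropped."""
    return {fe: sips for fe, sips in fe_to_sips.items()
            if len(sips) >= min_sips}
```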
Second, FEs that are known to be from benign, whitelisted SIPs (as determined by classification at 325, for example) can be pruned to reduce noisy FEs. Noisy FEs may be the result of automatic signatures or confidentiality statements attached to the end of messages by many companies, for example. Another potential source of noisy FEs is the markup language used by many mail user agents, which can contain large blocks of boilerplate markup and styling. Such messages are likely to contain elements of similarity, but are nonetheless legitimate messages from reputable senders that do not belong to any botnet. Another potential problem may be presented by legitimate high-volume senders. Such senders can deliver a large number of different messages, which in turn can result in a large number of FEs. The large size of the FE space can significantly reduce classification performance. Thus, in some embodiments, only FEs that have been seen from potentially malicious SIPs, or from SIPs that are not yet known to be either benign or malicious, are retained for further analysis.
Referring again to
Based on collected email traces having SIPs, DIPs, time stamps, and/or other message features, two different sets of features may be extracted, referred to herein as “breadth features” and “spectral features.” Breadth features contain information about the number of EAs to which one particular SIP tries to send messages, the number of bursts of email delivery seen by each EA, the total message volume in a burst, the number of outbreaks of a SIP during a spam campaign, etc. Spectral features capture the sending pattern of a SIP. A configurable timeframe may be divided into slices, yielding a sequence of message counts delivered by a SIP, one per slice. This sequence may be transformed into the frequency domain using a discrete Fourier transform. Since spam senders do not typically have a regular low-frequency sending pattern in a given twenty-four hour time window, these features may be used to distinguish spam patterns from legitimate email traffic.
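The spectral-feature extraction might be sketched as follows, assuming Unix-style timestamps and an illustrative slice count; a production implementation would likely use an FFT library rather than this direct DFT.

```python
# Spectral-feature sketch: bucket a SIP's send times into time slices over
# a configurable window (24 hours here), then take DFT magnitudes of the
# per-slice message counts. Window and slice count are illustrative.
import cmath

def spectral_features(timestamps, window=86400, slices=64):
    """DFT magnitudes (up to the Nyquist frequency) of the per-slice
    message counts for one SIP."""
    counts = [0] * slices
    for t in timestamps:
        counts[int((t % window) * slices // window)] += 1
    n = len(counts)
    return [abs(sum(counts[k] * cmath.exp(-2j * cmath.pi * k * f / n)
                    for k in range(n)))
            for f in range(n // 2)]
```

A sender with an even, low-frequency pattern concentrates its energy at frequency zero, while a bursty spam sender spreads energy across many frequencies, which is the separation the classifier exploits.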
Note that the behavior features available to a classifier may depend upon, vary with, and/or be constrained by the types of data accessible from various sources, and the various embodiments of classifiers described herein are generally not dependent upon a particular set of behavior features. Thus, a high-level discussion of the methodology and theory behind feature selection and extraction is provided here.
To classify spam senders with a particular botnet in one embodiment of network environment 10, as may be done at 335 of flowchart 300, behavioral analysis may be extended to include both message sending behavior and message content resemblance characteristics. In general, the results of the first level classification at 325 may be taken as input to this second level classifier at 335. In order to detect which botnet a malicious SIP may belong to, heuristics may be applied to spamtrap samples collected at 320 to obtain pairs of information, <malware ID, IP>, such that each SIP is correctly labeled with a botnet name. In one embodiment, these heuristics are regular expression rules that may be applied during the mail transport protocol conversation. These regular expression rules can be derived by running malware in a sandbox environment and analyzing the messaging traffic generated by the malware for idiosyncrasies in the protocol implementation or for common content templates, for example. All of the SIPs that appear both in the labeled pairs collected from spamtraps and the behavioral feature dataset may be used for training a classifier. In addition to the features used in the first level classifier, count information for each feature element in the fingerprint for all messages may be employed. SIPs can be safely labeled with detected botnet names found in the spamtrap samples because all the SIPs in this set are known to be delivering spam.
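Joining the spamtrap-derived <malware ID, IP> pairs with the behavioral feature dataset to form the training set can be sketched as follows; the data layouts are hypothetical.

```python
# Training-set construction sketch: keep only the labeled <malware ID, IP>
# pairs whose SIP also appears in the behavioral feature dataset.
def build_training_set(labeled_pairs, feature_dataset):
    """labeled_pairs: iterable of (malware_id, sip) from the spamtrap;
    feature_dataset: dict of sip -> feature vector.
    Returns parallel lists of feature vectors and botnet labels."""
    X, y = [], []
    for malware_id, sip in labeled_pairs:
        if sip in feature_dataset:
            X.append(feature_dataset[sip])
            y.append(malware_id)
    return X, y
```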
To combine the message content features with SIP behavior features, FEs from a SIP may be aggregated. Assuming that two SIPs, SIP-a and SIP-b, are members of the same botnet and are participating in the same spam campaign, their spam messages should have highly similar content, and the FEs seen for SIP-a should overlap significantly with the FEs seen for SIP-b. Similarly, assuming that typical bots do not have significant differences in resources regarding processing capacity, bandwidth, and online continuity, both SIP-a and SIP-b should demonstrate similar sending behavior regarding the message volume, frequency, and breadth of DIPs, etc. Also, some behavioral features can be independent of capacity. Examples include the local sender time when most email activity occurs, the number of different domains in the sender address field for all messages sent by a SIP (e.g., as determined by a hash count based on reputation query data), and the average message size.
In one embodiment of network environment 10, a combination of several binary SVMs with a one-vs-one strategy may be used for analysis, although other techniques may be used as appropriate. An SVM classifier can be built for each pair of two classes (botnets), and then (N*(N−1))/2 rounds of binary classification may be performed, where N is the number of classes (botnets) to classify. By applying a one-vs-one strategy, a SIP can be repeatedly fit to every pair of the N classes. The final decision can be made by majority voting: a SIP is classified in the botnet with the maximum number of votes. If multiple botnets tie for the maximum number of votes, then the SIP is classified in all of those botnets.
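The one-vs-one voting scheme can be sketched as follows, with a placeholder pairwise decider standing in for the trained binary SVMs:

```python
# One-vs-one majority-voting sketch. pairwise_decide(sample, a, b) stands
# in for a trained binary SVM that picks one of the two classes; this
# sketch only implements the N*(N-1)/2 voting logic around it.
from itertools import combinations
from collections import Counter

def one_vs_one_classify(sample, classes, pairwise_decide):
    """Run a pairwise decision for every pair of classes and return every
    class receiving the maximum number of votes (ties return several)."""
    votes = Counter()
    for a, b in combinations(classes, 2):
        votes[pairwise_decide(sample, a, b)] += 1
    top = max(votes.values())
    return sorted(c for c, v in votes.items() if v == top)
```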
Note that with the examples provided above, as well as numerous other examples provided herein, interaction may be described in terms of two, three, or four network elements. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of network elements. It should be appreciated that network environment 10 is readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of network environment 10 as potentially applied to a myriad of other architectures. Additionally, although described with reference to particular scenarios, where a particular module, such as a behavior analyzer module, is provided within a network element, these modules can be provided externally, or consolidated and/or combined in any suitable fashion. In certain instances, such modules may be provided in a single proprietary unit.
It is also important to note that the steps in the appended diagrams illustrate only some of the possible scenarios and patterns that may be executed by, or within, network environment 10. Some of these steps may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of teachings provided herein. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by network environment 10 in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings provided herein.
Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.