This application is directed to an invention/inventions made as a result of activities undertaken within the scope of a Joint Research Agreement made between Lockheed Martin Corporation and the General Electric Company.
Embodiments of the present invention relate generally to methods, systems, and computer-readable media for detection of network intrusions and, more specifically, to methods and systems for detection of network intrusions using minimum description length (MDL) clustering.
Conventional signature-based intrusion detection systems may be easily defeated by polymorphic or zero-day attacks. The present invention was conceived in light of the aforementioned limitation, among other things.
Embodiments include an MDL clustering technique that provides for unsupervised, semi-supervised and supervised machine learning. A clustering engine in an embodiment can include an MDL compress model. The MDL compress model is described in co-pending U.S. patent application Ser. No. 12/260,627, entitled “MDL Compress System and Method for Signature Inference and Masquerade Intrusion Detection” and filed on Oct. 29, 2008; Ser. No. 12/260,682, entitled “Network Intrusion Detection Using MDL Compress for Deep Packet Inspection” and filed on Oct. 29, 2008; and Ser. No. 12/398,432, entitled “Intrusion Detection Using MDL Compression” and filed on Mar. 5, 2009, which are each incorporated herein by reference in their entirety.
An embodiment can include a network intrusion detection system having a processor coupled to a nontransitory computer readable medium bearing software instructions that, when executed by the processor, cause the processor to perform a series of operations. The operations can include clustering network traffic files into a plurality of clusters based on minimum description length (MDL) similarity; building an MDL model for each cluster; and calculating distances from each traffic file to each MDL model to obtain a distance vector for each traffic file, each distance vector having a distance from a corresponding traffic file to each MDL model. The operations can also include building a decision model based on the distance vectors; analyzing network traffic using the decision model; and generating an output based on the analyzing, the output indicating potential matches between network traffic and an MDL model corresponding to malicious activity.
Another embodiment includes a computerized method for computer network intrusion detection. The method can include clustering, with a processor programmed to perform network intrusion detection, network traffic files into a plurality of clusters based on minimum description length (MDL) similarity, and building, with the processor, an MDL model for each cluster. The method can also include calculating, with the processor, distances from each traffic file to each MDL model to obtain a distance vector for each traffic file, each distance vector having a distance from a corresponding traffic file to each MDL model; and building, with the processor, a decision model based on the distance vectors.
Yet another embodiment can include a nontransitory computer-readable medium having software instructions stored thereon that, when executed by a processor, cause the processor to perform operations. The operations can include clustering network traffic files into a plurality of clusters based on minimum description length (MDL) similarity; building an MDL model for each cluster; and calculating distances from each traffic file to each MDL model to obtain a distance vector for each traffic file, each distance vector having a distance from a corresponding traffic file to each MDL model. The operations can also include building a decision model based on the distance vectors.
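By way of a non-limiting illustration only, the operations recited above can be sketched in Python with a general-purpose compressor (zlib) standing in for the MDL compress model described in the applications incorporated by reference; the function names below are illustrative and do not appear in the disclosure. The later sketches in this description build on these two helpers.

```python
import zlib


def description_length(data: bytes) -> int:
    # Compressed size in bytes, used here as a crude proxy for an MDL description length.
    return len(zlib.compress(data, 9))


def mdl_similarity_distance(a: bytes, b: bytes) -> float:
    # Normalized compression distance: close to 0 for very similar strings,
    # close to 1 (or slightly above) for unrelated strings.
    ca, cb = description_length(a), description_length(b)
    cab = description_length(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)
```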
The identification module 102 is adapted to analyze malicious (or attack) network behavior (or traffic) to identify key components of an attack and to determine an MDL soft learning model of the key attack components.
The detection module 104 analyzes network traffic and employs the MDL learning model developed by the identification module 102 to detect and, optionally, to stop network traffic that has components or characteristics in common with one or more of the MDL models corresponding to malicious or attack behavior. The detection module 104 can also be adapted to recognize transformations in key components of an attack, learn those transformations, and include new network behaviors associated with those transformations in the MDL models. The learning feature can provide the system with the ability to stop polymorphic attacks and to classify network behavior based on similarity to known attacks or behaviors. Also, the learning feature can include an unsupervised learning component and a supervised or semi-supervised learning component, as described in greater detail below.
At 204, clustering is performed. In clustering, traffic files (e.g., a training set of network traffic files) are grouped into clusters based on MDL similarity. The traffic files can represent normal traffic, malicious traffic or both. Processing continues to 206.
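Continuing the illustrative sketch above (zlib stand-in, illustrative names), a pairwise distance matrix over the training files might be computed as follows; the hierarchical grouping applied to such a matrix is sketched in connection with 304 below.

```python
import numpy as np

# Reuses description_length() / mdl_similarity_distance() from the earlier sketch.


def pairwise_mdl_distances(files: list[bytes]) -> np.ndarray:
    # Symmetric matrix of compression-based distances between every pair of traffic files.
    n = len(files)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = mdl_similarity_distance(files[i], files[j])
            dist[i, j] = dist[j, i] = d
    return dist
```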
At 206, an MDL model is built for each cluster. The MDL model for a cluster can be built, for example, by concatenating all files in the cluster into a single string. Processing continues to 208.
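A minimal sketch of this step under the same assumptions: each cluster "model" is simply the concatenation of its member files together with the compressed size of that concatenation.

```python
def build_cluster_model(cluster_files: list[bytes]) -> dict:
    # Concatenate all files in the cluster into a single string and record its
    # description length (compressed size) as the model cost.
    concatenated = b"".join(cluster_files)
    return {"data": concatenated, "cost": description_length(concatenated)}


def build_models(clusters: dict[int, list[bytes]]) -> dict[int, dict]:
    # One MDL model per cluster, keyed by cluster label.
    return {label: build_cluster_model(files) for label, files in clusters.items()}
```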
At 208, distances to the MDL models are calculated for each file, resulting in a vector for each traffic file. Each vector includes a distance to each MDL model from the corresponding file. Processing continues to 210.
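One compression-based way to realize this step, assumed here for illustration rather than taken from the disclosure, is to measure the additional compressed bytes a file costs when appended to a cluster model, normalized by the file's own compressed size.

```python
def distance_to_model(traffic_file: bytes, model: dict) -> float:
    # Additional description length incurred when the file is encoded together
    # with the cluster model, normalized by the file's own compressed size.
    joint = description_length(model["data"] + traffic_file)
    return (joint - model["cost"]) / max(description_length(traffic_file), 1)


def distance_vector(traffic_file: bytes, models: dict[int, dict]) -> list[float]:
    # One distance per MDL model, in a fixed (sorted) model order.
    return [distance_to_model(traffic_file, models[label]) for label in sorted(models)]
```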
At 210, a decision making model is built. By treating distance vectors as instances and elements of the vectors as variables (or features), a decision making model can be built. The decision making model can be a minimum distance model, a support vector machine, or the like. A support vector machine (SVM) is a set of related supervised learning methods for analyzing data and recognizing patterns. SVMs can be used for classification and regression analysis. Typically, an SVM takes a set of input data and predicts, for each given input, which one of two possible classes (or groups) the input is a member of. Thus, SVMs can be used as non-probabilistic binary linear classifiers. Processing continues to 212.
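Treating each distance vector as an instance and each element as a feature, a decision model can be fitted with an off-the-shelf SVM. The sketch below uses scikit-learn; the kernel choice and the assumption that normal/malicious labels are available for the training files are illustrative.

```python
from sklearn.svm import SVC  # assumes scikit-learn is available


def train_decision_model(vectors: list[list[float]], labels: list[str]) -> SVC:
    # Each distance vector is an instance; each element (distance to one MDL model) is a feature.
    clf = SVC(kernel="rbf")
    clf.fit(vectors, labels)
    return clf
```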
At 212, the decision model is used to make decisions regarding traffic flowing in a network. Processing continues to 214, where processing ends. Optionally, processing can continue from 212 to 204 in order to perform an intrusion detection task.
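At run time, the same distance-vector construction can be applied to observed traffic and fed to the trained decision model, as in this sketch that reuses the illustrative helpers above.

```python
def classify_traffic(traffic_file: bytes, models: dict[int, dict], decision_model) -> str:
    # Build the file's distance vector against every cluster model and let the decision model decide.
    vector = distance_vector(traffic_file, models)
    return decision_model.predict([vector])[0]
```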
At 304, traffic files (e.g., training files containing network traffic) or data primitives are clustered using MDL-based clustering. In MDL-based clustering, a pair-wise similarity matrix is generated using MDL to calculate a distance (or similarity) between pairs of traffic files. Hierarchical clustering is applied to the similarity matrix. The hierarchical clustering output can be visualized as a dendrogram showing how the traffic files or data samples have been grouped into clusters (dendrograms are discussed in greater detail below).
Processing continues to 306.
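A hedged sketch of 304 using SciPy's hierarchical clustering: the pairwise distance matrix from the earlier sketch is condensed, linked, optionally drawn as a dendrogram, and cut into flat clusters. The linkage method and cut threshold are illustrative choices, not values taken from the disclosure; a correlation-based distance (discussed below) could be substituted for the compression-based one.

```python
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage
from scipy.spatial.distance import squareform


def cluster_traffic_files(files: list[bytes], threshold: float = 0.5,
                          show_dendrogram: bool = False) -> dict[int, list[bytes]]:
    # Build the pairwise distance matrix (see earlier sketch), apply hierarchical
    # clustering, and cut the tree into flat clusters at the given distance threshold.
    dist = pairwise_mdl_distances(files)
    link = linkage(squareform(dist, checks=False), method="average")
    if show_dendrogram:
        import matplotlib.pyplot as plt
        dendrogram(link)
        plt.show()
    labels = fcluster(link, t=threshold, criterion="distance")
    clusters: dict[int, list[bytes]] = {}
    for f, label in zip(files, labels):
        clusters.setdefault(int(label), []).append(f)
    return clusters
```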
At 306, the clusters are refined (or distilled). Refining can be needed because, as the cluster models change, traffic files (or data samples) may move between cluster partitions. An example of a refining or distilling method is shown below:
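The refining listing itself is not reproduced here; the following is an illustrative sketch, under the same zlib stand-in assumptions, of one plausible refining pass in which each file is reassigned to whichever cluster model encodes it most cheaply and empty clusters are dropped.

```python
def refine_clusters(files: list[bytes],
                    clusters: dict[int, list[bytes]]) -> dict[int, list[bytes]]:
    # One refining pass: build a model for each current cluster, reassign every file to
    # the cluster whose model encodes it most cheaply, then drop any cluster left empty.
    models = build_models(clusters)
    refined: dict[int, list[bytes]] = {label: [] for label in clusters}
    for f in files:
        best = min(models, key=lambda label: distance_to_model(f, models[label]))
        refined[best].append(f)
    return {label: members for label, members in refined.items() if members}
```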
Processing continues to 308.
At 308, the clusters are pruned. Pruning can include removing clusters that provide the least amount of MDL benefit (e.g., the least compression). An example of a pruning method is shown below:
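As with refining, the original pruning listing is not reproduced here; one illustrative approach, under the same assumptions, scores each cluster by the compression it buys and removes the cluster providing the least benefit.

```python
def mdl_benefit(cluster_files: list[bytes]) -> int:
    # Bytes saved by describing the members jointly (one model) rather than individually.
    separate = sum(description_length(f) for f in cluster_files)
    joint = description_length(b"".join(cluster_files))
    return separate - joint


def prune_clusters(clusters: dict[int, list[bytes]],
                   min_clusters: int = 2) -> dict[int, list[bytes]]:
    # Remove the single cluster providing the least MDL benefit, keeping a minimum count.
    if len(clusters) <= min_clusters:
        return clusters
    weakest = min(clusters, key=lambda label: mdl_benefit(clusters[label]))
    return {label: files for label, files in clusters.items() if label != weakest}
```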
Processing continues to 309. At 309, it is determined whether additional refinements decrease the MDL cost of the model. If so, processing returns to 306. If not, processing continues to 310, where processing ends. It will be appreciated that 304-308 can be repeated in whole or in part in order to accomplish a contemplated clustering process.
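The loop through 306-309 might be driven by the total model cost, as in this illustrative sketch that repeats refining and pruning while the cost keeps decreasing; the cost function and stopping rule here are assumptions, not values taken from the disclosure.

```python
def total_mdl_cost(clusters: dict[int, list[bytes]]) -> int:
    # Sum of the description lengths of all cluster models.
    return sum(build_cluster_model(files)["cost"] for files in clusters.values())


def refine_until_stable(files: list[bytes],
                        clusters: dict[int, list[bytes]]) -> dict[int, list[bytes]]:
    # Repeat refining (306) and pruning (308) while the total MDL cost decreases (309).
    cost = total_mdl_cost(clusters)
    while True:
        candidate = prune_clusters(refine_clusters(files, clusters))
        new_cost = total_mdl_cost(candidate)
        if new_cost >= cost:
            return clusters
        clusters, cost = candidate, new_cost
```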
In operation, the raw data files 402 (e.g., network traffic data files) are supplied to the MDL clustering module 404, which performs a clustering operation (e.g., similar to the clustering discussed above) to group the raw data files 402 into natural clusters 406.
Labeled data 408 is supplied to the MDL clustering module 404 and is organized into trained clusters 410. The labeled data 408 and the trained clusters 410 form a supervised or semi-supervised learning section 407 of the system. The labeled data 408 may be labeled by a person, by a machine or by a combination of the two.
The natural clusters 406 and the trained clusters 410 are supplied to the feature selection module 412, which selects features for use in classification of network traffic. Selected features of network traffic are supplied from the feature selection module 412 to the classification system 414 in order to classify network traffic as normal traffic or as malicious/attack traffic.
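The disclosure does not specify a particular selection criterion; purely as an illustration, distances to the natural-cluster and trained-cluster models could be stacked into one feature matrix and filtered with a standard scorer such as mutual information (scikit-learn's SelectKBest is used here as an assumed, off-the-shelf choice).

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif


def select_features(natural_dists: np.ndarray, trained_dists: np.ndarray,
                    labels, k: int = 10):
    # Each row describes one traffic file; columns are distances to the natural-cluster
    # and trained-cluster models. Keep the k columns most informative about the labels.
    features = np.hstack([natural_dists, trained_dists])
    selector = SelectKBest(mutual_info_classif, k=min(k, features.shape[1]))
    selected = selector.fit_transform(features, labels)
    return selected, selector
```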
The MDL-based clustering module 506 can be adapted to perform heuristic clustering (508), cluster refining (510) and cluster pruning (512). The MDL-based clustering module can also be adapted to perform cluster fine tuning (514) and final cluster refining using labeled data (516). As output, the MDL-based clustering module 506 provides clusters 518. The clusters 518 can be used by a classifier to classify network traffic as normal traffic or attack/intrusion traffic.
If files are highly correlated, they will have a correlation value close to 1, and so D=1−C will have a value close to zero. Therefore, highly correlated clusters are nearer the bottom of the dendrogram. File clusters that are not correlated have a correlation value of zero and a corresponding distance value of 1. Files that are negatively correlated, i.e., showing opposite expression behavior, will have a correlation value of −1 (e.g., D = 1 − (−1) = 2).
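A small worked illustration of the mapping described above: given a pairwise correlation matrix C, the dendrogram distance matrix is D = 1 − C, running from 0 (identical behavior) through 1 (uncorrelated) to 2 (opposite behavior).

```python
import numpy as np


def correlation_to_distance(corr: np.ndarray) -> np.ndarray:
    # D = 1 - C maps correlations in [-1, 1] onto dendrogram distances in [0, 2].
    return 1.0 - corr


corr = np.array([[1.0, 0.9, -1.0],
                 [0.9, 1.0, 0.0],
                 [-1.0, 0.0, 1.0]])
print(correlation_to_distance(corr))  # files 1 and 3 (opposite behavior) are at distance 2.0
```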
At 704, data corresponding to known valid and trusted network behavior is provided and is used to generate a prototype model. The known valid network data may be artificially generated to ensure that all actions are known to be proper. Processing continues to 706.
At 706, new valid behavior data can be introduced into the system. Initially, the new behavior data may cluster to form one or more new models. Elements common to all of the new models, yet different from the first models, may indicate noise elements, which can provide opportunities to introduce filters or pre-processors to remove these common elements. By applying these filters or pre-processors to the prototype model, a tighter cluster of behaviors may be obtained. Processing continues to 708.
At 708, new behaviors are observed by the system. Some of these new behaviors can be borderline or exploit behaviors, e.g., protocol fuzzers, vulnerability scanners and exploitation frameworks can be presented to the system and observed. Some of these actions (e.g., protocol fuzzers) may fall into a previous cluster of safe behavior, while others can be categorized appropriately (e.g., exploit attempts, information gathering, borderline behaviors or the like). Processing continues to 710.
At 710, real-world traffic is examined by the system. Categorizing attack traffic as normal would create false negatives, while categorizing normal traffic as attack would create false positives; both are undesirable. Visualization of the learning process may aid an operator in providing input in a supervised or semi-supervised learning mode. A technique for visualization of clustered file data to facilitate intrusion detection and behavior classification is discussed in detail below.
When new network traffic data is encountered, the new data can be tested against the two models to generate a new plot.
Based on the newly plotted data, the models can be updated to define a new model space.
New traffic can then be plotted on the new model space.
The analysis can be performed by a human operator. Alternatively, in situations where choosing incorrectly between normal and attack classifications may not be catastrophic, the system could automatically classify the new behavior as normal or attack behavior and adjust the classification later based on additional information collected or based on an operator adjustment to the classification. The system could perform automatic classification in real time and then present the classification and the data supporting the classification to an operator for non-real-time analysis.
It will be appreciated that the modules, processes, systems, and sections described above can be implemented in hardware, hardware programmed by software, software instructions stored on a nontransitory computer readable medium, or a combination of the above. A system for network intrusion detection using MDL clustering, for example, can include a processor configured to execute a sequence of programmed instructions stored on a nontransitory computer readable medium. For example, the processor can include, but is not limited to, a personal computer or workstation or other such computing system that includes a processor, microprocessor, microcontroller device, or control logic including integrated circuits such as, for example, an Application Specific Integrated Circuit (ASIC). The instructions can be compiled from source code instructions provided in accordance with a programming language such as Java, C++, C#.net or the like. The instructions can also comprise code and data objects provided in accordance with, for example, the Visual Basic™ language, or another structured or object-oriented programming language. The sequence of programmed instructions and data associated therewith can be stored in a nontransitory computer-readable medium such as a computer memory or storage device, which may be any suitable memory apparatus, such as, but not limited to, ROM, PROM, EEPROM, RAM, flash memory, a disk drive, and the like.
Furthermore, the modules, processes, systems, and sections can be implemented as a single processor or as a distributed processor. Further, it should be appreciated that the steps mentioned above may be performed on a single or distributed processor (single and/or multi-core, or cloud computing system). Also, the processes, system components, modules, and sub-modules described in the various figures of and for embodiments above may be distributed across multiple computers or systems or may be co-located in a single processor or system. Exemplary structural embodiment alternatives suitable for implementing the modules, sections, systems, means, or processes described herein are provided below.
The modules, processors or systems described above can be implemented as a programmed general purpose computer, an electronic device programmed with microcode, a hard-wired analog logic circuit, software stored on a computer-readable medium or signal, an optical computing device, a networked system of electronic and/or optical devices, a special purpose computing device, an integrated circuit device, a semiconductor chip, and a software module or object stored on a computer-readable medium or signal, for example.
Embodiments of the method and system (or their sub-components or modules), may be implemented on a general-purpose computer, a special-purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element, an ASIC or other integrated circuit, a digital signal processor, a hardwired electronic or logic circuit such as a discrete element circuit, a programmed logic circuit such as a PLD, PLA, FPGA, PAL, or the like. In general, any processor capable of implementing the functions or steps described herein can be used to implement embodiments of the method, system, or a computer program product (software program stored on a nontransitory computer readable medium).
Furthermore, embodiments of the disclosed method, system, and computer program product may be readily implemented, fully or partially, in software using, for example, object or object-oriented software development environments that provide portable source code that can be used on a variety of computer platforms. Alternatively, embodiments of the disclosed method, system, and computer program product can be implemented partially or fully in hardware using, for example, standard logic circuits or a VLSI design. Other hardware or software can be used to implement embodiments depending on the speed and/or efficiency requirements of the systems, the particular function, and/or particular software or hardware system, microprocessor, or microcomputer being utilized. Embodiments of the method, system, and computer program product can be implemented in hardware and/or software using any known or later developed systems or structures, devices and/or software by those of ordinary skill in the applicable art from the function description provided herein and with a general basic knowledge of the computer programming and network security arts.
Moreover, embodiments of the disclosed method, system, and computer program product can be implemented in software executed on a programmed general purpose computer, a special purpose computer, a microprocessor, or the like.
It is, therefore, apparent that there is provided, in accordance with the various embodiments disclosed herein, computer systems, methods and software for network intrusion detection using MDL clustering.
While the invention has been described in conjunction with a number of embodiments, it is evident that many alternatives, modifications and variations would be or are apparent to those of ordinary skill in the applicable arts. Accordingly, Applicants intend to embrace all such alternatives, modifications, equivalents and variations that are within the spirit and scope of the invention.
U.S. Patent Documents
Number | Name | Date | Kind
---|---|---|---|
5903676 | Wu et al. | May 1999 | A |
5956676 | Shinoda | Sep 1999 | A |
6189005 | Chakrabarti et al. | Feb 2001 | B1 |
6601048 | Gavan et al. | Jul 2003 | B1 |
6782377 | Agarwal et al. | Aug 2004 | B2 |
6973459 | Yarmus | Dec 2005 | B1 |
7007035 | Kamath et al. | Feb 2006 | B2 |
7017186 | Day | Mar 2006 | B2 |
7089592 | Adjaoute | Aug 2006 | B2 |
7134141 | Crosbie et al. | Nov 2006 | B2 |
7254273 | Sakanashi et al. | Aug 2007 | B2 |
7260846 | Day | Aug 2007 | B2 |
7313817 | Evans et al. | Dec 2007 | B2 |
7370357 | Sekar | May 2008 | B2 |
7409716 | Barnett et al. | Aug 2008 | B2 |
7613572 | Ben-Gal et al. | Nov 2009 | B2 |
8245301 | Evans et al. | Aug 2012 | B2 |
8245302 | Evans et al. | Aug 2012 | B2 |
20020147754 | Dempsey et al. | Oct 2002 | A1 |
20030061015 | Ben-Gal et al. | Mar 2003 | A1 |
20040157556 | Barnett et al. | Aug 2004 | A1 |
20040250128 | Bush et al. | Dec 2004 | A1 |
20050257269 | Chari et al. | Nov 2005 | A1 |
20050273274 | Evans et al. | Dec 2005 | A1 |
20050275655 | Stolze et al. | Dec 2005 | A1 |
20060070128 | Heimerdinger et al. | Mar 2006 | A1 |
20060212279 | Goldberg et al. | Sep 2006 | A1 |
20070087756 | Hoffberg | Apr 2007 | A1 |
20080016314 | Li et al. | Jan 2008 | A1 |
20080065765 | Hild et al. | Mar 2008 | A1 |
20080222725 | Chayes et al. | Sep 2008 | A1 |
20080222726 | Chayes et al. | Sep 2008 | A1 |
20080291934 | Christenson et al. | Nov 2008 | A1 |
20090021517 | Foslien | Jan 2009 | A1 |
20090138590 | Lee et al. | May 2009 | A1 |
20100017870 | Kargupta | Jan 2010 | A1 |
20100071061 | Crovella et al. | Mar 2010 | A1 |
20100082513 | Liu | Apr 2010 | A1 |
20100107253 | Eiland et al. | Apr 2010 | A1 |
20100107254 | Eiland et al. | Apr 2010 | A1 |
20100107255 | Eiland et al. | Apr 2010 | A1 |
20100132039 | Ji et al. | May 2010 | A1 |
20110016525 | Jeong et al. | Jan 2011 | A1 |
20110029657 | Gueta et al. | Feb 2011 | A1 |
20110066409 | Evans et al. | Mar 2011 | A1 |
20110067106 | Evans et al. | Mar 2011 | A1 |
20120054866 | Evans et al. | Mar 2012 | A1 |
20120284791 | Miller et al. | Nov 2012 | A1 |
Foreign Patent Documents
Number | Date | Country
---|---|---|
2000112917 | Apr 2000 | JP |
WO 2005055073 | Jun 2005 | WO |
Other Publications
Entry |
---|
Gerhard Munz et al. “Traffic Anomaly Detection Using K-Means Clustering” in Leistungs-, Zuverlässigkeits- und Verlässlichkeitsbewertung von Kommunikationsnetzen und Verteilten Systemen, 4. GI/ITG-Wks. MMBnet 2007, Hamburg, Germany (8 pages) http://www.decom.ufop.br/menotti/rp122/sem/sem3-luciano-art.pdf. |
Peter Hines et al. “A non-parametric approach to simplicity clustering” Applied Artificial Intelligence 21, No. 8 (2007) (48 pages) http://webcache.googleusercontent.com/search?q=cache:jyGqVSQoTXIJ:www.peterhines.net/downloads/papers/AAI.pdf+&cd=7&hl=en&ct=clnk&gl=us&client=firefox-a. |
Pieter Adriaans et al. “The Power and Perils of MDL” ISIT2007 Nice, France, Jun. 24-29, 2007 (© 2007 IEEE) (5 pages). |
Proceedings of the Fourth IEEE International Workshop on Information Assurance, “An Application of Information Theory to Intrusion Detection”, E. Earl Eiland and Lorie M. Liebrock, Ph.D., Apr. 2006, 16 pages. |
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, No. 2 dated Feb. 1999, entitled, “Using Evolutionary Programming and Minimum Description Length Principle for Data Mining of Bayesian Networks”, pp. 174-178. |
Proceedings of the 28th Hawaii International Conference on System Sciences, 1995 IEEE, “Molecular Evolutionary Phylogenetic Trees Based on Minimum Description Length Principle”, Fengrong Ren et al., pp. 165-173. |
Axelsson, S., “The Base-Rate Fallacy and the Difficulty of Intrusion Detection”, Transactions on Information and System Security, 2000, 3:3, pp. 186-205. |
A. Liu, C. Martin, T. Hetherington and S. Matzner, “A Comparison of System Call Feature Representations for Insider Threat Detection”, Proceedings of the 2005 IEEE Workshop on Information Assurance and Security, United States Military Academy, West Point, NY, 8 pages. |
R.A. Maxion and T.N. Townsend, “Masquerade Detection Using Truncated Command Lines”, in International Conference on Dependable Systems and Networks (DSN-02), pp. 219-228, Los Alamitos, CA, Jun. 23-26, 2002, IEEE Computer Society Press, Washington, D.C., 10 pages. |
R.A. Maxion and T.N. Townsend, “Masquerade Detection Augmented with Error Analysis”. IEEE Transactions on Reliability, 53(1): 124-147, Mar. 2004. |
M. Schonlau, W. DuMouchel, W. Ju, A. Karr, M. Theus, Y. Vardi, (2001), “Computer Intrusion: Detecting Masquerades”, Statistical Science, 2001;16(1), 16 pages. |
R.A. Maxion, “Masquerade Detection Using Enriched Command Lines”. in International Conference on Dependable Systems and Networks (DSN-03), pp. 5-14, Los Alamitos, CA Jun. 22-25, 2003. IEEE Computer Society Press. San Francisco, CA, 10 pages. |
S.C. Evans, B. Barnett, G.J. Saulnier and S.F. Bush, “MDL Principles for Detection and Classification of FTP Exploits,” MILCOM 2004. |
S.F. Bush and S.C. Evans, “Information Assurance Design and Assessment: Final Report”, General Electric Research and Development Center, Aug. 2002, 84 pages. |
S. Goel and Stephen F. Bush, Kolmogorov Complexity Estimates for Detection of Viruses in Biologically Inspired Security Systems: A Comparison with Traditional Approaches, Complexity, 9:2, 2003, 45 pages. |
Benedetto, Caglioti and Loreto, “Language Trees and Zipping”, Physical Review Letters, 88, 2002. Grunwald, et al., Advances in Minimum Description Length Theory and Applications, MIT Press, 2005, 7 pages. |
C. de la Higuera, “A Bibliographical Study of Grammatical Inference”, Pattern Recognition vol. 38, pp. 1332-1348, 2005, 40 pages. |
S.C. Evans, G.J. Saulnier and S.F. Bush, “A New Universal Two Part Code for Estimation of String Kolmogorov Complexity and Algorithmic Minimum Sufficient Statistic,” DIMACS Workshop on Complexity and Inference, 2003, http://www.stat.ucla.edu/-cocteau/dimacs/evans.pdf. |
S.C. Evans, T.S. Markham, A. Torres, A. Kourtidis and D. Conlin, “An Improved Minimum Description Length Learning Algorithm for Nucleotide Sequence Analysis,” Proceedings of IEEE 40th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, Nov. 2006. |
S.C. Evans, A. Kourtidis, T.S. Markham, J. Miller, D. Conklin and A. Torres, “MicroRNA Target Detection and Analysis for Genes Related to Breast Cancer Using MDL Compress,” EURASIP Journal on Bioinformatics and Systems Biology, Special Issue on Information Theoretic Methods for Bioinformatics, Sep. 2007, 1 page. |
M. Latendresse, “Masquerade Detection via Customized Grammars”, Lecture Notes in Computer Science, 3548. 141-159, Jun. 2005, 12 pages. |
C.G. Nevill-Manning and I.H. Witten (1997), “Identifying Hierarchical Structure in Sequences: A Linear-Time Algorithm,” Journal of Artificial Intelligence Research, 7, 67-82. |
C.G. Nevill-Manning and I.H. Witten, http://sequitur.info/. Dept. of Computer Science, University of Waikato, Hamilton, New Zealand, May 22, 2007, 1 page. |
P. Gacs, J.T. Tromp and P. Vitanyi, “Algorithmic Statistics”, IEEE Transactions on Information Theory, vol. 47, No. 6, Sep. 2001, 21 pages. |
T.M. Cover and J.A. Thomas, Elements of Information Theory, Wiley, New York, 1991 (book). |
M. Li and P. Vitanyi, “An Introduction to Kolmogorov Complexity and its Applications”, Springer, New York, 1997. |
R. Richardson, CSI Survey 2007: The 12th Annual Computer Crime and Security Survey. 2007, Computer Security Institute: San Francisco, CA, 30 pages. |
2007 Threat Report, 2008 Threat and Tech. Forecast. 2008, Trend Micro, Inc. Tokyo, Japan, 32 pages. |
T. AbuHmed, A. Mohaisen and D. Nyang, “A Survey on Deep Packet Inspection for Intrusion Detection Systems”, Mar. 1, 2008, 10 pages. |
I. Zhang and G.B. White, “An Approach to Detect Executable Content for Anomaly Based Network Intrusion Detection”, in 21st International Parallel and Distributed Processing Symposium, 2007, IEEE, 8 pages. |
M.Z. Shafiq et al., Extended Thymus Action for Improving Response of AIS based NID System Against Malicious Traffic, in Congress on Evolutionary Computation, 2007, IEEE, 8 pages. |
J.M. Estevez-Tapiador, P. Garcia-Teodoro and J.E. Diaz-Verdejo, Measuring Normality in HTTP Traffic for Anomaly-based Intrusion Detection. Computer Networks, 2004. 45(2), p. 175-193. |
K.L. Ingham and A. Somayaji, A Methodology for Designing Accurate Anomaly Detection Systems, in Latin America Networking Conference. 2007, San Jose, CA ACM. |
S. Evans et al., Minimum description length principles for detection and classification of FTP exploits, in Military Communications Conference, 2004, MILCOM 2004, IEEE, 2004. |
S.C. Evans et al., microRNA Target Detection and Analysis for Genes Related to Breast Cancer Using MDLcompress. EURASIP Journal on Bioinformatics and Systems Biology, 2007. 2007(Special Issue on Information Theoretic Methods for Bioinformatics). |
P.D. Grunwald, “A Tutorial Introduction to the Minimum Description Length Principle”. 2007, Cambridge, MA: MIT Press. 703. |
P. Adriaans and P. Vitanyi, The Power and Perils of MDL, in ISIT 2007, Jun. 24-29, 2007, Nice, France. |
G. Munz, S. Li and G. Carle, “Traffic Anomaly Detection Using k-means Clustering”, in Leistungs-, Zuverlässigkeits- und Verlässlichkeitsbewertung von Kommunikationsnetzen und Verteilten Systemen, 4. GI/ITG-Wks. MMBnet 2007, Hamburg, Germany. |
K. Wang and S.J. Stolfo, “Anomalous Payload-based Network Intrusion Detection”, Recent Advances in Intrusion Detection SpringerLink, Editor. 2004, Springer:Berlin/Heidelberg, Germany. pp. 203-222. |
N. Athanasiades et al. “Intrusion Detection Testing and Benchmarking Methodologies”, in Proceedings of the First IEEE International Workshop on Info. Assurance. 2003, Los Alamitos, CA, IEEE, 10 pages. |
Staff, MIT Lincoln Laboratory Information Systems Technology [Web site] 2008 [cited May 21, 2008]; [Umbrella site for 1999 DARPA Intrusion Detection Evaluation Data Set]. Available from: http://www.ll.mit.edu/mission/communications/ist/corpora/ideval/data/1999data.html. |
Staff, Tenable Network Security [Web page] 2008 [cited May 21, 2008]; [Home page for Nessus Vulnerability Tool]. http://www.nessus.org/nessus/. |
J. Elson, tcpflow (TCP Flow Recorder) [Web page] Aug. 7, 2003 [cited May 22, 2008]; Available from: http://www.circlemud.org/jelson/software/tcpflow/. |
S. Wehner, “Analyzing Worms and Network Traffic Using Compression”, in arXiv:cs/0504045v1 [cs.CR] Apr. 12, 2005, 12 pages. |
R. Duda, P.E. Hart and D.G. Stork, “Pattern Classification” (2nd ed.), John Wiley and Sons, 2001 (book). |
GE Research & Lockheed Martin Corporation, “MDLcompress for Intrusion Detection: Signature Inference and Masquerade Attack”, 06/07 IEEE. |
Evans et al., Towards Zero-Day Attack Detection through Intelligent Icon Visualization of MDL Model Proximity: Extended Abstract, VizSec2008 Conference Poster (Sep. 15, 2008). |
Keogh, E., Wei, L., Xi, X., Lonardi, S., Sirowy, S., “Intelligent Icons: Integrating Lite-Weight Data Mining and Visualization into GUI Operating Systems,” ICDM (2006). |
Williamson, “Throttling viruses: restricting propagation to defeat malicious mobile code”, Computer Security Applications Conference Proceedings (2002). |
Evans et al., Network attack visualization and response through intelligent icons, Military Communications Conference (MILCOM) (Oct. 21, 2009). |
Ragsdale et al., “Adaptation Techniques for Intrusion Detection and Intrusion Response Systems”, 2000 IEEE International Conference on Systems, Man, and Cybernetics (Oct. 8, 2000). |
Yang, Peng, Ward, Rundensteiner, “Interactive hierarchical dimension ordering, spacing and filtering for exploration of high dimensional datasets”, IEEE Symposium on Information Visualization (INFOVIS 2003), pp. 105-112 (2003). |
Edward Tufte, The Visual Display of Quantitative Information, 2 ed., p. 119 (Graphics Press 2001). |
Jonah Turner, “Easy as Pie Charts (Any Way You Slice 'Em)”, SAS Global Forum, Paper 071-2008 (Mar. 16, 2008). |
LeRoy Bessler, “Communication-Effective Pie Charts”, SAS Global Forum 2007, Paper 134-2007 (2007). |
Keim, Kriegel, “Visualization techniques for mining large databases: a comparison”, IEEE Transactions on Knowledge and Data Engineering, pp. 923-938 (1996). |
Keim, “Designing pixel-oriented visualization techniques: theory and applications”, IEEE Transactions on Visualization and Computer Graphics, pp. 59-78 (2000). |
Hao, Keim, “Importance-driven visualization layouts for large time series data”, IEEE Symposium on Information Visualization—INFOVIS 2005, pp. 203-210 (2005). |
Ferreira de Oliveira, Levkowitz, “From visual data exploration to visual data mining: a survey”, IEEE Transactions on Visualization and Computer Graphics, pp. 378-394 (2003). |
Nonfinal Office Action dated May 7, 2011, in U.S. Appl. No. 12/560,297. |
Nonfinal Office Action dated May 6, 2011, in U.S. Appl. No. 12/882,637. |
Nonfinal Office Action dated Nov. 2, 2011 in U.S. Appl. No. 12/882,637. |
Nonfinal Office Action dated Nov. 7, 2011, in U.S. Appl. No. 12/560,297. |
Notice of Allowance dated May 4, 2012, in U.S. Appl. No. 12/560,297. |
Notice of Allowance dated May 9, 2012, in U.S. Appl. No. 12/882,637. |
Number | Date | Country | |
---|---|---|---|
20120284793 A1 | Nov 2012 | US |