1. Field of the Invention
The present invention relates generally to computer security, and more particularly but not exclusively to methods and systems for evaluating computer files for malicious code.
2. Description of the Background Art
Machine learning technology is commonly used to detect malware. Currently, machine learning for malware detection involves supervised learning to generate a machine learning model. Generally speaking, a training data set of known malicious files and known normal (i.e., benign) files is prepared. A malicious file is labeled as “malicious” and a normal file is labeled as “normal.” The training data set is input to a machine learning module, which employs a machine learning algorithm, such as the Support Vector Machine (SVM) or Random Forest algorithm. The machine learning module learns from the training data set to make a prediction as to whether an unknown file is malicious or normal. A trained machine learning module is packaged as a machine learning model that is provided to a computer system. An unknown file received in the computer system is input to the machine learning model, which classifies the unknown file as either malicious or normal.
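By way of illustration only, a minimal sketch of this conventional whole-file approach, assuming the scikit-learn library and a hypothetical byte-histogram feature extractor (all names and sample data below are illustrative, not part of any particular product), may be written as follows:

```python
# Minimal sketch of conventional whole-file supervised learning for
# malware detection. extract_features() is a hypothetical feature
# extractor (here, a normalized 256-bin byte histogram).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(file_bytes: bytes) -> np.ndarray:
    """Hypothetical feature vector: normalized byte histogram."""
    hist = np.bincount(np.frombuffer(file_bytes, dtype=np.uint8),
                       minlength=256)
    return hist / max(len(file_bytes), 1)

# Training data set: each entire file gets a single label.
# 1 = "malicious", 0 = "normal". Placeholder byte strings stand in
# for real file samples.
malicious_files = [b"\x90\x90\xcc\xcc" * 64]
normal_files = [b"\x00hello world\x00" * 64]

X = [extract_features(f) for f in malicious_files + normal_files]
y = [1] * len(malicious_files) + [0] * len(normal_files)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# An unknown file is classified as a whole, as either malicious or
# normal, with no indication of which part of the file is malicious.
unknown = b"\x90\x90\xcc\xcc" * 64
print("malicious" if model.predict([extract_features(unknown)])[0]
      else "normal")
```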
Currently available machine learning models are very sophisticated and are able to classify files with a high degree of accuracy. However, while a typical machine learning model can tell if an unknown file is malicious, the machine learning model is not able to identify which section or sections of the file are malicious.
SUMMARY

In one embodiment, a training data set for training a machine learning module is prepared by dividing normal files and malicious files into sections. Each section of a normal file is labeled as normal. Each section of a malicious file is labeled as malicious regardless of whether or not the section is malicious. The sections of the normal files and malicious files are used to train the machine learning module. The trained machine learning module is packaged as a machine learning model, which is provided to an endpoint computer. In the endpoint computer, an unknown file is divided into sections, which are input to the machine learning model to identify a malicious section of the unknown file, if any is present in the unknown file.
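As an illustrative sketch of the section-division step (the fixed section size and function name are assumptions for illustration; the disclosure does not prescribe a particular division scheme):

```python
def split_into_sections(file_bytes: bytes,
                        section_size: int = 4096) -> list[bytes]:
    """Divide a file into fixed-size sections; the last may be shorter.

    A fixed byte length is only one possible division; executable
    files could instead be divided along structural boundaries, such
    as the sections of a Portable Executable file.
    """
    return [file_bytes[i:i + section_size]
            for i in range(0, len(file_bytes), section_size)]
```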
These and other features of the present invention will be readily apparent to persons of ordinary skill in the art upon reading the entirety of this disclosure, which includes the accompanying drawings and claims.
The use of the same reference label in different drawings indicates the same or like components.
DETAILED DESCRIPTION

In the present disclosure, numerous specific details are provided, such as examples of apparatus, components, and methods, to provide a thorough understanding of embodiments of the invention. Persons of ordinary skill in the art will recognize, however, that the invention can be practiced without one or more of the specific details. In other instances, well-known details are not shown or described to avoid obscuring aspects of the invention.
Referring now to FIG. 1, there is shown a schematic diagram of a computer system 100 in accordance with an embodiment of the present invention. The computer system 100 may include, among other components, a processor 101 and a main memory 108.
The computer system 100 is a particular machine as programmed with one or more software modules 110, which comprise instructions non-transitorily stored in the main memory 108 for execution by the processor 101 to cause the computer system 100 to perform corresponding programmed steps. An article of manufacture may be embodied as a computer-readable storage medium including instructions that, when executed by the processor 101, cause the computer system 100 to be operable to perform the functions of the one or more software modules 110. In the example of FIG. 1, the software modules 110 may comprise modules for detecting malicious code sections of computer files, as described below.
In the example of FIG. 2, a machine learning model 230 for detecting malicious code sections of computer files is generated by a training computer system that hosts a pre-processor 210 and a machine learning module 220.
The pre-processor 210 may comprise instructions for dividing a file into a plurality of sections and assigning a classification label to each individual section. In one embodiment, the pre-processor 210 labels each section of a known malicious file as malicious and labels each section of a known normal file as normal. In marked contrast to supervised training where an entire file is assigned a single classification label, the pre-processor 210 assigns a classification label to each individual section of a file.
Because the pre-processor 210 labels each section of a malicious file as malicious regardless of whether or not the section is malicious, some of the sections of the malicious file may end up being labeled incorrectly. That is, a normal section (i.e., section with no malicious code) of the malicious file will also be labeled as malicious. This is markedly different from previous approaches where samples in the training data set are labeled correctly and each label applies to an entire file, rather than individual sections of a file.
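Reusing the split_into_sections() helper and the placeholder file samples from the sketches above, the labeling scheme of the pre-processor 210 may be sketched as follows (names remain illustrative):

```python
# Build a section-level training data set in the manner of the
# pre-processor 210: every section of a known malicious file is
# labeled malicious (1) and every section of a known normal file is
# labeled normal (0), even though some sections of a malicious file
# may carry no malicious code (deliberately noisy labels).
def label_sections(files: list[bytes],
                   label: int) -> list[tuple[bytes, int]]:
    samples = []
    for file_bytes in files:
        for section in split_into_sections(file_bytes):
            samples.append((section, label))
    return samples

training_set = (label_sections(malicious_files, label=1)
                + label_sections(normal_files, label=0))
```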
In the example of FIG. 2, a known normal file is divided into a plurality of file sections 211, and the pre-processor 210 labels each file section 211 as normal.

In the example of FIG. 2, a known malicious file is divided into a plurality of file sections 212, and the pre-processor 210 labels each file section 212 as malicious, regardless of whether or not the file section 212 actually contains malicious code.
With a sufficient number of samples of known normal and known malicious files, a suitable machine learning algorithm will be able to ignore incorrectly-labeled file sections as noise. This is especially true with executable files. For example, a section of a malicious file that contains no malicious code is likely to be the same as, or similar to, correctly-labeled sections of many known normal files. More specifically, with enough training samples, the sections that actually contain malicious code dominate what the machine learning algorithm learns about the malicious classification, while normal sections that are incorrectly labeled as malicious are outweighed by the many similar sections that are correctly labeled as normal.
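This intuition can be checked on synthetic data. In the following sketch, synthetic feature vectors stand in for real section features; normal-looking sections from malicious files carry incorrect malicious labels, yet a Random Forest still learns to classify truly normal sections as normal:

```python
# Synthetic check of label-noise tolerance (illustrative only).
# "Normal" sections cluster around one feature distribution and truly
# malicious sections around another. Normal-looking sections taken
# from malicious files are labeled malicious, i.e., incorrectly, yet
# the classifier still recovers the true boundary because many
# similar sections are correctly labeled normal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(2000, 16))     # label 0
malicious = rng.normal(loc=3.0, scale=1.0, size=(400, 16))   # label 1
# Sections with no malicious code, but found in malicious files and
# therefore labeled malicious (deliberate label noise).
mislabeled = rng.normal(loc=0.0, scale=1.0, size=(100, 16))  # label 1

X = np.vstack([normal, malicious, mislabeled])
y = np.array([0] * 2000 + [1] * 400 + [1] * 100)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Held-out truly-normal sections are still overwhelmingly classified
# as normal despite the mislabeled training samples.
test_normal = rng.normal(loc=0.0, scale=1.0, size=(500, 16))
print((clf.predict(test_normal) == 0).mean())  # typically close to 1.0
```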
Referring back to FIG. 2, the labeled file sections 211 and 212 are input as a training data set to the machine learning module 220, which employs a suitable machine learning algorithm, such as the SVM or Random Forest algorithm, to learn to classify an individual file section as either malicious or normal.
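Continuing the earlier sketches (extract_features() and training_set are the illustrative names introduced above, and joblib is one assumed way to serialize a scikit-learn model), training and packaging may look like:

```python
# Train the machine learning module on the labeled file sections and
# package the trained module as a machine learning model.
import joblib
from sklearn.ensemble import RandomForestClassifier

X = [extract_features(section) for section, _ in training_set]
y = [label for _, label in training_set]

section_model = RandomForestClassifier(n_estimators=100, random_state=0)
section_model.fit(X, y)

# The packaged model can now be provided to backend or endpoint
# computer systems.
joblib.dump(section_model, "section_model.joblib")
```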
As can be appreciated, by training the machine learning module 220 using the file sections 211 and 212, the resulting machine learning model 230 is able to detect malicious file sections and normal file sections. The machine learning model 230 may be deployed in a backend computer system to assist antivirus researchers in isolating malicious code for research or signature development. The machine learning model 230 may also be deployed in an endpoint computer system to protect the endpoint computer system against malware, as now described with reference to FIG. 3.
In the example of FIG. 3, an endpoint computer system 300 receives a target file, i.e., an unknown file being evaluated for malicious code. The endpoint computer system 300 hosts a malware detector 320 that includes the machine learning model 230.

In the example of FIG. 3, the malware detector 320 divides the target file into a plurality of file sections 321, in the same manner that the known normal and known malicious files were divided into sections to generate the training data set.

More particularly, in the example of FIG. 3, the file sections 321 of the target file are input to the machine learning model 230, which classifies each file section 321 as either malicious or normal.
In one embodiment, the malware detector 320 deems the target file to be malicious when at least one file section 321 of the target file is classified by the machine learning model 230 as malicious. In that case, the malware detector 320 may identify the particular section of the target file that is classified by the machine learning model 230 as malicious. The malware detector 320 may deem the target file to be normal if none of the file sections 321 of the target file is classified by the machine learning model 230 as malicious. The malware detector 320 may take a response action against a detected malicious file, such as putting the malicious file in quarantine, blocking the malicious file from being received in the endpoint computer system 300, cleaning the malicious file, alerting a user or administrator, etc.
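A minimal sketch of this endpoint-side evaluation, continuing the illustrative helper names from the earlier sketches (the response action is indicated only by a placeholder comment):

```python
# Endpoint-side evaluation in the manner described above: divide the
# target file into sections, classify each section with the machine
# learning model, deem the file malicious if any section is
# classified as malicious, and report which sections those are.
import joblib

section_model = joblib.load("section_model.joblib")

def evaluate_target_file(file_bytes: bytes) -> tuple[bool, list[int]]:
    sections = split_into_sections(file_bytes)
    if not sections:  # empty file: nothing to classify
        return False, []
    features = [extract_features(s) for s in sections]
    predictions = section_model.predict(features)
    malicious_indices = [i for i, p in enumerate(predictions) if p == 1]
    return len(malicious_indices) > 0, malicious_indices

is_malicious, bad_sections = evaluate_target_file(b"\x90\x90\xcc\xcc" * 64)
if is_malicious:
    # Response action would go here (e.g., quarantine); the section
    # indices identify where in the file the malicious code resides.
    print("malicious; sections:", bad_sections)
else:
    print("normal")
```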
Advantageously, the malware detector 320 is able to determine whether or not a file is malicious and which section of a malicious file contains malicious code (see arrow 305). This allows for a more thorough evaluation of a target file for malicious codes. Furthermore, by identifying the particular sections of a target file that contain malicious code, the malicious code may be extracted from the target file to clean the target file or to aid antivirus researchers in developing a signature for detecting the malicious code.
Methods and systems for detecting malicious code sections of computer files have been disclosed. While specific embodiments of the present invention have been provided, it is to be understood that these embodiments are for illustration purposes and not limiting. Many additional embodiments will be apparent to persons of ordinary skill in the art reading this disclosure.