Machine learning is a technique that uses the high-speed processing power of modern computers to execute algorithms that learn predictors of the behavior or characteristics of data. Machine learning techniques may execute algorithms on a set of training samples (a training set) with a known class or label, such as a set of files known to exhibit malicious or benign behaviors, to learn characteristics that will predict the behavior or characteristics of unknown samples, such as whether unknown files are malicious or benign.
Many current approaches to machine learning use algorithms that require a static training set. Such approaches (such as those based on decision trees) assume that all training samples are available at training time. There exists a class of supervised machine learning algorithms, known as on-line or continuous learning algorithms, that update the model on each new sample. However, these algorithms assume each new sample will be classified by an expert user.
A relevant machine learning method is batch mode active learning (BMAL). BMAL constructs a new classifier that is retrained based on a batch of new samples in an optionally repeatable process. BMAL, however, focuses on the selection of unlabeled samples to present to the user for adjudication and conducts repeated learning until some objective performance criterion is met. Additionally, BMAL does not cover the case where the training data is split between multiple locations and the original training and test data must be sent to the location where new samples are added.
Other relevant prior art methods are described in the following patents and published applications. For example, U.S. Pat. No. 6,513,025 (“the '025 patent”), entitled “Multistage Machine Learning Process,” involves partitioning of training sets by time intervals and generating multiple classifiers (one for each interval). Time intervals are cyclic/periodic (fixed frequency) in the preferred embodiment. The '025 patent leverages dependability models (method for selecting which classifier model to use based on the system input) to determine which classifier to use. In addition, the classifier update and training sample addition methods in this patent are continuous. The '025 patent is also limited to telecommunications network lines.
U.S. Pre-Grant Publication No. 20150067857 (“the '857 publication”) is directed towards an “In-situ Trainable Intrusion Detection System.” The described system in the '857 publication is based on semi-supervised learning (it uses some unlabeled samples). Learning is based on network traffic patterns (netflow or other flow metadata), not files. The '857 publication uses a Laplacian Regularized Least Squares learner and does not include a method for allowing users to select between classifiers or to view analysis of the performance of multiple classifiers. The '857 publication also only uses in-situ samples (samples from the client enterprise).
U.S. Pre-Grant Publication No. 20100293117 (“the '117 publication”), entitled “Method and System for Facilitating Batch Mode Active Learning,” discloses a method for selecting documents to include in a training set based on an estimate of the “reward” gained by including each sample in the training set (an estimate of performance increase). The reward can be based on an uncertainty associated with an unlabeled document or on document length. The '117 publication does not disclose detecting malicious software or files. U.S. Pre-Grant Publication No. 20120310864 (“the '864 publication”), “Adaptive Batch Mode Active Learning for Evolving a Classifier,” focuses on applying this technique to image, audio, and text data (not binary files and not for the purpose of malware detection). Moreover, the '864 publication requires the definition of a stop criterion, which is typically based on a predetermined desired level of performance. Importantly, the '864 publication method lacks accommodations for in-situ learning, such as the potential need to provide a partial training corpus, to express that corpus as feature vectors instead of maintaining the full samples, etc.
Existing machine learning techniques disclose learning algorithms and processes but do not cover a method for augmenting or retraining a classifier based on data not accessible to the original trainer. Existing machine learning techniques do not enable training on data samples that an end user does not wish to disclose to the third party that was originally responsible for conducting the machine learning.
Additionally, prior art malware sensors have an inherent problem in which each instance of a malware sensor (anti-virus, IDS, etc.) is identical provided their signatures or rule sets are kept up-to-date. In such instances, since each deployment of a cyber-defense sensor is identical, a bad actor or malware author may acquire the sensor and then test and alter their malware until it is not detected. This would make all such sensors vulnerable.
Described herein are embodiments that overcome the disadvantages of the prior art. These and other advantages are provided by a method for batched, supervised, in-situ machine learning classifier retraining for malware identification and model heterogeneity. The method produces a parent classifier model in one location and provides it to one or more in-situ retraining systems in one or more different locations; adjudicates the class determinations of the parent classifier over the plurality of samples evaluated by the in-situ retraining system or systems; determines a minimum number of adjudicated samples required to initiate the in-situ retraining process; creates new training and test sets using samples from one or more in-situ systems; blends a feature vector representation of the in-situ training and test sets with a feature vector representation of the parent training and test sets; conducts machine learning over the blended training set; evaluates the new and parent models using the blended test set and additional unlabeled samples; and elects whether to replace the parent classifier with the retrained version.
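The retraining sequence above can be sketched in outline as follows. This is an illustrative sketch only: the function parameters (`train_fn`, `eval_fn`, `blend_fn`), the 80/20 split, and the score comparison are assumptions for demonstration, not the disclosed implementation.

```python
def in_situ_retrain(parent_model, parent_train_vecs, parent_test_vecs,
                    adjudicated, min_batch, train_fn, eval_fn, blend_fn):
    """One round of batched, supervised in-situ retraining (illustrative sketch)."""
    # 1. Require a minimum number of user-adjudicated samples before retraining.
    if len(adjudicated) < min_batch:
        return parent_model, False

    # 2. Split the adjudicated in-situ samples into training and test portions.
    split = int(0.8 * len(adjudicated))
    insitu_train, insitu_test = adjudicated[:split], adjudicated[split:]

    # 3. Blend in-situ feature vectors with the parent's feature vectors.
    blended_train = blend_fn(parent_train_vecs, insitu_train)
    blended_test = blend_fn(parent_test_vecs, insitu_test)

    # 4. Train a candidate model with the same learning algorithm as the parent.
    candidate = train_fn(blended_train)

    # 5. Evaluate both models on the blended test set; the user (or an
    #    automated policy) elects whether to replace the parent classifier.
    replace = eval_fn(candidate, blended_test) >= eval_fn(parent_model, blended_test)
    return (candidate if replace else parent_model), replace
```

In practice the election in step 5 is user-driven; the automatic comparison here stands in for the user's decision.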
Described herein are embodiments of a system and method for in-situ classifier retraining for malware identification and model heterogeneity. Embodiments overcome the problems described above. For example, embodiments provide for the augmentation of an existing machine learning-based classification model based on user-driven confirmation or correction of the existing model's class prediction and in-situ retraining. In-situ herein refers to conducting the machine learning at the physical location of an installed instance of a classifier. Indeed, when applied across multiple instances, embodiments allow each instance to create a model unique to that instance.
A preferred embodiment is applied to the problem of determining if unknown/untrusted software or software application files are either benign or malicious. The child classifiers created through this method are not only unique but maintain or improve the statistical performance of the parent classifier. In particular, embodiments have been demonstrated to reduce false positive rates of software classification. Embodiments are designed to allow retraining of the parent classifier using a combination of the original training set and a supplemental training set, referred to as the in-situ training set. The in-situ training set is generated within a local instance's environment, eliminating the need for potentially sensitive or proprietary data to be shared with any other party—including the party which constructed the parent classifier. In embodiments, users, however, may elect to form trusted relationships with other users and securely share some or all of their in-situ training data using an abstraction of the potentially sensitive or proprietary data.
The embodiments described herein include many significant differences over the prior art. As opposed to the prior art described above, embodiments may require recurring batching of new samples at non-fixed intervals prior to retraining but do not assume that every new sample will be classified by an expert user or otherwise be eligible for inclusion in a retraining batch. These differences make embodiments of the system and method distinct from the class of on-line or continuous learning techniques.
Additionally, as opposed to BMAL, embodiments allow a user to choose samples at will to adjudicate. Likewise, in embodiments a user determines the number of cycles of retraining rather than using an objective stop criterion. Additionally, embodiments cover the case where the training data is split between multiple locations and the original training and test data must be sent to the location where new samples are added.
Further, as opposed to the '025 patent, the in-situ learning of the embodiments described herein involves the complete replacement of the current classifier, not the subdivision of the input space and continued use of the older models. Likewise, the embodiments described herein employ user-driven batch learning (not all events are included in the additional learning). Alternatively, in another aspect of this disclosure, the batch learning may be driven by an automated process. Contrary to the '857 publication, in-situ embodiments described herein may be fully supervised, as opposed to Laplacian Regularized Least Squares learners, which are semi-supervised. While unsupervised and semi-supervised learning may be implemented in aspects of the system, supervised learning is preferred because, for example, supervised learning may result in a classification determination of an unknown sample. Moreover, embodiments described herein may use a mix of samples from the client enterprise and those provided by the manufacturer. As opposed to the '117 publication, embodiments utilize all labeled samples. As distinguished from the '864 publication, embodiments have a trivial stop criterion (a single pass with the user making the decision on whether the performance is adequate) that does not require the calculation of any distance function between the batch of unlabeled data and the remaining unlabeled data and does not select a batch of training elements based on an evaluation of an objective function.
Embodiments allow users to retrain machine learning-based classifiers in-situ to the users' deployment of the classification software/hardware. The retraining allows improvement of the overall classifier performance (e.g., reducing false positives and false negatives). The in-situ retraining also allows the creation or production of a unique version of the classification model. That version may be unique to that instance and unique for that user. By having a tailored model, the user ensures that a malware producer will not be able to test the malware against the detection technology prior to attempting to compromise the user's network. In addition, the tailoring may be used to create models biased toward particular types of malware that are proprietary or sensitive and, therefore, unavailable to the creator of the parent classifier model. In some embodiments, sharing may be facilitated amongst multiple users by using abstracted sample representations that completely obscure the sample content but still allow others to leverage the sample for retraining. Furthermore, users may elect to share models trained in one location with other locations in their network or even among trusted partners. Additionally, or alternatively, models generated as a result of in-situ retraining may be exported to or imported from trusted partners.
With reference now to
Once the third party creates the classifier, the classifier is sent/deployed to the user facility (e.g., a customer) as a classifier instance, block 5. Such deployment 5 may be part of a multiple-user-facility (e.g., multiple-customer), multiple-instance deployment. A user facility houses system hardware and software, e.g., such as shown in
With continuing reference to
In embodiments, there is a required threshold number of adjudicated events that must be accumulated before a retrain may occur. When the user exceeds the required threshold number of adjudicated events, the user may elect to conduct a retrain. Adjudicated samples may be stored on one or more in-situ retraining systems. Information about adjudicated samples may also be acquired through sharing amongst other system users, provided a trust relationship exists. When the user initiates a retrain, the retrain manager creates a training and test set from the adjudicated in-situ samples, block 8. Alternatively, the in-situ retraining system may, without human intervention, initiate a retrain. The training and test set may be selected from a subset of the adjudicated samples. The retrain manager may also extract feature vectors from both the retrain training and test sets, block 9. Method 100 may then blend these in-situ feature vectors with the parent/base classifier's feature vectors (and feature vectors from sharing partners, if any), block 10. In a separate mode, an embodiment may use a subset of the parent/base classifier's feature vectors (and those from sharing partners) without any additional in-situ samples. This subset may be selected randomly from the full set of available feature vectors. In one aspect, in a first blending implementation, known as an additive method, the in-situ sample feature vectors may be added to the parent classifier feature vectors. In another aspect, in a second blending implementation, known as a replacement method, the in-situ sample feature vectors may replace an equal number of the parent classifier feature vectors. In another aspect, in a third blending implementation, known as a hybrid method, the in-situ sample feature vectors may be added to a subset of the parent classifier feature vectors. This may create a training set larger than the parent set but smaller than one created by the additive method.
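The three blending implementations can be illustrated on plain lists of feature vectors. The sketch below is an assumption-laden illustration: the random subset selection (`random.sample`) and the `parent_fraction` parameter stand in for whatever selection logic an embodiment actually uses.

```python
import random

def blend_additive(parent_vecs, insitu_vecs):
    """Additive method: in-situ vectors are appended to the full parent set."""
    return parent_vecs + insitu_vecs

def blend_replacement(parent_vecs, insitu_vecs, rng=random):
    """Replacement method: in-situ vectors replace an equal number of parent
    vectors, keeping the blended set the same size as the parent set."""
    keep = rng.sample(parent_vecs, len(parent_vecs) - len(insitu_vecs))
    return keep + insitu_vecs

def blend_hybrid(parent_vecs, insitu_vecs, parent_fraction=0.5, rng=random):
    """Hybrid method: in-situ vectors are added to a subset of the parent set,
    yielding a set larger than the parent subset but smaller than the additive
    result."""
    keep = rng.sample(parent_vecs, int(parent_fraction * len(parent_vecs)))
    return keep + insitu_vecs
```

Note the size relationships: replacement preserves the parent set size, additive grows it by the full in-situ count, and hybrid lands in between, which is what lets a user limit the influence of the in-situ samples.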
Using a hybrid method of blending may allow the user to limit the influence of the in-situ samples on the new classification model. A new classification model is trained by the machine learner using the same machine learning algorithm used to create the parent/base classifier, block 11. Once a new classifier is created it is evaluated against the retrain test set, which includes sample feature vectors from both the third-party (base classifier test set feature vectors) and the user facility (retrain test set feature vectors), block 12. Evaluation 12 occurs against both labeled and unlabeled samples not included in the training set. A system GUI may be provided to aid the user in conducting the evaluation. Embodiments may also provide an automated recommendation, e.g., provided by the retraining manager, as to which classifier is better (see, e.g.,
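A minimal sketch of the evaluation and automated recommendation step follows. The choice of metrics (false positive and false negative rates) matches the performance goals stated elsewhere in this description, but the specific recommendation rule here is an assumption, not the disclosed logic.

```python
def confusion_rates(model, labeled_samples):
    """Return (false_positive_rate, false_negative_rate) over labeled samples,
    where each sample is (feature_vector, label) and label 1 means malicious."""
    fp = fn = pos = neg = 0
    for vec, label in labeled_samples:
        pred = model(vec)
        if label == 1:
            pos += 1
            fn += pred == 0
        else:
            neg += 1
            fp += pred == 1
    return (fp / neg if neg else 0.0, fn / pos if pos else 0.0)

def recommend(parent, candidate, blended_test):
    """Sketch of an automated recommendation: prefer the candidate model when
    it lowers the false-positive rate without raising the false-negative rate."""
    p_fp, p_fn = confusion_rates(parent, blended_test)
    c_fp, c_fn = confusion_rates(candidate, blended_test)
    return "candidate" if (c_fp <= p_fp and c_fn <= p_fn) else "parent"
```

In embodiments the recommendation only aids the user; the user retains the final election between classifiers.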
With continuing reference to
In embodiments, successive retraining will use the previous round's training and test set as a basis for augmentation. The system for in-situ classifier retraining for malware identification and model heterogeneity may optionally elect to “anchor” the retraining to the original third-party base classifier and associated training and test set. When retraining in anchor mode, the original base classifier, original base classifier training set, and original base classifier test set, or subsets thereof, are used for all subsequent anchored retrainings.
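The difference between default (successive) retraining and anchor mode can be sketched as follows; the `blend` callable and list-based sets are illustrative assumptions.

```python
def successive_rounds(base_train, batches, blend, anchor_mode=False):
    """Accumulate retraining rounds over a sequence of adjudicated batches.
    In the default mode each round augments the previous round's blended set;
    in anchor mode each round re-blends against the original base set."""
    current = base_train
    for batch in batches:
        basis = base_train if anchor_mode else current
        current = blend(basis, batch)
    return current
```

The usage difference is that in default mode earlier batches persist into later rounds, while anchor mode discards intermediate rounds and always starts from the original third-party basis.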
With reference again to
System 200 may also include a GUI to enable display of in-situ models, parent models, test results and classifications, output by server, to user. The GUI may also enable entry and receipt of user inputs that, e.g., confirm or correct classifications, elect to retrain, elect to accept a new in-situ model, etc., as described herein. In embodiments, server receives user inputs entered through GUI and executes steps as described herein. In another aspect of this disclosure, the server receives inputs generated by the in-situ retraining system.
With reference now to
With reference now to
With reference now to
When shared data is used in the construction of in-situ training and test sets, the user may elect to prioritize the inclusion of the shared data relative to their own and to each provider of the shared feature vectors. Each source prioritization is converted to a percent of the training and test set that will be taken from that source's adjudicated samples.
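The conversion of per-source prioritizations into shares of the training and test set can be sketched as below. The proportional-quota rule and the handling of the rounding remainder are illustrative assumptions.

```python
def source_quotas(priorities, set_size):
    """Convert per-source prioritizations into per-source sample counts.
    Each source's priority becomes its percent share of the training/test
    set drawn from that source's adjudicated samples."""
    total = sum(priorities.values())
    quotas = {src: int(set_size * p / total) for src, p in priorities.items()}
    # Assign any rounding remainder to the highest-priority source.
    top = max(priorities, key=priorities.get)
    quotas[top] += set_size - sum(quotas.values())
    return quotas
```

For example, prioritizing local samples 3:1 over a sharing partner's yields a 75%/25% split of the set.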
Embodiments of the system and method for in-situ classifier retraining for malware identification and model heterogeneity, as described herein, overcome a number of defects and disadvantages of the prior art. For example, embodiments described herein address the challenge of being able to train on data samples that the end user does not wish to disclose to the third party responsible for conducting the machine learning. An example of this scenario is the identification of malicious pdf files. The third party may have a corpus of malicious and benign pdfs to train a classifier, but that classifier may produce an unacceptable number of false positives when applied to a user's pdf files. The user, however, does not wish to share the pdf files that are being incorrectly marked because they may contain sensitive or proprietary information. By allowing the user to conduct retraining in-situ, the user gets the benefit of having their data added to the training set without the cost or risk of providing their samples to the third party or other users. In another aspect of this disclosure, the in-situ retraining system may add data to the training set without incurring the cost or risk of providing the samples to the third party or other users.
In addition, embodiments of the system and method for in-situ classifier retraining for malware identification and model heterogeneity solve the problem in cyber defense where each instance of a malware sensor (anti-virus, IDS, etc.) is identical (assuming each instance's signatures are kept up-to-date). Since each deployment of a cyber-defense sensor is identical, and a bad actor or malware author may acquire the sensor, it is possible for the bad actor to test/alter their malware until it is not detected. In-situ training allows each instance of a sensor to tailor itself on data not available to anyone but the local user; this method effectively guarantees all in-situ trained classifier models are unique. In other words, the set of all malware detection models is heterogeneous instead of homogeneous. Bad actors will no longer be able to rely on pre-testing their malware and incur greater risk of discovery across the community of users.
Embodiments of the system and method for in-situ classifier retraining for malware identification and model heterogeneity also address the issue of secure sharing of potentially sensitive or proprietary information for the purpose of machine learning. By establishing trust relationships among users in which they share the sample feature vectors, and not the samples themselves, each user gains the benefit of the others' work without having to expose the sensitive data.
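The abstraction that makes such sharing safe can be illustrated with a toy feature extractor. The specific features below (length, byte entropy, printable ratio) are illustrative assumptions; the point is that the shared vector does not permit reconstruction of the original sample content.

```python
import math
from collections import Counter

def abstract_sample(data: bytes, label: int):
    """Reduce a raw sample to a shareable (feature_vector, label) pair.
    The original bytes cannot be reconstructed from these summary features,
    so the vector can be shared with trusted partners in place of the sample."""
    counts = Counter(data)
    n = len(data) or 1
    # Shannon entropy of the byte distribution, in bits per byte (0..8).
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    # Fraction of printable ASCII bytes.
    printable = sum(1 for b in data if 32 <= b < 127) / n
    return ([float(len(data)), entropy, printable], label)
```

A partner receiving only such vectors and labels can still include them in a blended training set without ever seeing the underlying file.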
Embodiments contain several innovative concepts. Embodiments use machine learning and in-situ retraining to produce a unique classification model for each user. The implementation of in-situ learning described here is based on a combination of third-party and in-situ datasets allowing the user the benefits of tailoring without requiring the user to release data to the third-party. Due to the blending of datasets and a tightly controlled and automated machine learning process, the user is less prone to unintentional errors introduced by poor machine learning methods that could result in poor performance. Embodiments of the system allow the user to define which samples are made eligible for retraining rather than relying on an automated analysis which may not reflect the user's priorities.
Testing of embodiments has demonstrated overall reductions in false positive rates on previously misclassified in-situ samples in excess of 99%, with overall false positive performance improvements on a broad set of samples in excess of 30%. These improvements are achieved with little to no increase in false negative rates. In addition, test results have also shown that using different data to retrain classifiers results in different classification behaviors on the same sample.
A summary of an in-situ process including the formation of the base set prior to in-situ, according to embodiments described herein follows below (e.g., steps 1-5 occur in the 3rd-party facility while steps 6-14 occur in the user facility):
The terms and descriptions used herein are set forth by way of illustration only and are not meant as limitations. Those skilled in the art will recognize that many variations are possible within the spirit and scope of the invention as defined in the following claims, and their equivalents, in which all terms are to be understood in their broadest possible sense unless otherwise indicated.
This application is a continuation of U.S. patent application Ser. No. 16/916,049, filed Jun. 29, 2020, now U.S. Pat. No. 11,481,684 issued on Oct. 25, 2022, which is a continuation of U.S. patent application Ser. No. 16/180,790, filed Nov. 5, 2018, now U.S. Pat. No. 10,733,539 issued on Aug. 4, 2020, which is a continuation of U.S. patent application Ser. No. 15/176,784, filed Jun. 8, 2016, now U.S. Pat. No. 10,121,108 issued on Nov. 6, 2018, which claims benefit of U.S. Patent Application No. 62/199,390, filed Jul. 31, 2015, each of which is hereby incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
6513025 | Rosen | Jan 2003 | B1 |
8161548 | Wan | Apr 2012 | B1 |
8374975 | Cierniak | Feb 2013 | B1 |
8793201 | Wang et al. | Jul 2014 | B1 |
9100669 | Feng et al. | Aug 2015 | B2 |
9355067 | Monga et al. | May 2016 | B1 |
9818066 | Rammohan | Nov 2017 | B1 |
9935972 | Zhang | Apr 2018 | B2 |
10019740 | Simantov et al. | Jul 2018 | B2 |
10121108 | Miserendino et al. | Nov 2018 | B2 |
10318883 | Gerard | Jun 2019 | B2 |
10515378 | Modarresi | Dec 2019 | B2 |
10733539 | Miserendino et al. | Aug 2020 | B2 |
10810193 | Subramanya | Oct 2020 | B1 |
10977571 | Miserendino | Apr 2021 | B2 |
11037236 | Ram | Jun 2021 | B1 |
11138617 | Litmanovich | Oct 2021 | B2 |
11182691 | Zhang | Nov 2021 | B1 |
11271939 | Apostolopoulos et al. | Mar 2022 | B2 |
11310268 | Bowditch et al. | Apr 2022 | B2 |
11334928 | Chaudhari | May 2022 | B2 |
20040220892 | Cohen et al. | Nov 2004 | A1 |
20050267850 | Chen | Dec 2005 | A1 |
20060288038 | Zheng et al. | Dec 2006 | A1 |
20070282780 | Regier et al. | Dec 2007 | A1 |
20080103996 | Forman et al. | May 2008 | A1 |
20080154820 | Kirshenbaum et al. | Jun 2008 | A1 |
20100217732 | Yang et al. | Aug 2010 | A1 |
20100220906 | Abramoff et al. | Sep 2010 | A1 |
20100293117 | Xu | Nov 2010 | A1 |
20120166366 | Zhou et al. | Jun 2012 | A1 |
20120300980 | Yokono | Nov 2012 | A1 |
20120310864 | Chakraborty et al. | Dec 2012 | A1 |
20130080359 | Will et al. | Mar 2013 | A1 |
20130151443 | Kyaw | Jun 2013 | A1 |
20140090061 | Avasarala et al. | Mar 2014 | A1 |
20140187177 | Sridhara et al. | Jul 2014 | A1 |
20140358828 | Phillipps et al. | Dec 2014 | A1 |
20150066552 | Shami | Mar 2015 | A1 |
20150067857 | Symons et al. | Mar 2015 | A1 |
20150135262 | Porat et al. | May 2015 | A1 |
20150154353 | Xiang et al. | Jun 2015 | A1 |
20150178639 | Martin et al. | Jun 2015 | A1 |
20150235079 | Yokono | Aug 2015 | A1 |
20150324686 | Julian et al. | Nov 2015 | A1 |
20160063397 | Ylipaavalniemi et al. | Mar 2016 | A1 |
20160217349 | Hua et al. | Jul 2016 | A1 |
20160299785 | Anghel et al. | Oct 2016 | A1 |
20160335435 | Schmidtler | Nov 2016 | A1 |
20160342903 | Shumpert | Nov 2016 | A1 |
20160379135 | Shteingart et al. | Dec 2016 | A1 |
20190336097 | Bregman-Amitai et al. | Nov 2019 | A1 |
Number | Date | Country |
---|---|---|
2723034 | Apr 2014 | EP |
2646911 | Apr 2018 | EP |
2012-027710 | Feb 2012 | JP |
2014-504399 | Feb 2014 | JP |
2015-079504 | Apr 2015 | JP |
2005091214 | Sep 2005 | WO |
Entry |
---|
European Search Report and Search Opinion Received for EP Application No. 16833453.0, mailed on Jan. 4, 2019, 10 pages. |
Iker Burguera et al: “Crowdroid” Security and Privacy in Smartphones and Mobile Devices, ACM, 2 Penn Plaza, Suite 701 New York NY 10121-0701 USA, Oct. 17, 2011 (Oct. 17, 2011), pp. 15-26. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2016/036408, mailed on Feb. 15, 2018, 12 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2016/036408, mailed on Sep. 13, 2016, 13 pages. |
Supplementary European Search Report for Application No. 16833453.0 (PCT/US2016/036408) dated Jan. 22, 2019, 1 page. |
US Patent Application filed on Jun. 29, 2020, entitled “System and Method for Machine Learning Model Determination and Malware Identification”, U.S. Appl. No. 16/916,049. |
US Patent Application filed on Sep. 9, 2022, entitled “System And Method For Machine Learning Model Determination And Malware Identification”, U.S. Appl. No. 17/930,827. |
Number | Date | Country | |
---|---|---|---|
20230222381 A1 | Jul 2023 | US |
Number | Date | Country | |
---|---|---|---|
62199390 | Jul 2015 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16916049 | Jun 2020 | US |
Child | 17930827 | US | |
Parent | 16180790 | Nov 2018 | US |
Child | 16916049 | US | |
Parent | 15176784 | Jun 2016 | US |
Child | 16180790 | US |