System and method for detecting malware injected into memory of a computing device

Information

  • Patent Grant
  • Patent Number
    11,151,247
  • Date Filed
    Thursday, July 13, 2017
  • Date Issued
    Tuesday, October 19, 2021
Abstract
A malicious code detection module identifies potentially malicious instructions in memory of a computing device. The malicious code detection module examines the call stack for each thread running within the operating system of the computing device. Within each call stack, the malicious code detection module identifies the originating module for each stack frame and determines whether the originating module is backed by an image on disk. If an originating module is not backed by an image on disk, the thread containing that originating module is flagged as potentially malicious, execution of the thread optionally is suspended, and an alert is generated for the user or administrator.
Description
FIELD OF THE DISCLOSURE

The present disclosure relates to a system and method for detecting the execution of malicious instructions injected into the memory of a computing device.


BACKGROUND

As computing devices become increasingly complex, viruses and malware also are becoming increasingly complex and difficult to detect and prevent. While the prior art includes many approaches for scanning non-volatile storage such as a hard disk drive for such threats, the prior art includes few satisfactory solutions for detecting malicious code loaded into memory or the processor itself.



FIG. 1 depicts an exemplary prior art computing device 100 comprising processor 110, memory 120, and storage device 130. In this example, memory 120 is volatile and can comprise DRAM, SRAM, SDRAM, or other known memory devices. Storage device 130 is non-volatile and can comprise a hard disk drive, solid state drive, flash memory, or other known storage devices. One of ordinary skill in the art will understand that processor 110 can include a single processor core or multiple processor cores as well as numerous cache memories, as is known in the prior art. Processor 110 executes operating system 140. Examples of operating system 140 include the operating systems known by the trademarks WINDOWS® by Microsoft, IOS® by Apple, CHROME OS® and ANDROID® by Google, LINUX, and others.


In FIG. 2, data is stored in storage device 130. There are numerous mechanisms to store data in storage device 130, and two known mechanisms are shown for illustration purposes. In one mechanism, data is stored as blocks 220 and can be accessed by logical block address (LBA) or similar addressing scheme. In another mechanism, data is stored as files 230 and can be accessed using a file system. In the prior art, scanning module 210 can be executed by processor 110 and can scan either blocks 220 or files 230 to look for malicious code. This often is referred to as virus scan software and is well-suited for identifying and nullifying known malicious programs that are stored in non-volatile devices such as in storage device 130.
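For illustration only, the following is a minimal Python sketch of the file-based scanning approach performed by scanning module 210 over files 230. The signature list and helper names are hypothetical, and a production scanner would stream large files and use an updated signature database rather than the toy values shown here.

```python
import os

# Hypothetical byte signatures of known malicious programs; a real scanner
# would consult a large, regularly updated signature database.
KNOWN_SIGNATURES = [b"\xde\xad\xbe\xef\x90\x90", b"EVIL_PAYLOAD"]

def scan_file(path):
    """Return True if the file contains any known signature."""
    try:
        with open(path, "rb") as f:
            data = f.read()
    except OSError:
        return False
    return any(sig in data for sig in KNOWN_SIGNATURES)

def scan_directory(root):
    """Scan every file under root, mimicking a file-level virus scan."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if scan_file(path):
                hits.append(path)
    return hits
```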


While prior art techniques are well-suited for detecting known malicious programs stored in storage device 130, there is no satisfactory technique for detecting malicious instructions that have been injected into memory 120 but not stored in storage device 130.


What is needed is a mechanism for detecting malicious instructions that have been injected into processor 110 or memory 120 but not stored in storage device 130 and generating an alert upon such detection and/or suspending execution of the malicious instructions.


BRIEF SUMMARY OF THE INVENTION

In the embodiments described herein, a malicious code detection module identifies potentially malicious instructions in memory of a computing device. The malicious code detection module examines the call stack for each thread running within the operating system of the computing device. Within each call stack, the malicious code detection module identifies the originating module for each stack frame and determines whether the originating module is backed by an image on disk. If an originating module is not backed by an image on disk, the thread containing that originating module is flagged as potentially malicious, execution of the thread optionally is suspended, and an alert is generated for the user or administrator.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 depicts a prior art computing device.



FIG. 2 depicts prior art virus scan software.



FIG. 3 depicts multiple threads running within a prior art operating system.



FIG. 4 depicts attribute information for various programs running in a prior art computing device.



FIG. 5 depicts a malicious code detection module for identifying potentially malicious instructions.



FIG. 6 further depicts a malicious code detection module for identifying potentially malicious instructions.





DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Additional aspects of prior art systems will now be described. In FIG. 3, processor 110 executes operating system 140. Operating system 140 can manage multiple threads of instructions that are executed concurrently. Here, exemplary threads 301 and 302 are depicted, but it is to be understood that additional threads can be executed. Operating system 140 maintains a call stack for each thread. Here, exemplary call stack 312 is depicted, which is the call stack for thread 302. Threads can be related to one another; in this example, thread 302 was initiated by thread 301.


In this simplified example, call stack 312 comprises variables 331, 332, and 335 and parameter 333, which were placed in call stack 312 by thread 302. Return address 334 also was placed on call stack 312 by thread 302. Return address 334 is the address corresponding to the instruction in thread 302 that placed stack frame 341 in call stack 312. A stack frame is a collection of data placed in a call stack as part of a procedure. Here, stack frame 341 comprises variables 331 and 332, parameter 333, and return address 334.
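As a minimal Python model of the structures just described, the sketch below represents a stack frame as the collection of variables, a parameter, and a return address placed on the call stack. The numeric values simply reuse the reference numerals from the figure and are not real addresses.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StackFrame:
    """A collection of data placed on a call stack as part of a procedure call."""
    variables: List[int]
    parameter: int
    return_address: int  # address of the instruction that pushed this frame

@dataclass
class CallStack:
    """Simplified model of a call stack such as call stack 312."""
    frames: List[StackFrame] = field(default_factory=list)

# The simplified example: stack frame 341 holds variables 331 and 332,
# parameter 333, and return address 334 (illustrative values only).
stack_312 = CallStack(frames=[
    StackFrame(variables=[331, 332], parameter=333, return_address=334)
])
```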


Operating system 140 further comprises application programming interface (API) module 320, which is a mechanism by which threads can invoke APIs specific to operating system 140.



FIG. 4 depicts further aspects of the prior art. Here, memory 120 contains various types of programs, user data, and unassigned portions. Each of these items is stored in a particular address range within memory 120, and memory 120 includes attribute information 410 that indicates whether each item is backed by a file stored in storage device 130 (e.g., whether that item is “backed on disk”). In this simple example, attribute information 410 includes address ranges for operating system 140, utility program 411, application program 412, program 413, user data 414, and an unassigned area 415. Attribute information 410 further indicates whether each item is backed by a file in storage device 130 or not. Here, all items are in fact backed by a file except for program 413. Attribute information 410 typically is established and managed by operating system 140.
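Attribute information 410 is established and managed by the operating system itself. As a rough Linux analogue (an assumption for illustration, not the embodiment's attribute table), the kernel exposes per-region backing information through /proc/&lt;pid&gt;/maps, where a mapping with no pathname (or a pseudo-name such as [heap]) is anonymous, i.e., not backed by a file on disk.

```python
def region_backed_by_file(pid, address):
    """Return True if `address` falls in a file-backed mapping of process `pid`.

    Reads /proc/<pid>/maps (Linux only). Each line looks like:
    7f3c8a000000-7f3c8a200000 r-xp 00000000 08:01 131138  /usr/lib/libc.so.6
    An empty pathname, or one like [heap] or [stack], means the region is
    anonymous and therefore not backed by a file on disk.
    """
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            fields = line.split(maxsplit=5)
            start, end = (int(x, 16) for x in fields[0].split("-"))
            if start <= address < end:
                path = fields[5].strip() if len(fields) == 6 else ""
                return bool(path) and not path.startswith("[")
    return False  # address is not mapped at all
```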


With reference now to FIG. 5, an embodiment of computing device 500 is depicted. Computing device 500 comprises processor 110 and operating system 140 as in the prior art. Computing device 500 further comprises malicious code detection module 510, which comprises lines of code executed by processor 110. Malicious code detection module 510 optionally can be part of the kernel of operating system 140 or can be code outside of operating system 140 that is given special privileges by operating system 140, such as the ability to access attribute information 410 and/or to suspend execution of a thread.


The embodiments detect malicious code based on three characteristics that typically are present in malicious code. First, malicious code usually owns a thread of execution. Second, this thread of execution originates or operates from code that is not backed by a file on disk. Third, the thread of execution must call the operating system API module 320 directly in order for the malicious code to effect appreciable activity on the system. That is, in order for the malicious code to inflict harm, it inevitably must call operating system API module 320 directly. Although there are some exceptions, these three features generally are not found in benign applications or in operating system 140 itself.


Malicious code detection module 510 first enumerates the call stack of each thread of execution. In one embodiment, malicious code detection module 510 assigns a unique identifier to each call stack. Once enumerated, each call stack is analyzed to determine whether it is malicious in nature.
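A minimal sketch of this enumeration step is shown below. The `capture_call_stack` callable is a hypothetical platform-specific helper (for example, a wrapper around an OS stack-walking facility) and is not specified by the embodiments; the sketch only shows the bookkeeping of assigning a unique identifier to each captured call stack.

```python
import uuid

def enumerate_call_stacks(threads, capture_call_stack):
    """Capture and label the call stack of every thread of execution.

    `threads` is any iterable of thread identifiers; `capture_call_stack` is a
    hypothetical, platform-specific callable that returns a list of stack
    frames, ordered from the top of the stack downward, for a given thread.
    """
    stacks = {}
    for tid in threads:
        stack_id = uuid.uuid4()  # unique identifier for this call stack
        stacks[stack_id] = {
            "thread": tid,
            "frames": capture_call_stack(tid),
        }
    return stacks
```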


In the simplified example of FIG. 5, malicious code detection module 510 starts at the top of call stack 312 and works down. The top of a call stack is almost always a system call originating from an operating system library. If not, the thread was likely performing a CPU-intensive task and is immediately deemed non-malicious. Here, the top of call stack 312 is system call 521 (which represents a call made to API module 320), and the analysis therefore continues.
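A sketch of this top-of-stack test follows. The `is_system_call` predicate is a hypothetical stand-in for whatever platform-specific check determines that a frame is a call into an operating-system library such as API module 320.

```python
def stack_warrants_analysis(frames, is_system_call):
    """Apply the top-of-stack test: only continue analysis if the top frame
    is a system call into an operating-system library.

    `frames` is ordered from the top of the stack downward; `is_system_call`
    is a hypothetical predicate supplied by the caller.
    """
    if not frames:
        return False
    if not is_system_call(frames[0]):
        # Top of stack is not a system call: the thread is likely doing a
        # CPU-intensive task and is immediately deemed non-malicious.
        return False
    return True
```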


Malicious code detection module 510 continues down call stack 312 and determines the originating module for each stack frame in the reverse order in which the stack frames were added to call stack 312. Here, stack frames 502 and 501 are shown. Malicious code detection module 510 determines the return addresses for stack frames 502 and 501, which here are return addresses 512 and 511, and determines the procedure within thread 302 associated with each return address. Malicious code detection module 510 then consults attribute information 410 to determine whether the code in which that procedure is contained is backed by a file in storage device 130. If it is (as would be the case if the procedure is part of application program 412), the procedure and the thread containing it are deemed non-malicious. If it is not (as would be the case if the procedure is part of program 413), the procedure and the thread containing it are deemed potentially malicious.
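The sketch below combines the frame walk with the backed-on-disk check. The `resolve_module` and `is_backed_on_disk` helpers are hypothetical placeholders for the platform-specific lookups that map a return address to its containing module and then consult attribute information such as 410; they are assumptions, not part of the embodiments themselves.

```python
def classify_thread(frames, resolve_module, is_backed_on_disk):
    """Walk the call stack from the top down and classify the thread.

    `resolve_module` maps a frame's return address to the module (code region)
    containing the procedure that pushed the frame; `is_backed_on_disk`
    consults the OS-maintained attribute information for that module.
    """
    for frame in frames:                    # reverse order of frame creation
        module = resolve_module(frame.return_address)
        if module is None:
            continue                        # return address not within any known module
        if not is_backed_on_disk(module):
            return "potentially_malicious"  # e.g., injected code like program 413
    return "non_malicious"                  # e.g., code from application program 412
```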


With reference to FIG. 6, when malicious code detection module 510 determines that a procedure and the thread containing it are potentially malicious, it optionally suspends the thread (if operating system 140 has given malicious code detection module 510 permission to perform such an action) and/or it generates alert 610.
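A sketch of this response step is shown below. The `suspend_thread` and `send_alert` hooks are hypothetical; the suspension path assumes, as described above, that operating system 140 has granted the detection module permission to suspend threads.

```python
def respond_to_detection(thread_id, can_suspend, suspend_thread, send_alert):
    """Respond when a thread has been classified as potentially malicious.

    `suspend_thread` and `send_alert` are hypothetical platform hooks; the
    suspension path runs only if the operating system has granted the
    detection module the privilege to suspend threads.
    """
    if can_suspend:
        suspend_thread(thread_id)  # thread stays suspended until a user or
                                   # administrator expressly resumes it
    send_alert(f"Thread {thread_id} contains code not backed by a file on disk")
```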


If suspended, thread 302 will not resume execution unless and until a user or administrator expressly instructs computing device 500 to proceed with execution of thread 302.


Alert 610 can take any of a variety of forms. Alert 610 can be a message displayed on a display operated by a user or administrator. Alert 610 also might be an email, SMS message, MMS message, or other message sent to a device operated by a user or administrator. Alert 610 also might be an audible sound generated by computing device 500.


With reference again to FIG. 5, it is understood that each stack frame is analyzed in the reverse order in which the stack frame was added to the call stack. Optionally, malicious code detection module 510 can stop this analysis when threshold/event 520 is reached. Examples of different threshold/event 520 possibilities include: (1) Malicious code detection module 510 can analyze X stack frames and then stop; (2) Malicious code detection module 510 can analyze the stack frames that were added to call stack 312 within the last Y seconds; (3) Malicious code detection module 510 can analyze the stack frames that were added to call stack 312 since the last analysis of call stack 312 performed by malicious code detection module 510; or (4) Malicious code detection module 510 can analyze the stack frames until it identifies a return address associated with a certain type of procedure. This type of limitation might be desirable because call stacks can become very large, and continuing the analysis indefinitely may slow down processor 110 in performing other tasks.
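The sketch below illustrates the four example stopping conditions. The frame attribute `added_at`, the dictionary keys, and the `stop_predicate` callable are assumptions made for illustration; the embodiments only require that the frame walk terminate once the chosen threshold or event is reached.

```python
import time

def frames_to_analyze(frames, threshold):
    """Trim the frame walk according to a threshold/event such as 520.

    `frames` are ordered from the top of the stack downward; each is assumed
    to carry `added_at` (timestamp) and `return_address` attributes.
    `threshold` is a dict naming one of the four example stopping conditions.
    """
    kind = threshold["kind"]
    if kind == "max_frames":            # (1) analyze at most X stack frames
        return frames[:threshold["x"]]
    if kind == "recent_seconds":        # (2) frames added within the last Y seconds
        cutoff = time.time() - threshold["y"]
        return [f for f in frames if f.added_at >= cutoff]
    if kind == "since_last_analysis":   # (3) frames added since the previous analysis
        return [f for f in frames if f.added_at >= threshold["last_run"]]
    if kind == "stop_at_procedure":     # (4) stop at a certain type of procedure
        out = []
        for f in frames:
            out.append(f)
            if threshold["stop_predicate"](f.return_address):
                break
        return out
    return frames                       # no threshold: analyze the entire stack
```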


The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements, and procedures which, although not explicitly shown or described herein, embody the principles of the disclosure and can be thus within the spirit and scope of the disclosure. Various different exemplary embodiments can be used together with one another, as well as interchangeably therewith, as should be understood by those having ordinary skill in the art. In addition, certain terms used in the present disclosure, including the specification, drawings and claims thereof, can be used synonymously in certain instances, including, but not limited to, for example, data and information. It should be understood that, while these words, and/or other words that can be synonymous to one another, can be used synonymously herein, that there can be instances when such words can be intended to not be used synonymously. Further, to the extent that the prior art knowledge has not been explicitly incorporated by reference herein above, it is explicitly incorporated herein in its entirety. All publications referenced are incorporated herein by reference in their entireties.

Claims
  • 1. A method of detecting malicious code in a computing device comprising a processor executing an operating system and a malicious code detection module, memory, and a non-volatile storage device, the method comprising: identifying, by the malicious code detection module, a call stack for a thread of execution within the operating system that includes an originating module and an attribute table; assigning, by the malicious code detection module, a unique identifier to the call stack; and when a top of the call stack contains a call to an application programming interface of the operating system: determining, by the malicious code detection module, the originating module that initiated a stack frame in the call stack, wherein the determining step is performed for additional stack frames in the call stack until a threshold or event is reached, wherein the threshold or event is based on stack frames, to be analyzed by the malicious code detection module, that were added to the call stack within a specific time period; and generating an alert, by the malicious code detection module, when the attribute table associated with the originating module indicates that the originating module is not backed by a file stored in the non-volatile storage device.
  • 2. The method of claim 1, wherein the malicious code detection module is part of a kernel of the operating system.
  • 3. The method of claim 1, wherein the malicious code detection module is not part of the operating system.
  • 4. A method of detecting malicious code in a computing device comprising a processor executing an operating system and a malicious code detection module, memory, and a non-volatile storage device, the method comprising: identifying, by the malicious code detection module, a call stack for a thread of execution within the operating system that includes an originating module and an attribute table; assigning, by the malicious code detection module, a unique identifier to the call stack; and when a top of the call stack contains a call to an application programming interface of the operating system: determining, by the malicious code detection module, the originating module that initiated a stack frame in the call stack, wherein the determining step is performed for additional stack frames in the call stack until a threshold or event is reached, wherein the threshold or event is based on stack frames, to be analyzed by the malicious code detection module, that were added to the call stack within a specific time period; and suspending, by the malicious code detection module, the thread of execution containing the originating module when the attribute table associated with the originating module indicates that the originating module is not backed by a file stored in the non-volatile storage device.
  • 5. The method of claim 4, wherein the malicious code detection module is part of a kernel of the operating system.
  • 6. The method of claim 4, wherein the malicious code detection module is not part of the operating system.
  • 7. A computing device comprising: a processor executing an operating system and a malicious code detection module; memory; and a non-volatile storage device; wherein the malicious code detection module comprises instructions for: identifying a call stack for a thread of execution that includes an originating module and an attribute table within the operating system; assigning, by the malicious code detection module, a unique identifier to the call stack; and when a top of the call stack contains a call to an application programming interface of the operating system: determining the originating module that initiated a stack frame in the call stack, wherein the malicious code detection module further comprises instructions for performing the determining step for additional stack frames in the call stack until a threshold or event is reached, wherein the threshold or event is based on stack frames, to be analyzed by the malicious code detection module, that were added to the call stack within a specific time period; and generating an alert if the attribute table associated with the originating module indicates that the originating module is not backed by a file stored in the non-volatile storage device.
  • 8. The device of claim 7, wherein the malicious code detection module is part of a kernel of the operating system.
  • 9. The device of claim 7, wherein the malicious code detection module is not part of the operating system.
  • 10. The device of claim 7, wherein the malicious code detection module further comprises instructions for: suspending the thread of execution containing the originating module if the originating module is not backed by a file stored in the non-volatile storage device.
  • 11. A method of detecting malicious code in a computing device, the method comprising: identifying, using a processor, a call stack for a thread of execution that includes an originating module and an attribute table within an operating system; assigning, by a malicious code detection module, a unique identifier to the call stack; and when a top of the call stack contains a system call from an operating system library to an application programming interface of the operating system: analyzing, using the processor, the call stack in reverse order in which stack frames were added to the call stack, the analyzing comprising: when the call stack does not include a direct call from the thread of execution to the operating system, determining that the thread is non-malicious; and when the call stack includes a direct call from the thread of execution to an application programming interface of the operating system: determining the originating module that initiated a stack frame in the call stack; and when the attribute table associated with the originating module indicates that the originating module is not backed by a file stored in the non-volatile storage device, determining that the thread is malicious, wherein the malicious code detection module further comprises instructions for performing the step of determining the originating module that initiated a stack frame in the call stack, for additional stack frames in the call stack until a threshold or event is reached, and wherein the threshold or event is based on stack frames, to be analyzed by the malicious code detection module, that were added to the call stack within a specific time period.
  • 12. The method of claim 11, further comprising, in response to determining that the thread is malicious, generating an alert.
  • 13. The method of claim 12, wherein the malicious code detection module is part of a kernel of the operating system.
  • 14. The method of claim 12, wherein the malicious code detection module is not part of the operating system.
  • 15. The method of claim 1, further comprising enumerating the call stack and the additional call stacks.
  • 16. The method of claim 1, further comprising assigning, by the malicious code detection module, a unique identifier to each of the additional call stacks.
  • 17. The method of claim 1, wherein the threshold or event is based on a specified number of stack frames to be analyzed by the malicious code detection module.
  • 18. The method of claim 1, further comprising determining, by the malicious code detection module, a return address for the stack frame.
  • 19. The method of claim 4, further comprising determining, by the malicious code detection module, a return address for the stack frame.
  • 20. The device of claim 7, wherein the malicious code detection module further comprises instructions for determining a return address for the stack frame.
US Referenced Citations (59)
Number Name Date Kind
5481684 Richter et al. Jan 1996 A
7085928 Schmid Aug 2006 B1
7640589 Mashevsky et al. Dec 2009 B1
8555385 Bhatkar et al. Oct 2013 B1
8555386 Belov Oct 2013 B1
9055093 Borders Jun 2015 B2
9292689 Chuo Mar 2016 B1
9356944 Aziz May 2016 B1
9407648 Pavlyushchik et al. Aug 2016 B1
9509697 Salehpour Nov 2016 B1
9690606 Ha et al. Jun 2017 B1
10045218 Stapleton Aug 2018 B1
10397255 Bhalotra et al. Aug 2019 B1
20030200464 Kidron Oct 2003 A1
20040199763 Freund Oct 2004 A1
20050102601 Wells May 2005 A1
20050160313 Wu Jul 2005 A1
20060026569 Oerting et al. Feb 2006 A1
20060143707 Song et al. Jun 2006 A1
20070180509 Swartz et al. Aug 2007 A1
20080034429 Schneider Feb 2008 A1
20080052468 Speirs et al. Feb 2008 A1
20080127292 Cooper et al. May 2008 A1
20080201778 Guo Aug 2008 A1
20090049550 Shevchenko Feb 2009 A1
20090077664 Hsu Mar 2009 A1
20090187396 Kinno Jul 2009 A1
20090222923 Dixon Sep 2009 A1
20100100774 Ding et al. Apr 2010 A1
20100293615 Ye Nov 2010 A1
20110167434 Gaist Jul 2011 A1
20110271343 Kim et al. Nov 2011 A1
20120054299 Buck Mar 2012 A1
20120159625 Jeong Jun 2012 A1
20120246204 Nalla Sep 2012 A1
20130283030 Drew Oct 2013 A1
20130332932 Teruya et al. Dec 2013 A1
20130347111 Karta et al. Dec 2013 A1
20140032915 Muzammil et al. Jan 2014 A1
20140137184 Russello et al. May 2014 A1
20140310714 Chan Oct 2014 A1
20140380477 Li Dec 2014 A1
20150020198 Mirski et al. Jan 2015 A1
20150150130 Fiala et al. Oct 2015 A1
20150264077 Berger et al. Oct 2015 A1
20150278513 Krasin et al. Oct 2015 A1
20150295945 Canzanese Oct 2015 A1
20150339480 Lutas Nov 2015 A1
20160180089 Dalcher Jun 2016 A1
20160232347 Badishi Aug 2016 A1
20160275289 Sethumadhavan Sep 2016 A1
20160328560 Momot Nov 2016 A1
20160357958 Guidry Dec 2016 A1
20160364236 Moudgill et al. Dec 2016 A1
20170004309 Pavlyushchik et al. Jan 2017 A1
20170126704 Premnath May 2017 A1
20180032728 Spisak Feb 2018 A1
20180307840 David et al. Oct 2018 A1
20190018962 Desimone Jan 2019 A1
Foreign Referenced Citations (6)
Number Date Country
2784716 Oct 2014 EP
3652639 May 2020 EP
3652667 May 2020 EP
WO2018026658 Feb 2018 WO
WO2019014529 Jan 2019 WO
WO2019014546 Jan 2019 WO
Non-Patent Literature Citations (6)
Entry
“International Search Report” and “Written Opinion of the International Searching Authority,” Patent Cooperation Treaty Application No. PCT/US2018/042005, dated Oct. 1, 2018, 7 pages.
“International Search Report” and “Written Opinion of the International Searching Authority,” Patent Cooperation Treaty Application No. PCT/US2018/041976, dated Sep. 28, 2018, 5 pages.
“International Search Report” and “Written Opinion of the International Searching Authority,” Patent Cooperation Treaty Application No. PCT/US2017/044478, dated Oct. 10, 2017, 7 pages.
Canzanese et al., “System Call-Based Detection of Malicious Processes”, 2015 IEEE International Conference on Software Quality, Reliability and Security, Aug. 3-5, 2015, IEEE, 6 pages.
“Extended European Search Report”, European Patent Application No. 18831224.3, dated Mar. 29, 2021, 8 pages.
“Extended European Search Report”, European Patent Application No. 18832453.7, dated Mar. 18, 2021, 9 pages.
Related Publications (1)
Number Date Country
20190018958 A1 Jan 2019 US