Embodiments of the disclosure relate to cyber security. More particularly, embodiments of the disclosure relate to a system and method for mitigating false negatives and false positives in the detection of cyberattacks involving the impersonation of a legitimate source, such as a phishing cyberattack.
Over the past decade, cyberattacks directed to impersonating legitimate sources, such as phishing cyberattacks, have become a problem experienced by many users of the Internet. Phishing is a fraudulent attempt to obtain sensitive information from targets by disguising requests as being from a trustworthy (legitimate) entity. A phishing cyberattack can entail the transmission of an electronic communication to one or more recipients, where the electronic communication is any type of message (e.g., an email message, instant message, etc.) that purports to be from a known company with a seemingly legitimate intention, such as a bank, credit card company, telephone carrier, or the like. However, this message is actually intended to deceive the recipient into sharing his or her sensitive information. Often the message draws the recipient to a counterfeit version of the company's web page designed to elicit sensitive information, such as the recipient's username, password, credit card information, or social security number.
For example, a malware author may transmit an email message to a recipient purporting to be from a financial institution and asserting that a password change is required to maintain access to the recipient's account. The email includes a Uniform Resource Locator (URL) that directs the recipient to a counterfeit version of the institution's website requesting the recipient to enter sensitive information into one or more displayable input fields in order to change the recipient's password. Neither the email message nor the URL is associated with the actual financial institution or its genuine website, although the email message and the counterfeit website may have an official appearance and imitate a genuine email and website of that financial institution. The phishing attack is completed when the recipient of the email message enters and submits sensitive information to the website, which is then delivered to the malware author for illicit use.
Identifying phishing websites has been a challenging cybersecurity problem. Some conventional cybersecurity systems have been configured to rely on whitelists and blacklists of known benign (i.e., legitimate) URLs and malicious URLs, respectively, to protect users. Other conventional cybersecurity systems use computer vision-based techniques to identify phishing websites by (i) a virtual determination of display elements of a web page as to whether their respective renderings (i.e., visual appearances) are “too similar” to display elements of a known legitimate web page and (ii) an evaluation that these web pages have inconsistent domains (e.g., the domain for a prospective phishing web page differs from that of a similar, legitimate web page). From a cyber-threat detection perspective, each of these conventional cybersecurity systems has one or more drawbacks: whitelist/blacklist analyses do not provide a robust overall analysis and thus produce increasingly inconclusive results, while computer vision analyses operate under time constraints that limit the number of displayable elements analyzed per web page, which adversely affects the thoroughness of such analyses.
Embodiments of the disclosure are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
Various embodiments of the disclosure are directed to an improved system that analyzes code segments associated with suspect web pages to detect a cyberattack involving impersonation, such as a phishing cyberattack for example. According to one embodiment of the disclosure, the phishing detection system is configured to retrieve display code, namely at least one code segment pertaining to displayable data accessible via a suspect Uniform Resource Locator (URL) submitted from a separate electronic device of a customer (or subscriber), where the display code renders the displayable data when provided to a suitable application (e.g., a web browser). From the retrieved code segment and/or one or more code segments subsequently recovered based, either directly or indirectly, on addressing information within links and/or hyperlinks included as part of the retrieved code segment, the phishing detection system determines whether the suspect URL is associated with a phishing cyberattack. For embodiments described in this disclosure, the displayable data accessible via the suspect URL and/or addressing information within the links and/or hyperlinks (referred to as “link URLs”) may correspond to web pages. However, in other embodiments, the displayable data may correspond to a stored document when the URL relies on File Transfer Protocol (FTP).
In order to improve the accuracy of the verdict, namely the classification of the suspect URL as part of a phishing cyberattack or not, the phishing detection system is further configured to (i) parse the display code (e.g., a code segment retrieved using the suspect URL) to identify links and/or hyperlinks (hereinafter, generally referred to as “links”) included in the retrieved code segment; (ii) recover one or more additional code segments accessible via the link URLs contained within the links, where the recovery of the additional code segments is conducted in accordance with a code segment recovery scheme as described below; (iii) perform analytics on each code segment to determine whether that code segment is correlated with a code segment forming a malicious web page previously detected to be part of a prior phishing cyberattack; and (iv) generate an alert message including meta-information associated with the analytic results, code segments and/or URLs if any of the code segments is correlated with a prior phishing code segment. For example, this meta-information may include, but is not limited or restricted to the URL, Internet Protocol (IP) address of the electronic device providing the electronic communication including the URL received by the phishing detection system, domain of the phishing web page, target of the phishing cyberattack (e.g., destination address), or the like.
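The four operations above may be sketched, at a high level, as follows. This is a minimal illustrative sketch: the function names, the `fetch` callback, and the known-phishing corpus are assumptions for demonstration, not the claimed implementation.

```python
# Hypothetical sketch of the (i)-(iv) pipeline; all names are illustrative.
import re

# Assumed stand-in for a corpus of code segments from prior phishing attacks.
PRIOR_PHISHING_SEGMENTS = {"<form action='http://fake-bank.example/login'>"}

def extract_link_urls(code_segment):
    """Step (i): parse the display code to identify link URLs."""
    return re.findall(r'href=["\'](https?://[^"\']+)["\']', code_segment)

def is_correlated(code_segment):
    """Step (iii): stand-in correlation check against prior phishing segments."""
    return any(known in code_segment for known in PRIOR_PHISHING_SEGMENTS)

def analyze(suspect_url, fetch):
    """Steps (i)-(iv): retrieve, parse, recover, analyze, and report."""
    retrieved = fetch(suspect_url)
    segments = [retrieved]
    for link_url in extract_link_urls(retrieved):  # step (ii): recover linked segments
        segments.append(fetch(link_url))
    for segment in segments:
        if is_correlated(segment):                 # step (iv): report meta-information
            return {"verdict": "phishing", "url": suspect_url}
    return {"verdict": "benign", "url": suspect_url}
```

In practice the `fetch` callback would issue HTTP requests and the alert message would carry the meta-information enumerated above (URL, IP address, domain, target).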
Herein, the term “code segment” generally refers to information returned in a response to a request for displayable data, such as a web page in which the information would be used to render the web page. As one example, the code segment may include, but is not limited or restricted to, embedded JavaScript within an HTML page (as opposed to an external JavaScript call). As another example, the code segment may include (a) content associated with a web page (e.g., Hypertext Markup Language “HTML” content) and/or (b) information that at least partially controls a visual representation or style (e.g., color, font, spacing) of the HTML content to be rendered (e.g., Cascading Style Sheet “CSS” file), provided the style information is included as part of the code segment. In some configurations, however, the code segment may pertain to web page (HTML) content without the style information. Instead, the style information may be provided as a separate code segment where the request for displayable data results in multiple HTTP GET messages. As a result, the first code segment may be analyzed against HTML code segments based on prior phishing cyberattacks conducted through malicious alteration of the HTML code segment, while the second code segment may be analyzed against portions or representations of CSS files, each based on a prior phishing cyberattack conducted through malicious alteration of the CSS file. Herein, as an illustrative example, the code segment may be content associated with the entire web page or a portion of the web page.
More specifically, according to one embodiment of the disclosure, the phishing detection system features information collection logic, parsing logic, heuristic logic, and fuzzy hash generation and detection logic. For this embodiment of the disclosure, the information collection logic is configured to obtain a code segment associated with a web page that is accessible via a suspect URL or a link URL. If the code segment for an error message is returned to the phishing detection system in lieu of the code segment for the web page, one or more analyses (e.g., statistical analysis, characteristic analysis, etc.) may be conducted on the code segment for the error message to determine whether the error message constitutes a customized error message being part of a phishing cyberattack. Otherwise, when the code segment associated with the web page is acquired, the parsing logic is configured to parse through that code segment and identify any links within that code segment.
The level of parsing may depend on whether the phishing detection system is operating to support real-time analysis of the URL. If not, parsing may continue until completion of analysis of all recovered code segments accessed using link URLs. If operating in accordance with time constraints, the extent of code segments being recovered for analysis via URLs within the links may be limited by recovery rules that control operability of the parsing logic. For instance, for each URL analysis, the recovery rules may limit the parsing logic as to a number (maximum) of code segments to be recovered from links or may limit the number of nested link stages from which the code segments may be recovered (hereinafter, “code segment depth”).
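A minimal sketch of such recovery rules, assuming a breadth-first traversal in which both the maximum segment count (R) and the code segment depth are configurable; all names, limits, and the `fetch` callback are illustrative assumptions.

```python
# Illustrative sketch of recovery rules limiting breadth and nesting depth.
import re

def recover_segments(root_segment, fetch, max_segments=10, max_depth=2):
    """Breadth-first recovery of linked code segments, capped by count and depth."""
    recovered = []
    frontier = [(root_segment, 0)]
    while frontier and len(recovered) < max_segments:
        segment, depth = frontier.pop(0)
        if depth >= max_depth:
            continue  # "code segment depth" limit reached for this branch
        for url in re.findall(r'href=["\'](https?://[^"\']+)["\']', segment):
            if len(recovered) >= max_segments:
                break  # maximum number R of recovered segments reached
            child = fetch(url)
            recovered.append(child)
            frontier.append((child, depth + 1))
    return recovered
```

A real-time deployment would pick small values for `max_segments` and `max_depth`; an offline analysis could relax or remove both limits, consistent with the distinction drawn above.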
Concurrently with operations of the parsing logic, the heuristic logic may conduct a statistical analysis (e.g., an analysis of the number of advertisements, the number of Document Object Model “DOM” objects, the number of links, etc.) and/or an analysis of the characteristics of a code segment under analysis (e.g., presence of a displayable element such as a user interface “UI” element, etc.). The results of these analyses are used to determine whether the suspect URL may be associated with a phishing cyberattack. If the results suggest that the code segment is not associated with a phishing cyberattack, analysis of the code segment ceases. Otherwise, this “non-determinative” or “suspicious” code segment is provided to the fuzzy hash generation and detection logic for further analysis.
The fuzzy hash generation and detection logic performs a logical transformation of the code segment to produce a smaller sized representation (e.g., a hash value), which may be compared to representations (e.g., hash values) of code segments associated with known phishing web pages. The fuzzy hash generation and detection logic is also configured to conduct a “fuzzy hashing” detection, namely a comparison of hash values of two distinctly different items in order to determine a fundamental level of similarity (e.g., expressed as a percentage or value) between these two items. Where the hash value of the code segment is correlated with a code segment associated with a known phishing web page, namely a particular level of correlation (correlation threshold) has been met or exceeded, the code segment is considered to be part of a phishing cyberattack. Hence, the suspect URL is labeled as part of a phishing attack, where one or more alert messages may be generated to notify administrators, targeted electronic devices and source electronic devices that are part of the same enterprise network, and meta-information associated with the detected phishing cyberattack may be uploaded to a knowledge data store.
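A simplified sketch of this comparison step, using Python's `difflib` similarity ratio as a stand-in for a true fuzzy-hash comparison (e.g., ssdeep-style context-triggered piecewise hashes); the known-phishing corpus and the correlation threshold of 80 are illustrative assumptions.

```python
# Hedged sketch: difflib stands in for fuzzy-hash comparison of code segments.
import difflib

# Assumed corpus of code segments from known phishing web pages.
KNOWN_PHISHING_SEGMENTS = [
    "<form action='http://fake-bank.example/login'><input name='password'>",
]

def similarity(segment_a, segment_b):
    """Similarity score in [0, 100], analogous to a fuzzy-hash comparison score."""
    return int(100 * difflib.SequenceMatcher(None, segment_a, segment_b).ratio())

def is_phishing(segment, correlation_threshold=80):
    """Verdict: does any known phishing segment meet or exceed the threshold?"""
    return any(similarity(segment, known) >= correlation_threshold
               for known in KNOWN_PHISHING_SEGMENTS)
```

In a deployed system, hashing the segment first (rather than comparing raw text) keeps the comparison compact and allows the knowledge data store to hold only digests of known phishing pages.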
Herein, the term “correlation” refers to a level of similarity between two items, such as a hash value of a code segment acquired for a phishing analysis and any hash values of known phishing code segments for example, which meets or exceeds a prescribed correlation threshold. The “correlation threshold” is set between a first correlation range that represents a low-to-average likelihood of the URL being part of a phishing cyberattack and a second correlation range that represents an extremely high likelihood of the URL being part of a phishing cyberattack. This correlation threshold may be determined empirically in light of recent known phishing cyberattacks and may be configurable to optimize the accuracy of verdicts by placement of the correlation threshold below the second correlation range to reduce the number of false positives and above the first correlation range in efforts to eliminate false negatives.
Also, the correlation threshold may be programmable (updateable) and may differ depending on the type of displayable data being requested through the URL. For example, a correlation threshold relied upon for detection may be adjusted based, at least in part, on the current threat landscape. Hence, according to this embodiment, a first correlation threshold may be applied for a first data type identified as currently experiencing a greater concentration of phishing attacks than a second data type assigned a second correlation threshold. Here, the first correlation threshold would be lower than the second correlation threshold. As lower thresholds tend to reduce the likelihood of false negatives, albeit potentially increasing the likelihood of false positives, correlation thresholds may be adjusted (or intermittently throttled) to account for the data types currently being targeted for phishing attacks: the threshold for a data type with high threat activity is reduced to avoid false negatives, and is adjusted upward as threat activity for that data type subsides. This threshold throttling may be used to maximize phishing detection system performance and accuracy.
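Per-data-type thresholds and their throttling may be sketched as follows; the data types, the numeric thresholds, and the adjustment step of 5 are all illustrative assumptions.

```python
# Hedged sketch of programmable, per-data-type correlation thresholds.
DEFAULT_THRESHOLD = 85

# Lower threshold where current phishing activity is high (fewer false
# negatives); higher where activity has subsided (fewer false positives).
THRESHOLDS_BY_DATA_TYPE = {
    "html": 75,  # assumed high current threat activity -> lower threshold
    "pdf": 90,   # assumed low current threat activity -> higher threshold
}

def threshold_for(data_type):
    """Return the correlation threshold applied to this data type."""
    return THRESHOLDS_BY_DATA_TYPE.get(data_type, DEFAULT_THRESHOLD)

def throttle(data_type, high_threat_activity):
    """Intermittent throttling: move the threshold with the threat landscape."""
    current = threshold_for(data_type)
    THRESHOLDS_BY_DATA_TYPE[data_type] = (current - 5 if high_threat_activity
                                          else current + 5)
```

A production system would clamp the adjusted values inside the band between the first and second correlation ranges described above, rather than letting them drift arbitrarily.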
As an illustrative example, the phishing detection system may be configured to receive a URL for analysis and provide the URL to a first component (e.g., information collection logic). According to one embodiment of the disclosure, the information collection logic may be configured to conduct a preliminary filtering operation to identify URLs associated with web pages known to lead to false positives and false negatives (e.g., URLs matching known legitimate domains maintained as part of a URL whitelist and/or URLs matching known malicious domains maintained as part of a URL blacklist, etc.). If the URL is either suspicious (e.g., URL is identified in both the URL blacklist and URL whitelist) or non-determinative (e.g., URL is not identified in at least the URL whitelist or the URL blacklist if utilized), the information collection logic retrieves display code associated with a web page accessible via the suspect URL. The display code corresponds to one or more retrieved code segments (generally referred to as the “retrieved code segment”) for use in rendering a web page when provided to a web browser application. A second component, namely the parsing logic, is configured to parse the retrieved code segment to identify a presence of any links. The parsing logic, alone or in combination with the information collection logic, recursively recovers one or more code segments associated with each link within the retrieved code segment as well as any links within recovered code segments until reaching a maximum, configurable “code segment depth,” as described above.
The retrieved code segment and recovered code segment(s) continue to be provided to a third component, namely heuristic logic, for filtering as described below. Depending on the filtering results from the heuristic logic, a fourth component, namely the fuzzy hash generation and detection logic, analyzes some or all of the code segments to determine whether these code segments indicate that the suspect URL is associated with a phishing cyberattack.
According to one embodiment of the disclosure, the heuristics logic may include one or more interactive filters, layout filters, and/or error page filters. An interactive filter is configured to determine whether the code segment under analysis includes a displayable element operating as a user interface (UI) element or requesting activity by the user (e.g., call a particular telephone number or access a certain web page). The UI element may include one or more user input fields (e.g., text boxes, drop-down menus, buttons, radio buttons, check boxes, etc.), which are configured to receive input from a user (e.g., account number, credit card information, user name, password, etc.).
The layout filters are configured to identify characteristics associated with known phishing web pages, where the characteristics may be programmable depending on the phishing threat landscape. For instance, one layout filter may determine the number of advertisements present on the web page, as the current threat landscape suggests that phishing web pages tend to fall below a first (minimum advertisement) threshold. Similarly, another layout filter may determine the number of HTML DOM objects or the number of links in HTML DOM objects, where the current threat landscape suggests that phishing web pages tend to feature DOMs that exceed a second (maximum DOM) threshold and/or links that exceed a third threshold.
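The interactive and layout filters may be sketched as follows, assuming simple regular-expression counts; the regexes and the numeric thresholds are illustrative assumptions tied to a hypothetical threat landscape, not fixed values from the disclosure.

```python
# Illustrative sketch of the interactive filter and layout filters.
import re

MIN_ADS = 1           # assumed: phishing pages tend to carry few advertisements
MAX_DOM_OBJECTS = 500  # assumed maximum DOM-object threshold
MAX_LINKS = 100        # assumed maximum link threshold

def has_ui_element(code_segment):
    """Interactive filter: does the page solicit user input?"""
    return bool(re.search(r'<(input|select|textarea|button)\b', code_segment, re.I))

def is_suspicious(code_segment):
    """Layout filters: compare simple counts against landscape-derived thresholds."""
    num_ads = len(re.findall(r'class=["\'][^"\']*\bad\b', code_segment, re.I))
    num_dom_objects = len(re.findall(r'<[a-zA-Z]', code_segment))
    num_links = len(re.findall(r'<a\b', code_segment, re.I))
    return (has_ui_element(code_segment)
            and num_ads < MIN_ADS
            and num_dom_objects <= MAX_DOM_OBJECTS
            and num_links <= MAX_LINKS)
```

Because the thresholds are programmable, they would be retuned as the phishing threat landscape shifts, consistent with the description above.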
The error page filter may be configured to ignore error pages returned in response to a network request for a web page corresponding to a URL, based on certain types of errors. For instance, these types of errors are often specified as HTTP status codes, and may include general web server errors (e.g., HTTP status code 500) or errors occurring when the requested web page cannot be found (e.g., HTTP status code 404), such as in the case of a mistyped URL. The error page filter may further compute a term frequency-inverse document frequency (TF-IDF) score for a custom error page; if the computed score exceeds a certain threshold, the code segment is sent for further analysis and is otherwise filtered out. Additionally, the error page filter may compute a similarity measure against a knowledge base of custom error pages known to be used in phishing. This similarity analysis operates in the same manner as the similarity analysis of a web page described herein, but relies on a separate, configurable threshold value.
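The TF-IDF step of the error page filter may be sketched as follows; the corpus of standard error pages, the scoring formula (a smoothed IDF weighted by term frequency), and the threshold are illustrative assumptions. The intuition: terms absent from standard error pages carry high IDF, so customized error pages score high and are forwarded for analysis.

```python
# Hedged sketch of a TF-IDF filter distinguishing custom from standard error pages.
import math
import re

# Assumed mini-corpus of standard (non-customized) error page text.
STANDARD_ERROR_PAGES = [
    "404 not found the requested url was not found on this server",
    "500 internal server error the server encountered an internal error",
]

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_score(error_page):
    """TF-weighted mean of smoothed IDF over the page's terms."""
    terms = tokenize(error_page)
    if not terms:
        return 0.0
    n_docs = len(STANDARD_ERROR_PAGES)
    doc_tokens = [set(tokenize(d)) for d in STANDARD_ERROR_PAGES]
    score = 0.0
    for term in set(terms):
        tf = terms.count(term) / len(terms)
        df = sum(1 for toks in doc_tokens if term in toks)
        idf = math.log((1 + n_docs) / (1 + df)) + 1.0  # smoothed IDF
        score += tf * idf
    return score

def forward_for_analysis(error_page, threshold=1.7):
    """Customized (high-scoring) error pages proceed to further phishing analysis."""
    return tfidf_score(error_page) >= threshold
```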
Furthermore, in some embodiments, the heuristics logic may also filter out code segments conforming to code commonly featured in known legitimate (e.g., non-phishing) web pages, referred to as a “web page whitelist,” and/or code commonly featured in illegitimate (e.g., phishing) web pages, referred to as a “web page blacklist.” For web page whitelist detection, the code segment may be discarded from further analysis. For web page blacklist detection, the heuristics logic may also filter the code segment from further analysis, but the reporting logic of the phishing detection system may generate and issue an alert message directed to an administrator associated with a system having provided the received URL or a system targeted to receive the URL (if the URL is intercepted in transit from the malicious source), for example. If the heuristics logic fails to filter the code segment from further analysis, as described above, the code segment is provided to the fourth component, namely the fuzzy hash generation and detection logic.
According to one embodiment of the disclosure, the fuzzy hash generation and detection logic generates a hash value based on information associated with the code segment under analysis and compares (relying on a correlation threshold) the generated hash value with hash values associated with each of a known corpus of code segments associated with phishing web pages. If the generated hash value is determined to be “correlated” with a hash value associated with a code segment associated with a known phishing web page, namely the generated hash value meets or exceeds a correlation threshold positioned by empirical data between the first correlation range and the second correlation range as described above, the received URL may be determined to be associated with phishing cyberattacks. The generated hash value represents a transformation of the code segment, which may include, for example, the HTML content, CSS files, JavaScript, images, or the like. If a phishing determination is made by the fuzzy hash generation and detection logic, an alert message may be issued in accordance with a selected notification scheme, such as to a security administrator for the network including the destination electronic device. If a determination of phishing cannot be made, the phishing detection system may be configured to issue a message to the source of the received URL and/or a security administrator that the analysis of the received web page content was inconclusive.
In some embodiments, where an inconclusive determination is made, the phishing detection system will not issue any message. Depending on the results of this inconclusive determination and reaching the threshold of “suspiciousness,” which may be a collective determination made by multiple (weighted) plug-ins deploying the phishing detection logic without any of these plug-ins having a conclusive determination, the phishing detection system may provide the suspect URL and any meta-information associated therewith (e.g., retrieved code segment, recovered code segments, link URLs associated with the recovered code segments, etc.) to a secondary phishing detection system. Hence, the phishing detection system may also be practiced in combination with prior phishing detection systems leveraging computer vision techniques to identify web pages that are visually similar to known legitimate web pages but do not share a similar domain, thereby indicating a phishing cyberattack as described in U.S. patent application Ser. No. 15/721,948 filed Oct. 1, 2017 entitled “Phishing Attack Detection,” the contents of which are incorporated by reference herein. Herein, the below described phishing detection system combined with the computer vision techniques may collectively limit false positives while detecting phishing URLs.
In the following description, certain terminology is used to describe various features of the invention. For example, the terms “logic,” “component” and “module” are representative of hardware, firmware or software that is configured to perform one or more functions. As hardware, logic (or component or module) may include circuitry having data processing or storage functionality. Examples of such circuitry may include, but are not limited or restricted to a hardware processor (e.g., microprocessor with one or more processor cores, a digital signal processor, a graphics processing unit (GPU), a programmable gate array, a microcontroller, an application specific integrated circuit “ASIC”, etc.), a semiconductor memory, or combinatorial elements.
Logic (or component or module) may be software that includes code being one or more instructions, commands or other data structures that, when processed (e.g., executed), perform a particular operation or a series of operations. Examples of software include an application, a process, an instance, Application Programming Interface (API), subroutine, plug-in, function, applet, servlet, routine, source code, object code, shared library/dynamic link library (dll), or a collection of HTML elements. This software may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); or persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device. As firmware, the logic (or engine/component) may be stored in persistent storage.
As described above, the term “code segment” refers to information returned in a response to a request for a web page to be rendered on a display device. The code segment may include, but is not limited or restricted to (a) content associated with a web page (e.g., Hypertext Markup Language “HTML”) and/or (b) information associated with style (e.g., color, font, spacing) of the content to be rendered (e.g., Cascading Style Sheet “CSS” file). The “content” generally relates to a collection of information, whether in transit (e.g., over a network) or at rest (e.g., stored), often having a logical structure or organization that enables it to be classified for purposes of analysis for phishing detection. The content may include code data (e.g., code that assists a web browser application in rendering the web page), executables (e.g., a script, JavaScript block, Flash file, etc.), and/or one or more non-executables. Examples of a non-executable may include an image. Other examples of non-executables, especially where the content is being routed to a document editor application in lieu of a web browser application, may include a document (e.g., a Portable Document Format “PDF” document, Microsoft® Office® document, Microsoft® Excel® spreadsheet, etc.), a file retrieved from a storage location over an interconnect, or the like.
The term “electronic device” should be generally construed as electronics with data processing capability and/or a capability of connecting to any type of network, such as a public network (e.g., Internet), a private network (e.g., a wireless data telecommunication network, a local area network “LAN”, etc.), or a combination of networks. Examples of an electronic device may include, but are not limited or restricted to, the following: a server, a mainframe, a firewall, a router, an info-entertainment device, an industrial controller, a vehicle, or an endpoint device (e.g., a laptop, a smartphone, a tablet, a desktop computer, a netbook, gaming console, a medical device, or any general-purpose or special-purpose, user-controlled electronic device).
The term “message” generally refers to signaling (wired or wireless) as either information placed in a prescribed format and transmitted in accordance with a suitable delivery protocol or information made accessible through a logical data structure such as an API. Examples of the delivery protocol include, but are not limited or restricted to HTTP (Hypertext Transfer Protocol); HTTPS (HTTP Secure); Simple Mail Transfer Protocol (SMTP); File Transfer Protocol (FTP); iMESSAGE; Instant Message Access Protocol (IMAP); or the like. Hence, each message may be in the form of one or more packets, frame, or any other series of bits having the prescribed, structured format.
The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware. In certain instances, the terms “compare,” “comparing,” “comparison,” or other tenses thereof generally mean determining if a match (e.g., identical or a prescribed level of correlation) is achieved.
The term “interconnect” may be construed as a physical or logical communication path between two or more electronic devices or between different logic (engine/components). For instance, a physical communication path may include wired or wireless transmission mediums. Examples of wired transmission mediums and wireless transmission mediums may include electrical wiring, optical fiber, cable, bus trace, a radio unit that supports radio frequency (RF) signaling, or any other wired/wireless signal transfer mechanism. A logical communication path may include any mechanism that allows for the exchange of content between different logic.
Lastly, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. Therefore, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.
Referring now to
As shown in
More specifically, upon receipt of the suspect URL 150 for analysis, as an optional feature, the phishing detection system 110 may be configured to conduct a preliminary analysis of the suspect URL 150. The preliminary URL analysis may include accessing a URL blacklist and/or URL whitelist (not shown) to determine if the suspect URL 150 is associated with a known phishing website or is associated with a known trusted website. If the suspect URL 150 is found in both the URL blacklist and whitelist (i.e., suspect URL 150 is deemed “suspicious”) or is not found in either the URL blacklist or the URL whitelist (i.e., suspect URL 150 is deemed “non-determinative”), the phishing detection system 110 issues the request message 160 including the suspect URL 150 to retrieve the display code 165.
Upon receipt of the display code 165 (referred to as “retrieved code segment 165”), the phishing detection system 110 may parse the retrieved code segment 165 in order to recover any additional code segments associated with links included within the retrieved code segment 165. Using some or all of the links, depending on the code segment recovery scheme deployed, the phishing detection system 110 may generate additional request messages 170 to recover additional code segments 175. The code segment recovery scheme, which sets the ordering and selection of the link URLs included in the request message(s) 170, may be controlled through processing and enforced compliance to recovery rules 180 by logic 190 within the phishing detection system 110. The logic 190 features information collection logic 260 and parsing logic 265 as shown in
For instance, the recovery rules 180 may be configured to impose limits on a maximum number (R, R>1) of code segments to be obtained from links. For example, the phishing detection system 110 may include a counter that is incremented (or decremented) to monitor the number of code segments (up to R) being recovered for analysis of a specific URL for phishing. The counter may be reset (e.g., set to “0” for an incremental counter or “R” for a decremental counter) for each URL analysis. When the logic 190 determines that the maximum number of code segments has been obtained, the logic 190 is precluded from recovering any more code segments and classification of the URL is based on the analytic results produced from the “R” recovered code segments.
Additionally, or in the alternative, the recovery rules 180 may control the selection and limit the depth (stages) in recovering additional code segments 175 using link URLs within nested links, such as linked URLs from a recovered code segment as shown in
For each of the received code segments 165/175 (i.e., either retrieved code segment 165 based on the suspect URL 150 or any recovered code segment 175 based on a subsequent link URL), the logic 190 within the phishing detection system 110 determines whether the code segment 165/175 corresponds to known benign code segments. If so, the phishing detection system 110 may determine that the suspect URL 150 is non-phishing and halt further analysis of the code segments 165/175. In contrast, upon determining that one or more code segments 165/175 correspond to at least one code segment from a known phishing web page, the phishing detection system 110 may generate and issue an alert message (not shown) directed to one or more of the following: (1) the electronic device 130 that supplied the suspect URL 150 to the phishing detection system 110; (2) an electronic device (not shown) maintained by an administrator responsible for monitoring the electronic device 130; or (3) the resource 140 in the event that the resource 140 is hosted and maintained on a local network to which the electronic device 130 is connected. Alternatively, where the suspect URL 150 is extracted from an intercepted electronic communication in transit to the electronic device 130, the phishing detection system 110 may generate and issue an alert message to the electronic device 130 or the electronic device (not shown) maintained by the administrator, in order to potentially restrict or block further communications with the resource 140 (e.g., illegitimate web server).
Where the code segment 165/175 does not correspond to known benign code segments or known phishing code segments, additional logic 195 within the phishing detection system 110 may conduct a statistical analysis (e.g., an analysis of the number of advertisements, the number of Document Object Model “DOM” objects, the number of links and/or hyperlinks, etc.) and/or an analysis of the characteristics of that code segment 165/175 (e.g., presence of a displayable element such as an interactive UI element, etc.). These analytic results may be used to determine whether more detailed analyses of the code segment 165/175 are needed to render a verdict (determination) as to whether the URL is part of a phishing cyberattack. If the analytic results suggest that the received URL is not part of a phishing cyberattack, further analyses of the code segment are not needed. Otherwise, if the verdict is “non-determinative” or “suspicious,” the code segment 165/175 (e.g., the retrieved code segment 165 or any recovered code segment(s) 175) undergoes a hash operation to generate a hash value representative of the code segment 165/175.
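A statistical pass of the kind described above can be sketched using only the Python standard library. The counted tag names and the decision to treat every element as one DOM object are illustrative assumptions; real logic 195 would apply whatever counting rules its filters define.

```python
# Hypothetical statistical pass over a code segment, counting DOM objects,
# links, and displayable input fields with the stdlib HTML parser.
from html.parser import HTMLParser

class SegmentStats(HTMLParser):
    def __init__(self):
        super().__init__()
        self.dom_objects = 0   # every element counts as one DOM object
        self.links = 0         # anchor tags carrying an href attribute
        self.input_fields = 0  # displayable, interactive UI elements

    def handle_starttag(self, tag, attrs):
        self.dom_objects += 1
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.links += 1
        if tag in ("input", "select", "textarea", "button"):
            self.input_fields += 1

def analyze_segment(html: str) -> dict:
    parser = SegmentStats()
    parser.feed(html)
    return {"dom_objects": parser.dom_objects,
            "links": parser.links,
            "input_fields": parser.input_fields}
```

The resulting counts can then be compared against thresholds to decide whether the segment warrants the fuzzy-hash analysis.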
Thereafter, this hash value undergoes a “fuzzy hash” comparison (as described above) with hash values associated with a known corpus of phishing web pages maintained within a knowledge data store 145 to determine whether the code segment 165/175 is part of a phishing cyberattack. The fuzzy hash comparison is further conducted by the logic 195 within the phishing detection system 110 to determine a correlation between the hash value of the (suspicious or non-determinative) code segment 165/175 and any hash value associated with the known corpus of phishing web pages, where the correlation represents equaling or exceeding a likelihood of the code segment 165/175 being associated with a phishing cyberattack. This likelihood corresponds to a correlation threshold that is selected to reside between a first correlation range (e.g., a lower likelihood of phishing) and a second correlation range that represents a higher likelihood of the suspect URL 150 being part of a phishing cyberattack. The correlation threshold is specifically selected to reside between these correlation ranges to substantially eliminate false negatives and substantially reduce the number of false positives, with the threshold set low enough to substantially eliminate the presence of false negatives without negatively impeding the processing speed of the phishing detection system 110.
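The threshold comparison above can be illustrated with a toy stand-in. Here `difflib.SequenceMatcher` substitutes for a true fuzzy hash such as a context-triggered piecewise hash, and the 0.80 threshold is an assumed value, not one taken from the disclosure.

```python
# Minimal stand-in for the fuzzy-hash correlation check; difflib similarity
# replaces a real fuzzy-hash digest comparison, and the threshold is assumed.
import difflib

CORRELATION_THRESHOLD = 0.80  # sits between the two correlation ranges

def correlation(segment: str, known_phishing: str) -> float:
    # Ratio in [0.0, 1.0]; a real system would compare fuzzy-hash values.
    return difflib.SequenceMatcher(None, segment, known_phishing).ratio()

def is_phishing(segment: str, corpus: list) -> bool:
    # Correlation with ANY member of the known phishing corpus at or above
    # the threshold classifies the segment as phishing.
    return any(correlation(segment, known) >= CORRELATION_THRESHOLD
               for known in corpus)
```

Lowering `CORRELATION_THRESHOLD` trades fewer false negatives for more candidate matches, which is the tension the threshold selection above is balancing.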
Herein, the knowledge data store 145 includes hash values associated with the known corpus of phishing web pages, which may be occasionally updated based on (i) internal operations of the phishing detection system 110 and/or (ii) downloads from a global, knowledge aggregation data store 135. The knowledge aggregation data store 135 maintains and continues to augment its known, global corpus of phishing web pages received from the phishing detection system 110 as well as other phishing detection systems. Hence, the phishing detection system 110 may upload hash values of detected phishing content (web pages) and download a more robust corpus of phishing web pages from the knowledge aggregation data store 135 into the knowledge data store 145.
According to one embodiment of the disclosure, this correlation threshold may be programmable (updated) and may differ depending on the type of displayable data associated with the code segment 165/175 to be retrieved via the suspect (or link) URL. For instance, the correlation threshold relied upon for detection of a URL associated with a phishing web page may be lower than a correlation threshold relied upon for detection of a URL for retrieval of a document being used as part of a phishing attack, especially when the current threat landscape identifies a greater concentration of phishing attacks being directed against web page content than documents with integrated or embedded links. As lower thresholds tend to reduce the likelihood of false negatives, albeit potentially increasing the likelihood of false positives, the adjustment of thresholds may be conducted to take into account those data types currently being targeted for phishing attacks by reducing thresholds for particular data types with high threat activity in order to avoid false negatives, and upwardly adjusting the threshold as threat activity for that particular data type lessens. This correlation threshold throttling may be used to maximize the performance and the accuracy of the results produced by the phishing detection system 110.
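Per-data-type throttling of this kind can be sketched as a programmable threshold table. The data-type keys, numeric values, and step size below are illustrative assumptions.

```python
# Illustrative per-data-type threshold table with throttling; the data types
# and numeric values are assumptions, not taken from the disclosure.
CORRELATION_THRESHOLDS = {
    "web_page": 0.70,  # lower: web pages currently see more phishing activity
    "document": 0.85,  # higher: fewer attacks use documents with embedded links
}

def threshold_for(data_type: str) -> float:
    return CORRELATION_THRESHOLDS.get(data_type, 0.85)

def throttle(data_type: str, threat_activity_rising: bool, step: float = 0.05) -> float:
    # Lower the threshold when threat activity for this data type rises
    # (fewer false negatives); raise it back as activity lessens.
    current = threshold_for(data_type)
    updated = current - step if threat_activity_rising else current + step
    CORRELATION_THRESHOLDS[data_type] = min(1.0, max(0.0, updated))
    return CORRELATION_THRESHOLDS[data_type]
```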
If the hash value of the code segment 165/175 is determined to be “correlated” with any hash value of known phishing code segments, the suspect URL 150 is determined to be associated with a phishing cyberattack. Responsive to detecting that the code segment 165/175 (and the suspect URL 150) is part of a phishing cyberattack, an alert message is issued to a predetermined destination (e.g., source of the received URL, a security administrator, etc.) that may be in a position to halt the phishing cyberattack and/or perform actions to remediate the cyberattack. If a phishing determination cannot be made, namely the correlation between the hash value of the code segment 165/175 and hash values of known phishing web pages is less than the correlation threshold (e.g., falls within the first correlation range), depending on the level of correlation, the phishing detection system 110 may provide the suspect URL 150 and/or meta-information associated with the suspect URL 150 to a secondary phishing detection system (e.g., computer vision-based system; network or third-party analyst system, etc.) or to the knowledge data store 145 for subsequent use. In some embodiments, where a non-determinative classification is made, the phishing detection system 110 will not issue an alert message, as described above.
Referring now to
Herein, according to one embodiment of the disclosure, the processor 210 is one or more multipurpose, programmable components that accept digital information as input, process the input information according to stored instructions, and provide results as output. One example of a processor may include an Intel® x86 central processing unit (CPU) with an instruction set architecture although other types of processors as described above may be utilized.
The memory 220 operates as system memory, which may include non-persistent storage and/or persistent storage. The memory 220 includes a URL data store 240, an analytic management logic 250, information collection logic 260, parsing logic 265, heuristics logic 270, fuzzy hash generation and detection logic 280, and reporting logic 290. Herein, as an illustrative example, the phishing detection system 110 may be configured to receive a suspect URL for analysis, where the suspect URL is temporarily stored in the URL data store 240. For this embodiment, the analytic management logic 250 monitors the URL data store 240 for any recently received URLs, and upon detection, provides the suspect URL to the information collection logic 260.
The information collection logic 260 is configured to conduct a preliminary filtering operation on the URL (e.g., suspect URL 150 of
The parsing logic 265 parses the retrieved code segment 165 to identify additional links included in the retrieved code segment 165. These links may include link URLs directed to the same domain (or subdomain) as the resource 140 providing the retrieved code segment 165 or may be directed to a different domain (and/or subdomain). For this embodiment, operating with the information collection logic 260, the parsing logic 265 performs an iterative process to recover one or more code segments associated with each identified link. These recovered code segment(s) may be provided to the heuristics logic 270 for further analysis.
According to one embodiment of the disclosure, the heuristics logic 270 may include filtering logic that is configured to determine whether a code segment under analysis (e.g., retrieved code segment 165 or any recovered code segment 175, generally referenced as “code segment 165/175”), includes certain displayable elements (e.g., element with user input fields) and/or certain layout characteristics associated with known phishing web pages or custom error pages that may be used in a phishing cyberattack in efforts to entice the user to perform an action (e.g., call a particular phone number, send an email message, etc.) to gain more information pertaining to the user.
Where the filtering logic of the heuristics logic 270 cannot definitively conclude whether the code segment 165/175 is part of a phishing attack or is benign, namely the code segment 165/175 is determined by the filtering logic to be suspicious or non-determinative, the heuristics logic 270 provides the code segment 165/175 to the fuzzy hash generation and detection logic 280. The fuzzy hash generation and detection logic 280 conducts a further analysis of the code segment 165/175 to improve the reliability of the URL classification.
According to one embodiment of the disclosure, the fuzzy hash generation and detection logic 280 generates a hash value associated with one or more portions (or entirety) of the code segment 165/175. Thereafter, the fuzzy hash generation and detection logic 280 determines whether the generated hash value is correlated with one or more hash values associated with a known corpus of code segments from phishing web pages. According to one embodiment of the disclosure, if a generated hash value associated with the code segment 165/175 (e.g., retrieved code segment 165 or any recovered code segment 175) is determined to be correlated with a hash value of a code segment associated with a known phishing web page, the code segment 165/175 is determined to be associated with a phishing cyberattack. As a result, the suspect URL 150 is determined to be associated with a phishing cyberattack. Alternatively, additional logic may be deployed to collect correlation results (scores) for the retrieved code segment 165 and one or more recovered code segments 175, and perform an arithmetic operation on the results (e.g., average value, maximum value, median value, etc.) in determining whether the suspect URL 150 is associated with a phishing cyberattack.
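The alternative score-aggregation step mentioned above can be sketched as follows. The function name, the menu of statistics, and the threshold are configurable assumptions for illustration.

```python
# Sketch of combining per-segment correlation scores into one URL verdict
# via an arithmetic operation (average, maximum, or median value).
from statistics import mean, median

def classify_url(scores: list, mode: str = "max", threshold: float = 0.8) -> bool:
    """Aggregate correlation scores for the retrieved and recovered code
    segments, then compare the aggregate against the correlation threshold."""
    op = {"average": mean, "max": max, "median": median}[mode]
    return op(scores) >= threshold
```

Using the maximum flags a URL when any single segment correlates strongly, whereas the average or median requires broader agreement across segments.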
If a phishing cyberattack determination is made by the fuzzy hash generation and detection logic 280, the reporting logic 290 may issue an alert message via the network interface 230 to either a source of the suspect URL 150 or a destination of an electronic communication including the suspect URL 150, and/or a security administrator. If such a determination cannot be made by the fuzzy hash generation and detection logic 280, the reporting logic 290 may provide the suspect URL 150, the code segment 165/175 and/or any meta-information associated with the code segment 165/175 to a secondary phishing detection system as described above, although in some embodiments, the reporting logic 290 may refrain from involving other phishing detection systems, especially when there is a significant lack of correlation between the generated hash value and hash values associated with known phishing web pages.
Referring still to
Referring to
As shown, the suspect URL 150 may be stored in the URL data store 240, which may operate as a URL input queue. Herein, the analytic management logic 250 monitors contents of the URL data store 240, and upon receipt of the suspect URL 150, the analytic management logic 250 determines one of a plurality of containers to handle analysis of the suspect URL 150. For instance, responsive to detection of the suspect URL 150, the analytic management logic 250 may assign processing of the suspect URL 150 to one of a plurality of containers 3001-300N (N>1). Each “container” features logic, such as a collection of software modules (e.g., software instances) that, upon execution, conducts processing of the suspect URL 150 to determine whether the URL is associated with a phishing cyberattack. As illustrated, each container 3001-300N, such as a second container 3002 for example, includes a preliminary filter module 3502, information collection module 3602 and a parsing module 3652, as well as a heuristics filter module 3702, and a fuzzy hash generation and detection module 3802. These modules include software utilized by the second container 3002, and correspond in functionality to the information collection logic 260, the parsing logic 265, the heuristics logic 270 and the fuzzy hash generation and detection logic 280 of
As shown, the analytic management logic 250 selects a second container 3002 during which the preliminary filter module 3502, upon execution by processor 210 of
Where the suspect URL 150 fails to match any URL associated with a known trusted domain (or perhaps a known phishing domain), the suspect URL 150 is considered to be “non-determinative.” Similarly, where the suspect URL 150 matches any URL associated with a known trusted domain and a known phishing domain, the suspect URL 150 is considered to be “suspicious.” If the suspect URL 150 is either “non-determinative” or “suspicious,” the information collection module 3602, upon execution by processor 210 of
The parsing module 3652 parses the code segment 165 to determine if any additional links (e.g., embedded links, hyperlinks, etc.) are located within the code segment 165. For any additional links uncovered by the parsing module 3652, the information collection module 3602 may further recover additional code segments 175 based on the link URLs included in each of these additional links, where the additional code segments correspond to “nested” web pages accessible through the web page accessible using the suspect URL 150. The heuristic and fuzzy hash generation and detection operations (described below) may be performed in parallel with the parsing of code segments associated with the suspect URL 150 (e.g., retrieved or a recovered code segment is being analyzed while one or more other code segments are being recovered) or may be performed serially (e.g., code segments are analyzed serially).
Each code segment 165 or 175 is requested by the information collection module 3602 and provided to the heuristics filter module 3702. Herein, multiple code segment(s) 165 and 175 associated with web pages accessible via the suspect URL 150 and link URLs included within the retrieved code segment and any recovered code segment, may be analyzed to assist in determining whether the suspect URL 150 could be associated with a phishing cyberattack.
The heuristics filter module 3702, according to one embodiment of the disclosure, may include one or more interactive display filters 372, layout filters 374, and error page filters 376. The interactive display filter 372 is configured to determine whether the code segment 165/175 includes a displayable element operating as a user interface (UI) element or requesting activity by the user (e.g., call a particular telephone number or access a certain web page). The UI element includes one or more user input fields (e.g., text boxes, drop-down menus, buttons, radio buttons, check boxes, etc.) that are configured to receive input from a user (e.g., account number, credit card information, user name, password, etc.). If not, the code segment 165/175 is no longer treated as a candidate for a phishing attack within the context of this phishing detection system 110.
However, if the code segment 165/175 includes the displayable element, the layout filter 374 commences an analysis of the code segment 165/175 to identify characteristics associated with known phishing web pages. Depending on the current threat landscape, the characteristics may be changed by installing different layout filters 374 into the phishing detection system 110. As described above, a first type of layout filter 374 may determine the number of advertisements present on the web page to be rendered by the code segment 165/175, where the code segment 165/175 may constitute a phishing web page when the number of advertisements within the portion of the code segment 165/175 falls below a first (minimum ad) threshold. Likewise, a second type of layout filter 374 may determine the number of HTML DOM objects, where the code segment 165/175 may constitute a phishing web page when the number of HTML DOM objects exceeds a second (maximum DOM) threshold. Additionally, or in the alternative, the layout filter 374 may determine a number of links within the HTML DOM objects, where the code segment 165/175 may constitute a phishing web page when the number of HTML DOM links exceeds a third threshold.
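The three layout checks above can be condensed into a single predicate. The threshold values here are illustrative assumptions; a deployed system would install them with the corresponding layout filters.

```python
# Hypothetical layout filter mirroring the three checks described above;
# the numeric thresholds are illustrative assumptions.
MIN_AD_COUNT = 1       # first (minimum ad) threshold
MAX_DOM_OBJECTS = 500  # second (maximum DOM) threshold
MAX_DOM_LINKS = 200    # third threshold

def layout_suspicious(num_ads: int, num_dom_objects: int, num_links: int) -> bool:
    # Few or no advertisements, or an unusually large number of DOM objects
    # or links, is treated as characteristic of known phishing web pages.
    return (num_ads < MIN_AD_COUNT
            or num_dom_objects > MAX_DOM_OBJECTS
            or num_links > MAX_DOM_LINKS)
```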
Being part of the heuristics filter module 3702, the error page filter 376 is configured to determine whether an error page is a customized (phishing) error page. This determination is based, at least in part, on frequency-inverse document frequency, namely the frequency of repeated occurrences of error pages, where a computed frequency greater than a certain threshold suggests legitimate behavior. Additionally, the error page filter 376 may compute a similarity measure between the error page and a knowledge base of custom error pages used for phishing. This similarity analysis operates in the same manner as the similarity analysis of web pages described above, but relies on a separate, configurable threshold value. Additionally, or in the alternative, the error page filter 376 may exclude error pages with certain commonly occurring HTTP error codes from analysis (e.g., error pages based on “HTTP error code 500,” error pages based on “HTTP error code 404,” etc.).
If the heuristics filter module 3702 filters the code segment 165/175 from further analysis, the phishing detection system 110 may continue its analysis associated with other (recovered) code segments 175. Optionally, depending on the findings that occur in connection with the filtering operations, the heuristics filter module 3702 may extract context information associated with the code segment 165/175 and the suspect URL 150 and provide the context information to a remote secondary system for analyst review. For instance, the context information may be used to provide intelligence that may be used to alter the heuristics filter module 3702, a blacklist/whitelist (not shown), and the knowledge data store 145. If the code segment 165/175 (and corresponding URL(s)) is not filtered from further analysis by the heuristics filter module 3702, the code segment 165/175 is provided to the fuzzy hash generation and detection module 3802.
According to one embodiment of the disclosure, the fuzzy hash generation and detection module 3802 performs a one-way hash operation on at least a portion of the code segment 165/175 or its entirety, which generates a hash value. Thereafter, the fuzzy hash generation and detection module 3802 conducts a correlation evaluation between the hash value of the code segment 165/175 and hash values associated with code segments of a known corpus of phishing web pages included in the memcache server 320, and where not loaded therein, within the knowledge data store 145 (or subsequently fetched from the knowledge aggregation data store 135 of
Referring still to
As an illustrative example, referring now to
As shown in both
Depending on the recovery rules 180, the parsing logic 265 may be adapted to select a first link 4101 in the retrieved code segment 165 and, in operation with the information collection logic 260, recovers a first (recovered) code segment 4001 in response to issuance of a request (HTTP GET) message including the link URL associated with the first link 4101. The operation of the parsing logic 265 may be conducted in parallel with the heuristic analysis and “fuzzy hash” comparisons between a representation of the code segment 4001 and a representation of known phishing code segments to determine whether any of the code segments 4001-400R is correlated with a known phishing code segment.
More specifically, in an iterative operation, the parsing logic 265 may be adapted to select a first link 4201 in the first code segment 4001 and, in operation with the information collection logic 260, recovers a second code segment 4002 after issuance of a request (HTTP GET) message including information associated with the first link 4201. Thereafter, the parsing logic 265 may be adapted to select a first link 4301 in the second code segment 4002 and the information collection logic 260 recovers a third code segment 4003. Upon determining that the third code segment 4003 does not include any links, in accordance with its static or programmable code segment recovery scheme, the parsing logic 265 reverts back to its nearest code segment (e.g., the second code segment 4002) and selects a second link 4302 in the second code segment 4002 to recover a fourth code segment 4004. This reiterative, code segment recovery scheme continues until no further recovered code segments are available or a maximum number (R) of code segments 4001-400R have been acquired for analysis by the phishing detection system 110.
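The depth-first recovery scheme just described can be sketched over an in-memory link graph standing in for live HTTP GET requests. The function name, the graph representation, and the first-link selection order are illustrative assumptions.

```python
# Sketch of the depth-first recovery scheme: follow the first link in each
# recovered segment, backtracking to the nearest segment that still has
# unvisited links, until R segments have been recovered.
def recover_depth_first(root: str, links: dict, max_segments: int) -> list:
    recovered, stack = [], [root]
    visited = {root}
    while stack and len(recovered) < max_segments:
        current = stack[-1]
        next_links = [u for u in links.get(current, []) if u not in visited]
        if not next_links:
            stack.pop()      # revert to the nearest code segment with links
            continue
        url = next_links[0]  # select the first link in the current segment
        visited.add(url)
        recovered.append(url)
        stack.append(url)
    return recovered
```

With a graph mirroring the example above (a chain of nested segments whose deepest member has no links), recovery descends first, then backtracks to pick up the sibling link.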
Referring now to
As shown in
In an iterative operation, the parsing logic 265 may be adapted to further select a second link 4602 in the retrieved code segment 165 and, in operation with the information collection logic 260, recovers a second code segment 4502 after issuance of a request (HTTP GET) message including the link URL associated with the second link 4602. Thereafter, the parsing logic 265 continues in a reiterative manner by selecting a third link 4603 in the retrieved code segment 165 and, in operation with the information collection logic 260, recovers a third code segment 4503.
Upon determining that the retrieved code segment 165 does not include any further links, in accordance with the code segment recovery scheme deployed, the parsing logic 265 may advance to the first code segment 4501 and select a first link 4701 in the first code segment 4501 to recover a fourth code segment 4504. This reiterative, code segment recovery scheme continues until a maximum code segment depth (L) has been met, in which case no further recovery of lower-depth code segments is conducted. As shown, the parsing logic 265 refrains from selecting links 4801-4802 for recovery of the code segments thereof, as such code segments would exceed the maximum code segment depth. Instead, the parsing logic 265 reverts back to the first recovered code segment 4501, and given no further links, the parsing logic 265 again reverts to the second recovered code segment 4502.
As the second recovered code segment 4502 includes a plurality of links 4901-4902, the parsing logic 265 selects the first link 4901 in the second code segment 4502 to recover a fifth code segment 4505. This code segment recovery process continues operating in a similar manner to recover code segments 4506-4508.
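The breadth-first variant with a maximum code segment depth (L) can likewise be sketched over an in-memory link graph; the graph contents and function name are illustrative assumptions.

```python
# Sketch of the breadth-first recovery scheme: recover every link at one
# depth before descending, never exceeding the maximum code segment depth.
from collections import deque

def recover_breadth_first(root: str, links: dict, max_depth: int) -> list:
    recovered = []
    queue = deque([(root, 0)])
    visited = {root}
    while queue:
        current, depth = queue.popleft()
        if depth >= max_depth:
            continue  # links here would exceed the maximum depth; skip them
        for url in links.get(current, []):
            if url not in visited:
                visited.add(url)
                recovered.append(url)
                queue.append((url, depth + 1))
    return recovered
```

With a depth limit of two, segments reachable only through a depth-two segment are skipped, matching the refusal to follow links whose code segments would exceed the maximum depth.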
Referring now to
Where the network addressing information is “non-determinative” or “suspicious” as described above, the phishing detection system performs further operations in determining whether the network addressing information is part of a phishing cyberattack. In particular, the phishing detection system generates at least one request message, including at least a portion of the network addressing information, which prompts receipt of one or more response messages (operation 540). Herein, a request message may prompt a response message that includes a code segment featuring HTML content and CSS (style) information associated with a web page accessible via the network addressing information. Alternatively, the request may be split into a first request message that prompts the return of a first response message featuring a first code segment including the HTML content and a second request message that prompts the return of a second response message featuring a second code segment including the CSS (style) information.
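The request-message step can be sketched as below; no request is actually sent, and the helper name, parameters, and URLs are hypothetical.

```python
# Minimal sketch of building the request messages, assuming plain HTTP GET
# semantics: one request for the HTML content and, optionally, a second
# request for the CSS (style) information the page references.
from urllib.request import Request

def build_segment_requests(url, css_href=None):
    requests = [Request(url, method="GET")]
    if css_href:
        requests.append(Request(css_href, method="GET"))
    return requests
```

Each response to these requests supplies a code segment for the analytics that follow.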
Each response message may include a code segment, and the phishing detection system performs analytics on each of the code segments (operation 550). Based on the results of these analytics, the phishing detection system may determine a verdict (phishing, benign, inconclusive) for the submitted network addressing information. Where the verdict concludes that the network address information is part of a phishing attack, the phishing detection system generates an alert message including at least the network address information and meta-information including at least a portion of the analytic results (operations 560 and 570). Also, based on the results of these analytics, the phishing detection system may make the analytic results available to a knowledge data store or network administrator (e.g., generate an alert message for a “phishing” verdict) or may provide the code segment and/or context information associated with the code segment (e.g., source or destination of the network addressing information, some or all of the analytic results, etc.) to a secondary phishing detection system (e.g., computer vision-based system) or other type of threat analysis system for analysis.
An illustrative example of the analytics conducted on the code segments received by the phishing detection system is described below, although other analytics or a variation of the analytics may be performed on the code segments in efforts to identify whether any of the code segments compares to code segments associated with a known phishing cyberattack.
Referring now to
In response to the URL being determined to be suspicious or non-determinative, the phishing detection system generates one or more request messages to retrieve a code segment associated with a web page (operation 615). Based on the response(s) from the one or more request messages, a determination is made whether a code segment associated with the web page is retrievable (operation 620). Where the web page is not retrievable, an error page is returned as the retrieved code segment (operation 625). As a result, the phishing detection system performs analytics on the error page code segment to determine whether the error page is suspicious and further analytics (e.g., fuzzy hash analysis) are needed to determine whether the URL is associated with a phishing cyberattack (operation 630).
For instance, upon receiving the error page, the phishing detection system may perform a content-based analysis of the error page. Additionally, or in lieu of a content-based analysis of the error page, the analytics may be directed to the frequency and/or timing of the error page and the type of error page, such as an error page other than one associated with specific, common HTTP error code(s). As an example, upon repeated receipt of error pages above a set threshold within a prescribed period of time, where both parameters may be static or programmable, the phishing detection system may determine that the error page may be associated with a phishing cyberattack. Similarly, the presence of error pages directed to specific HTTP error codes “404” or “500” may be ignored, while error pages directed to other HTTP error codes may be further analyzed by the fuzzy hash generation and detection logic (module) as described below. Based on these analytic results, the phishing detection system may determine whether to provide the error page code segment to the fuzzy hash generation and detection logic for a more detailed analysis as to whether the error page is part of a phishing cyberattack (operation 635).
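The frequency/timing check described above can be sketched as follows. The threshold, window length, and ignored-code set are illustrative, programmable parameters rather than values from the disclosure.

```python
# Hypothetical frequency/timing check on error pages for one analyzed URL;
# threshold, window, and ignored-code list are illustrative assumptions.
IGNORED_CODES = {404, 500}   # commonly occurring HTTP error codes to skip
ERROR_THRESHOLD = 3          # repeats within the window deemed suspicious
WINDOW_SECONDS = 60.0

def error_page_suspicious(events: list, now: float) -> bool:
    """events: (timestamp, HTTP error code) pairs observed for one URL."""
    recent = [code for ts, code in events
              if now - ts <= WINDOW_SECONDS and code not in IGNORED_CODES]
    return len(recent) >= ERROR_THRESHOLD
```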
However, where the web page is retrievable, the phishing detection system performs analytics on the retrieved code segment and any additional code segments recovered using links within the retrieved code segment (operation 640). The analytics may commence with a determination whether the web page includes one or more links (operation 645), as now illustrated in
Additionally, the analytics may include a determination as to whether the code segment includes a displayable element (operation 655). If not, phishing detection analysis for that particular code segment ends, where it is inconclusive whether the suspect URL is part of a phishing cyberattack (operation 660). However, if the code segment is determined to include one or more displayable elements, the phishing detection system further conducts analytics directed to statistical analysis of characteristics associated with the web page layout, as represented by the code segment (operation 665). These layout characteristics may be based on the presence of advertisements and/or HTML DOM objects, and in particular, whether the number of advertisements and/or HTML DOM objects exceeds a respective threshold, as described above. Based on any or all of the above-described analytics, the phishing detection system determines whether the URL is considered to be suspicious (operation 670) and fuzzy hash correlation operations are performed on the code segment as illustrated in
More specifically, the code segment undergoes a hash operation, which generates a hash value representing the code segment (operation 675). The generated hash value is compared with hash values associated with known phishing code segments to determine whether the generated hash value meets or exceeds a level of correlation, established by a correlation threshold, with any of the hash values associated with known phishing code segments (operation 680). If so, a phishing cyberattack is detected for the URL, and the phishing attack may be reported and/or meta-information associated with the URL detected as being part of a phishing cyberattack may be uploaded to a knowledge data store (operations 685 and 690). If not, the phishing analysis associated with that particular code segment completes, and the reiterative operations set forth in operations 620-690 continue for other code segments based on the URL until no further recovered code segments are available for analysis or no further recovered code segments are available based on limitations imposed by the recovery rules. At that time, depending on the comparison results, the phishing detection system may classify the URL as “inconclusive,” and provide the URL and/or meta-information associated with the URL to a secondary phishing detection system. Otherwise, the phishing detection system may classify the suspect URL as “benign.”
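The three-way outcome of this comparison (phishing, inconclusive, benign) can be sketched with a toy similarity function standing in for the fuzzy-hash comparison. The corpus, the threshold, and the half-threshold band used to mark an inconclusive result are all illustrative assumptions.

```python
# Sketch of the verdict step in operations 675-690; difflib similarity is a
# toy stand-in for a real fuzzy-hash comparison, and the inconclusive band
# (best score above half the threshold) is an assumed policy.
import difflib

def fuzzy_verdict(segment: str, corpus: list, threshold: float) -> str:
    best = max((difflib.SequenceMatcher(None, segment, known).ratio()
                for known in corpus), default=0.0)
    if best >= threshold:
        return "phishing"       # report and upload meta-information
    if best > threshold / 2:
        return "inconclusive"   # escalate to a secondary detection system
    return "benign"
```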
In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. For example, the phishing detection system may be deployed as a cloud service, an on-premises system, functionality implemented within a firewall or within an endpoint device, or the like. Also, while the disclosure is directed to retrieval and recovery of code segments associated with web pages in which the URLs include HTTP protocol information, it is contemplated that the URL phishing detection may be accomplished for retrieval/recovery of code segments associated with content types other than web pages, such as files where the URL is directed to a different protocol type (e.g., file transfer protocol “FTP”).
This application claims the benefit of priority on U.S. Provisional Application No. 62/784,304, filed Dec. 21, 2018, the entire contents of which are incorporated herein by reference.
9015846 | Watters et al. | Apr 2015 | B2 |
9027135 | Aziz | May 2015 | B1 |
9071638 | Aziz et al. | Jun 2015 | B1 |
9104867 | Thioux et al. | Aug 2015 | B1 |
9106630 | Frazier et al. | Aug 2015 | B2 |
9106694 | Aziz et al. | Aug 2015 | B2 |
9118715 | Staniford et al. | Aug 2015 | B2 |
9159035 | Ismael et al. | Oct 2015 | B1 |
9171160 | Vincent et al. | Oct 2015 | B2 |
9176843 | Ismael et al. | Nov 2015 | B1 |
9189627 | Islam | Nov 2015 | B1 |
9195829 | Goradia et al. | Nov 2015 | B1 |
9197664 | Aziz et al. | Nov 2015 | B1 |
9223972 | Vincent et al. | Dec 2015 | B1 |
9225740 | Ismael et al. | Dec 2015 | B1 |
9241010 | Bennett et al. | Jan 2016 | B1 |
9251343 | Vincent et al. | Feb 2016 | B1 |
9262635 | Paithane et al. | Feb 2016 | B2 |
9268936 | Butler | Feb 2016 | B2 |
9275229 | LeMasters | Mar 2016 | B2 |
9282109 | Aziz et al. | Mar 2016 | B1 |
9292686 | Ismael et al. | Mar 2016 | B2 |
9294501 | Mesdaq et al. | Mar 2016 | B2 |
9300686 | Pidathala et al. | Mar 2016 | B2 |
9306960 | Aziz | Apr 2016 | B1 |
9306974 | Aziz et al. | Apr 2016 | B1 |
9311479 | Manni et al. | Apr 2016 | B1 |
9355247 | Thioux et al. | May 2016 | B1 |
9356944 | Aziz | May 2016 | B1 |
9363280 | Rivlin et al. | Jun 2016 | B1 |
9367681 | Ismael et al. | Jun 2016 | B1 |
9367872 | Visbal | Jun 2016 | B1 |
9398028 | Karandikar et al. | Jul 2016 | B1 |
9413781 | Cunningham et al. | Aug 2016 | B2 |
9426071 | Caldejon et al. | Aug 2016 | B1 |
9430646 | Mushtaq et al. | Aug 2016 | B1 |
9432389 | Khalid et al. | Aug 2016 | B1 |
9438613 | Paithane et al. | Sep 2016 | B1 |
9438622 | Staniford et al. | Sep 2016 | B1 |
9438623 | Thioux et al. | Sep 2016 | B1 |
9459901 | Jung et al. | Oct 2016 | B2 |
9467460 | Otvagin et al. | Oct 2016 | B1 |
9483644 | Paithane et al. | Nov 2016 | B1 |
9495180 | Ismael | Nov 2016 | B2 |
9497213 | Thompson et al. | Nov 2016 | B2 |
9507935 | Ismael et al. | Nov 2016 | B2 |
9516057 | Aziz | Dec 2016 | B2 |
9519782 | Aziz et al. | Dec 2016 | B2 |
9536091 | Paithane et al. | Jan 2017 | B2 |
9537972 | Edwards et al. | Jan 2017 | B1 |
9560059 | Islam | Jan 2017 | B1 |
9565202 | Kindlund et al. | Feb 2017 | B1 |
9591015 | Amin et al. | Mar 2017 | B1 |
9591020 | Aziz | Mar 2017 | B1 |
9594904 | Jain et al. | Mar 2017 | B1 |
9594905 | Ismael et al. | Mar 2017 | B1 |
9594912 | Thioux et al. | Mar 2017 | B1 |
9609007 | Rivlin et al. | Mar 2017 | B1 |
9626509 | Khalid et al. | Apr 2017 | B1 |
9628498 | Aziz et al. | Apr 2017 | B1 |
9628507 | Haq et al. | Apr 2017 | B2 |
9633134 | Ross | Apr 2017 | B2 |
9635039 | Islam et al. | Apr 2017 | B1 |
9641546 | Manni et al. | May 2017 | B1 |
9654485 | Neumann | May 2017 | B1 |
9661009 | Karandikar et al. | May 2017 | B1 |
9661018 | Aziz | May 2017 | B1 |
9674298 | Edwards et al. | Jun 2017 | B1 |
9680862 | Ismael et al. | Jun 2017 | B2 |
9690606 | Ha et al. | Jun 2017 | B1 |
9690933 | Singh et al. | Jun 2017 | B1 |
9690935 | Shiffer et al. | Jun 2017 | B2 |
9690936 | Malik et al. | Jun 2017 | B1 |
9736179 | Ismael | Aug 2017 | B2 |
9740857 | Ismael et al. | Aug 2017 | B2 |
9747446 | Pidathala et al. | Aug 2017 | B1 |
9749343 | Watters et al. | Aug 2017 | B2 |
9749344 | Watters et al. | Aug 2017 | B2 |
9756074 | Aziz et al. | Sep 2017 | B2 |
9773112 | Rathor et al. | Sep 2017 | B1 |
9781144 | Otvagin et al. | Oct 2017 | B1 |
9787700 | Amin et al. | Oct 2017 | B1 |
9787706 | Otvagin et al. | Oct 2017 | B1 |
9792196 | Ismael et al. | Oct 2017 | B1 |
9824209 | Ismael et al. | Nov 2017 | B1 |
9824211 | Wilson | Nov 2017 | B2 |
9824216 | Khalid et al. | Nov 2017 | B1 |
9825976 | Gomez et al. | Nov 2017 | B1 |
9825989 | Mehra et al. | Nov 2017 | B1 |
9838408 | Karandikar et al. | Dec 2017 | B1 |
9838411 | Aziz | Dec 2017 | B1 |
9838416 | Aziz | Dec 2017 | B1 |
9838417 | Khalid et al. | Dec 2017 | B1 |
9846776 | Paithane et al. | Dec 2017 | B1 |
9876701 | Caldejon et al. | Jan 2018 | B1 |
9888016 | Amin et al. | Feb 2018 | B1 |
9888019 | Pidathala et al. | Feb 2018 | B1 |
9892261 | Joram et al. | Feb 2018 | B2 |
9904955 | Watters et al. | Feb 2018 | B2 |
9910988 | Vincent et al. | Mar 2018 | B1 |
9912644 | Cunningham | Mar 2018 | B2 |
9912681 | Ismael et al. | Mar 2018 | B1 |
9912684 | Aziz et al. | Mar 2018 | B1 |
9912691 | Mesdaq et al. | Mar 2018 | B2 |
9912698 | Thioux et al. | Mar 2018 | B1 |
9916440 | Paithane et al. | Mar 2018 | B1 |
9921978 | Chan et al. | Mar 2018 | B1 |
9934376 | Ismael | Apr 2018 | B1 |
9934381 | Kindlund et al. | Apr 2018 | B1 |
9946568 | Ismael et al. | Apr 2018 | B1 |
9954890 | Staniford et al. | Apr 2018 | B1 |
9973531 | Thioux | May 2018 | B1 |
10002252 | Ismael et al. | Jun 2018 | B2 |
10019338 | Goradia et al. | Jul 2018 | B1 |
10019573 | Silberman et al. | Jul 2018 | B2 |
10025691 | Ismael et al. | Jul 2018 | B1 |
10025927 | Khalid et al. | Jul 2018 | B1 |
10027689 | Rathor et al. | Jul 2018 | B1 |
10027690 | Aziz et al. | Jul 2018 | B2 |
10027696 | Rivlin et al. | Jul 2018 | B1 |
10033747 | Paithane et al. | Jul 2018 | B1 |
10033748 | Cunningham et al. | Jul 2018 | B1 |
10033753 | Islam et al. | Jul 2018 | B1 |
10033759 | Kabra et al. | Jul 2018 | B1 |
10050998 | Singh | Aug 2018 | B1 |
10063583 | Watters et al. | Aug 2018 | B2 |
10068091 | Aziz et al. | Sep 2018 | B1 |
10075455 | Zafar et al. | Sep 2018 | B2 |
10083302 | Paithane et al. | Sep 2018 | B1 |
10084813 | Eyada | Sep 2018 | B2 |
10089461 | Ha et al. | Oct 2018 | B1 |
10097573 | Aziz | Oct 2018 | B1 |
10104102 | Neumann | Oct 2018 | B1 |
10108446 | Steinberg et al. | Oct 2018 | B1 |
10121000 | Rivlin et al. | Nov 2018 | B1 |
10122746 | Manni et al. | Nov 2018 | B1 |
10133863 | Bu et al. | Nov 2018 | B2 |
10133866 | Kumar et al. | Nov 2018 | B1 |
10146810 | Shiffer et al. | Dec 2018 | B2 |
10148693 | Singh et al. | Dec 2018 | B2 |
10165000 | Aziz et al. | Dec 2018 | B1 |
10169585 | Pilipenko et al. | Jan 2019 | B1 |
10176321 | Abbasi et al. | Jan 2019 | B2 |
10181029 | Ismael et al. | Jan 2019 | B1 |
10191861 | Steinberg et al. | Jan 2019 | B1 |
10192052 | Singh et al. | Jan 2019 | B1 |
10198574 | Thioux et al. | Feb 2019 | B1 |
10200384 | Mushtaq et al. | Feb 2019 | B1 |
10210329 | Malik et al. | Feb 2019 | B1 |
10216927 | Steinberg | Feb 2019 | B1 |
10218740 | Mesdaq et al. | Feb 2019 | B1 |
10242185 | Goradia | Mar 2019 | B1 |
10282548 | Aziz et al. | May 2019 | B1 |
10284574 | Aziz et al. | May 2019 | B1 |
10284575 | Paithane et al. | May 2019 | B2 |
10296437 | Ismael et al. | May 2019 | B2 |
10335738 | Paithane et al. | Jul 2019 | B1 |
10341363 | Vincent et al. | Jul 2019 | B1 |
10341365 | Ha | Jul 2019 | B1 |
10366231 | Singh et al. | Jul 2019 | B1 |
10380343 | Jung et al. | Aug 2019 | B1 |
10395029 | Steinberg | Aug 2019 | B1 |
10404725 | Rivlin et al. | Sep 2019 | B1 |
10417031 | Paithane et al. | Sep 2019 | B2 |
10430586 | Paithane et al. | Oct 2019 | B1 |
10432649 | Bennett et al. | Oct 2019 | B1 |
10445502 | Desphande et al. | Oct 2019 | B1 |
10447728 | Steinberg | Oct 2019 | B1 |
10454950 | Aziz | Oct 2019 | B1 |
10454953 | Amin et al. | Oct 2019 | B1 |
10462173 | Aziz et al. | Oct 2019 | B1 |
10467411 | Pidathala et al. | Nov 2019 | B1 |
10467414 | Kindlund et al. | Nov 2019 | B1 |
10469512 | Ismael | Nov 2019 | B1 |
10474813 | Ismael | Nov 2019 | B1 |
10476906 | Siddiqui | Nov 2019 | B1 |
10476909 | Aziz et al. | Nov 2019 | B1 |
10491627 | Su | Nov 2019 | B1 |
10503904 | Singh et al. | Dec 2019 | B1 |
10505956 | Pidathala et al. | Dec 2019 | B1 |
10511614 | Aziz | Dec 2019 | B1 |
10515214 | Vincent et al. | Dec 2019 | B1 |
10523609 | Subramanian | Dec 2019 | B1 |
10528726 | Ismael | Jan 2020 | B1 |
10534906 | Paithane et al. | Jan 2020 | B1 |
10552610 | Vashisht et al. | Feb 2020 | B1 |
10554507 | Siddiqui et al. | Feb 2020 | B1 |
10565377 | Zheng | Feb 2020 | B1 |
10565378 | Vincent et al. | Feb 2020 | B1 |
10567405 | Aziz | Feb 2020 | B1 |
10572665 | Jung et al. | Feb 2020 | B2 |
10581874 | Khalid et al. | Mar 2020 | B1 |
10581879 | Paithane et al. | Mar 2020 | B1 |
10581898 | Singh | Mar 2020 | B1 |
10587636 | Aziz et al. | Mar 2020 | B1 |
10587647 | Khalid et al. | Mar 2020 | B1 |
10592678 | Ismael et al. | Mar 2020 | B1 |
10601848 | Jeyaraman et al. | Mar 2020 | B1 |
10601863 | Siddiqui | Mar 2020 | B1 |
10601865 | Mesdaq et al. | Mar 2020 | B1 |
10616266 | Otvagin | Apr 2020 | B1 |
10621338 | Pfoh et al. | Apr 2020 | B1 |
10623434 | Aziz et al. | Apr 2020 | B1 |
10637880 | Islam et al. | Apr 2020 | B1 |
10642753 | Steinberg | May 2020 | B1 |
10657251 | Malik et al. | May 2020 | B1 |
10666686 | Singh et al. | May 2020 | B1 |
10671721 | Otvagin et al. | Jun 2020 | B1 |
10671726 | Paithane et al. | Jun 2020 | B1 |
10701091 | Cunningham et al. | Jun 2020 | B1 |
10706149 | Vincent | Jul 2020 | B1 |
10713358 | Sikorski et al. | Jul 2020 | B2 |
10713362 | Vincent et al. | Jul 2020 | B1 |
10715542 | Wei et al. | Jul 2020 | B1 |
10726127 | Steinberg | Jul 2020 | B1 |
10728263 | Neumann | Jul 2020 | B1 |
10735458 | Haq et al. | Aug 2020 | B1 |
10740456 | Ismael et al. | Aug 2020 | B1 |
10747872 | Ha et al. | Aug 2020 | B1 |
10757120 | Aziz et al. | Aug 2020 | B1 |
10757134 | Eyada | Aug 2020 | B1 |
10785255 | Otvagin et al. | Sep 2020 | B1 |
10791138 | Siddiqui et al. | Sep 2020 | B1 |
10795991 | Ross et al. | Oct 2020 | B1 |
10798112 | Siddiqui et al. | Oct 2020 | B2 |
10798121 | Khalid et al. | Oct 2020 | B1 |
10805340 | Goradia | Oct 2020 | B1 |
10805346 | Kumar et al. | Oct 2020 | B2 |
10812513 | Manni et al. | Oct 2020 | B1 |
10817606 | Vincent | Oct 2020 | B1 |
10826931 | Quan et al. | Nov 2020 | B1 |
10826933 | Ismael et al. | Nov 2020 | B1 |
10834107 | Paithane et al. | Nov 2020 | B1 |
10846117 | Steinberg | Nov 2020 | B1 |
10848397 | Siddiqui et al. | Nov 2020 | B1 |
10848521 | Thioux et al. | Nov 2020 | B1 |
10855700 | Jeyaraman et al. | Dec 2020 | B1 |
10868818 | Rathor et al. | Dec 2020 | B1 |
10872151 | Kumar et al. | Dec 2020 | B1 |
10873597 | Mehra et al. | Dec 2020 | B1 |
10887328 | Paithane et al. | Jan 2021 | B1 |
10893059 | Aziz et al. | Jan 2021 | B1 |
10893068 | Khalid et al. | Jan 2021 | B1 |
10902117 | Singh et al. | Jan 2021 | B1 |
10902119 | Vashisht et al. | Jan 2021 | B1 |
10904286 | Liu | Jan 2021 | B1 |
10929266 | Goradia et al. | Feb 2021 | B1 |
11070579 | Kiernan | Jul 2021 | B1 |
20020038430 | Edwards et al. | Mar 2002 | A1 |
20020091819 | Melchione et al. | Jul 2002 | A1 |
20020095607 | Lin-Hendel | Jul 2002 | A1 |
20020169952 | DiSanto et al. | Nov 2002 | A1 |
20020184528 | Shevenell et al. | Dec 2002 | A1 |
20020188887 | Largman et al. | Dec 2002 | A1 |
20030084318 | Schertz | May 2003 | A1 |
20030188190 | Aaron et al. | Oct 2003 | A1 |
20030191957 | Hypponen et al. | Oct 2003 | A1 |
20040015712 | Szor | Jan 2004 | A1 |
20040019832 | Arnold et al. | Jan 2004 | A1 |
20040117478 | Triulzi et al. | Jun 2004 | A1 |
20040117624 | Brandt et al. | Jun 2004 | A1 |
20040236963 | Danford et al. | Nov 2004 | A1 |
20040255161 | Cavanaugh | Dec 2004 | A1 |
20040268147 | Wiederin et al. | Dec 2004 | A1 |
20050021740 | Bar et al. | Jan 2005 | A1 |
20050086523 | Zimmer et al. | Apr 2005 | A1 |
20050091513 | Mitomo et al. | Apr 2005 | A1 |
20050108562 | Khazan et al. | May 2005 | A1 |
20050125195 | Brendel | Jun 2005 | A1 |
20050149726 | Joshi et al. | Jul 2005 | A1 |
20050157662 | Bingham et al. | Jul 2005 | A1 |
20050238005 | Chen et al. | Oct 2005 | A1 |
20050262562 | Gassoway | Nov 2005 | A1 |
20050283839 | Cowburn | Dec 2005 | A1 |
20060010495 | Cohen et al. | Jan 2006 | A1 |
20060015715 | Anderson | Jan 2006 | A1 |
20060015747 | Van de Ven | Jan 2006 | A1 |
20060021029 | Brickell et al. | Jan 2006 | A1 |
20060031476 | Mathes et al. | Feb 2006 | A1 |
20060070130 | Costea et al. | Mar 2006 | A1 |
20060117385 | Mester et al. | Jun 2006 | A1 |
20060123477 | Raghavan et al. | Jun 2006 | A1 |
20060150249 | Gassen et al. | Jul 2006 | A1 |
20060161987 | Levy-Yurista | Jul 2006 | A1 |
20060173992 | Weber et al. | Aug 2006 | A1 |
20060191010 | Benjamin | Aug 2006 | A1 |
20060242709 | Seinfeld et al. | Oct 2006 | A1 |
20060251104 | Koga | Nov 2006 | A1 |
20060288417 | Bookbinder et al. | Dec 2006 | A1 |
20070006288 | Mayfield et al. | Jan 2007 | A1 |
20070006313 | Porras et al. | Jan 2007 | A1 |
20070011174 | Takaragi et al. | Jan 2007 | A1 |
20070016951 | Piccard et al. | Jan 2007 | A1 |
20070064689 | Shin et al. | Mar 2007 | A1 |
20070143827 | Nicodemus et al. | Jun 2007 | A1 |
20070157306 | Elrod et al. | Jul 2007 | A1 |
20070192858 | Lum | Aug 2007 | A1 |
20070208822 | Wang et al. | Sep 2007 | A1 |
20070240218 | Tuvell et al. | Oct 2007 | A1 |
20070240220 | Tuvell et al. | Oct 2007 | A1 |
20070240222 | Tuvell et al. | Oct 2007 | A1 |
20070250930 | Aziz et al. | Oct 2007 | A1 |
20080005782 | Aziz | Jan 2008 | A1 |
20080040710 | Chiriac | Feb 2008 | A1 |
20080072326 | Danford et al. | Mar 2008 | A1 |
20080077793 | Tan et al. | Mar 2008 | A1 |
20080134334 | Kim et al. | Jun 2008 | A1 |
20080141376 | Clausen et al. | Jun 2008 | A1 |
20080184367 | McMillan et al. | Jul 2008 | A1 |
20080189787 | Arnold et al. | Aug 2008 | A1 |
20080307524 | Singh et al. | Dec 2008 | A1 |
20080320594 | Jiang | Dec 2008 | A1 |
20090003317 | Kasralikar et al. | Jan 2009 | A1 |
20090064332 | Porras et al. | Mar 2009 | A1 |
20090077383 | de Monseignat | Mar 2009 | A1 |
20090083855 | Apap et al. | Mar 2009 | A1 |
20090125976 | Wassermann et al. | May 2009 | A1 |
20090126015 | Monastyrsky et al. | May 2009 | A1 |
20090144823 | Lamastra et al. | Jun 2009 | A1 |
20090158430 | Borders | Jun 2009 | A1 |
20090172815 | Gu et al. | Jul 2009 | A1 |
20090198651 | Shiffer et al. | Aug 2009 | A1 |
20090198670 | Shiffer et al. | Aug 2009 | A1 |
20090198689 | Frazier et al. | Aug 2009 | A1 |
20090199274 | Frazier et al. | Aug 2009 | A1 |
20090241190 | Todd et al. | Sep 2009 | A1 |
20090300589 | Watters et al. | Dec 2009 | A1 |
20100017546 | Poo et al. | Jan 2010 | A1 |
20100030996 | Butler, II | Feb 2010 | A1 |
20100058474 | Hicks | Mar 2010 | A1 |
20100077481 | Polyakov et al. | Mar 2010 | A1 |
20100115621 | Staniford et al. | May 2010 | A1 |
20100132038 | Zaitsev | May 2010 | A1 |
20100154056 | Smith et al. | Jun 2010 | A1 |
20100192223 | Ismael et al. | Jul 2010 | A1 |
20100281542 | Stolfo et al. | Nov 2010 | A1 |
20110078794 | Manni et al. | Mar 2011 | A1 |
20110093951 | Aziz | Apr 2011 | A1 |
20110099633 | Aziz | Apr 2011 | A1 |
20110099635 | Silberman et al. | Apr 2011 | A1 |
20110167493 | Song et al. | Jul 2011 | A1 |
20110173213 | Frazier et al. | Jul 2011 | A1 |
20110178942 | Watters et al. | Jul 2011 | A1 |
20110219450 | McDougal et al. | Sep 2011 | A1 |
20110225624 | Sawhney et al. | Sep 2011 | A1 |
20110247072 | Staniford et al. | Oct 2011 | A1 |
20110307954 | Melnik et al. | Dec 2011 | A1 |
20110307955 | Kaplan et al. | Dec 2011 | A1 |
20110307956 | Yermakov et al. | Dec 2011 | A1 |
20110314546 | Aziz et al. | Dec 2011 | A1 |
20120117652 | Manni et al. | May 2012 | A1 |
20120174186 | Aziz et al. | Jul 2012 | A1 |
20120174218 | McCoy et al. | Jul 2012 | A1 |
20120210423 | Friedrichs et al. | Aug 2012 | A1 |
20120222121 | Staniford et al. | Aug 2012 | A1 |
20120233698 | Watters et al. | Sep 2012 | A1 |
20120278886 | Luna | Nov 2012 | A1 |
20120331553 | Aziz et al. | Dec 2012 | A1 |
20130036472 | Aziz | Feb 2013 | A1 |
20130047257 | Aziz | Feb 2013 | A1 |
20130097706 | Titonis et al. | Apr 2013 | A1 |
20130185795 | Winn et al. | Jul 2013 | A1 |
20130227691 | Aziz et al. | Aug 2013 | A1 |
20130232577 | Watters et al. | Sep 2013 | A1 |
20130247186 | LeMasters | Sep 2013 | A1 |
20130282426 | Watters et al. | Oct 2013 | A1 |
20130291109 | Staniford et al. | Oct 2013 | A1 |
20130318038 | Shiffer et al. | Nov 2013 | A1 |
20130318073 | Shiffer et al. | Nov 2013 | A1 |
20130325791 | Shiffer et al. | Dec 2013 | A1 |
20130325792 | Shiffer et al. | Dec 2013 | A1 |
20130325871 | Shiffer et al. | Dec 2013 | A1 |
20130325872 | Shiffer et al. | Dec 2013 | A1 |
20140032875 | Butler | Jan 2014 | A1 |
20140181131 | Ross | Jun 2014 | A1 |
20140189687 | Jung et al. | Jul 2014 | A1 |
20140189866 | Shiffer et al. | Jul 2014 | A1 |
20140189882 | Jung et al. | Jul 2014 | A1 |
20140237600 | Silberman et al. | Aug 2014 | A1 |
20140280245 | Wilson | Sep 2014 | A1 |
20140283037 | Sikorski et al. | Sep 2014 | A1 |
20140283063 | Thompson et al. | Sep 2014 | A1 |
20140297494 | Watters et al. | Oct 2014 | A1 |
20140331321 | Witherspoon | Nov 2014 | A1 |
20140337836 | Ismael | Nov 2014 | A1 |
20140344926 | Cunningham et al. | Nov 2014 | A1 |
20140380473 | Bu et al. | Dec 2014 | A1 |
20140380474 | Paithane et al. | Dec 2014 | A1 |
20150007312 | Pidathala et al. | Jan 2015 | A1 |
20150047032 | Hannis | Feb 2015 | A1 |
20150096022 | Vincent et al. | Apr 2015 | A1 |
20150096023 | Mesdaq et al. | Apr 2015 | A1 |
20150096024 | Haq et al. | Apr 2015 | A1 |
20150096025 | Ismael | Apr 2015 | A1 |
20150180886 | Staniford et al. | Jun 2015 | A1 |
20150186645 | Aziz et al. | Jul 2015 | A1 |
20150199513 | Ismael et al. | Jul 2015 | A1 |
20150199531 | Ismael et al. | Jul 2015 | A1 |
20150199532 | Ismael et al. | Jul 2015 | A1 |
20150220735 | Paithane et al. | Aug 2015 | A1 |
20150372980 | Eyada | Dec 2015 | A1 |
20160004869 | Ismael et al. | Jan 2016 | A1 |
20160006756 | Ismael et al. | Jan 2016 | A1 |
20160044000 | Cunningham | Feb 2016 | A1 |
20160127393 | Aziz et al. | May 2016 | A1 |
20160191547 | Zafar et al. | Jun 2016 | A1 |
20160191550 | Ismael et al. | Jun 2016 | A1 |
20160241580 | Watters et al. | Aug 2016 | A1 |
20160241581 | Watters et al. | Aug 2016 | A1 |
20160261612 | Mesdaq et al. | Sep 2016 | A1 |
20160285914 | Singh et al. | Sep 2016 | A1 |
20160301703 | Aziz | Oct 2016 | A1 |
20160323295 | Joram et al. | Nov 2016 | A1 |
20160335110 | Paithane et al. | Nov 2016 | A1 |
20170034185 | Green | Feb 2017 | A1 |
20170070523 | Bailey | Mar 2017 | A1 |
20170083703 | Abbasi et al. | Mar 2017 | A1 |
20170195353 | Taylor | Jul 2017 | A1 |
20180013770 | Ismael | Jan 2018 | A1 |
20180048660 | Paithane et al. | Feb 2018 | A1 |
20180069891 | Watters et al. | Mar 2018 | A1 |
20180121316 | Ismael et al. | May 2018 | A1 |
20180288077 | Siddiqui et al. | Oct 2018 | A1 |
20190014141 | Segal | Jan 2019 | A1 |
20190104154 | Kumar et al. | Apr 2019 | A1 |
20190132334 | Johns et al. | May 2019 | A1 |
20190207966 | Vashisht et al. | Jul 2019 | A1 |
20190207967 | Vashisht et al. | Jul 2019 | A1 |
20190364061 | Higbee | Nov 2019 | A1 |
20200013124 | Obee | Jan 2020 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
2840992 | Jan 2013 | CA |
105074717 | Nov 2015 | CN |
2439806 | Jan 2008 | GB |
2490431 | Oct 2012 | GB |
0206928 | Jan 2002 | WO |
0223805 | Mar 2002 | WO |
2007117636 | Oct 2007 | WO |
2008041950 | Apr 2008 | WO |
2011084431 | Jul 2011 | WO |
2011112348 | Sep 2011 | WO |
2012075336 | Jun 2012 | WO |
2012145066 | Oct 2012 | WO |
2013067505 | May 2013 | WO |
Other Publications

Entry |
---|
“Mining Specification of Malicious Behavior”—Jha et al., UCSB, Sep. 2007, https://www.cs.ucsb.edu/~chris/research/doc/esec07_mining.pdf. |
“Network Security: NetDetector—Network Intrusion Forensic System (NIFS) Whitepaper”, (“NetDetector Whitepaper”), (2003). |
“When Virtual is Better Than Real”, IEEE Xplore Digital Library, available at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=990073, (Dec. 7, 2013). |
Abdullah, et al., Visualizing Network Data for Intrusion Detection, 2005 IEEE Workshop on Information Assurance and Security, pp. 100-108. |
Adetoye, Adedayo , et al., “Network Intrusion Detection & Response System”, (“Adetoye”), (Sep. 2003). |
Apostolopoulos, George; Hassapis, Constantinos; “V-eM: A Cluster of Virtual Machines for Robust, Detailed, and High-Performance Network Emulation”, 14th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, Sep. 11-14, 2006, pp. 117-126. |
Aura, Tuomas, “Scanning electronic documents for personally identifiable information”, Proceedings of the 5th ACM workshop on Privacy in electronic society. ACM, 2006. |
Baecher, “The Nepenthes Platform: An Efficient Approach to collect Malware”, Springer-verlag Berlin Heidelberg, (2006), pp. 165-184. |
Bayer, et al., “Dynamic Analysis of Malicious Code”, J Comput Virol, Springer-Verlag, France., (2006), pp. 67-77. |
Boubalos, Chris , “extracting syslog data out of raw pcap dumps, seclists.org, Honeypots mailing list archives”, available at http://seclists.org/honeypots/2003/q2/319 (“Boubalos”), (Jun. 5, 2003). |
Chaudet, C. , et al., “Optimal Positioning of Active and Passive Monitoring Devices”, International Conference on Emerging Networking Experiments and Technologies, Proceedings of the 2005 ACM Conference on Emerging Network Experiment and Technology, CoNEXT '05, Toulousse, France, (Oct. 2005), pp. 71-82. |
Chen, P. M. and Noble, B. D., “When Virtual is Better Than Real, Department of Electrical Engineering and Computer Science”, University of Michigan (“Chen”) (2001). |
Cisco “Intrusion Prevention for the Cisco ASA 5500-x Series” Data Sheet (2012). |
Cohen, M.I. , “PyFlag—An advanced network forensic framework”, Digital investigation 5, Elsevier, (2008), pp. S112-S120. |
Costa, M. , et al., “Vigilante: End-to-End Containment of Internet Worms”, SOSP '05, Association for Computing Machinery, Inc., Brighton U.K., (Oct. 23-26, 2005). |
Didier Stevens, “Malicious PDF Documents Explained”, Security & Privacy, IEEE, IEEE Service Center, Los Alamitos, CA, US, vol. 9, No. 1, Jan. 1, 2011, pp. 80-82, XP011329453, ISSN: 1540-7993, DOI: 10.1109/MSP.2011.14. |
Distler, “Malware Analysis: An Introduction”, SANS Institute InfoSec Reading Room, SANS Institute, (2007). |
Dunlap, George W. , et al., “ReVirt: Enabling Intrusion Analysis through Virtual-Machine Logging and Replay”, Proceeding of the 5th Symposium on Operating Systems Design and Implementation, USENIX Association, (“Dunlap”), (Dec. 9, 2002). |
FireEye Malware Analysis & Exchange Network, Malware Protection System, FireEye Inc., 2010. |
FireEye Malware Analysis, Modern Malware Forensics, FireEye Inc., 2010. |
FireEye v.6.0 Security Target, pp. 1-35, Version 1.1, FireEye Inc., May 2011. |
Goel, et al., Reconstructing System State for Intrusion Analysis, Apr. 2008 SIGOPS Operating Systems Review, vol. 42 Issue 3, pp. 21-28. |
Gregg Keizer: “Microsoft's HoneyMonkeys Show Patching Windows Works”, Aug. 8, 2005, XP055143386, Retrieved from the Internet: URL:http://www.informationweek.com/microsofts-honeymonkeys-show-patching-windows-works/d/d-id/1035069? [retrieved on Jun. 1, 2016]. |
Heng Yin et al, Panorama: Capturing System-Wide Information Flow for Malware Detection and Analysis, Research Showcase @ CMU, Carnegie Mellon University, 2007. |
Hiroshi Shinotsuka, Malware Authors Using New Techniques to Evade Automated Threat Analysis Systems, Oct. 26, 2012, http://www.symantec.com/connect/blogs/, pp. 1-4. |
Idika et al., A-Survey-of-Malware-Detection-Techniques, Feb. 2, 2007, Department of Computer Science, Purdue University. |
Isohara, Takamasa, Keisuke Takemori, and Ayumu Kubota. “Kernel-based behavior analysis for android malware detection.” Computational intelligence and Security (CIS), 2011 Seventh International Conference on. IEEE, 2011. |
Kaeo, Merike , “Designing Network Security”, (“Kaeo”), (Nov. 2003). |
Kevin A Roundy et al: “Hybrid Analysis and Control of Malware”, Sep. 15, 2010, Recent Advances in Intrusion Detection, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 317-338, XP019150454 ISBN:978-3-642-15511-6. |
Khaled Salah et al: “Using Cloud Computing to Implement a Security Overlay Network”, Security & Privacy, IEEE, IEEE Service Center, Los Alamitos, CA, US, vol. 11, No. 1, Jan. 1, 2013 (Jan. 1, 2013). |
Kim, H. , et al., “Autograph: Toward Automated, Distributed Worm Signature Detection”, Proceedings of the 13th Usenix Security Symposium (Security 2004), San Diego, (Aug. 2004), pp. 271-286. |
King, Samuel T., et al., “Operating System Support for Virtual Machines”, (“King”), (2003). |
Kreibich, C. , et al., “Honeycomb-Creating Intrusion Detection Signatures Using Honeypots”, 2nd Workshop on Hot Topics in Networks (HotNets-11), Boston, USA, (2003). |
Kristoff, J. , “Botnets, Detection and Mitigation: DNS-Based Techniques”, NU Security Day, (2005), 23 pages. |
Lastline Labs, The Threat of Evasive Malware, Feb. 25, 2013, Lastline Labs, pp. 1-8. |
Li et al., A VMM-Based System Call Interposition Framework for Program Monitoring, Dec. 2010, IEEE 16th International Conference on Parallel and Distributed Systems, pp. 706-711. |
Lindorfer, Martina, Clemens Kolbitsch, and Paolo Milani Comparetti. “Detecting environment-sensitive malware.” Recent Advances in Intrusion Detection. Springer Berlin Heidelberg, 2011. |
Marchette, David J., “Computer Intrusion Detection and Network Monitoring: A Statistical Viewpoint”, (“Marchette”), (2001). |
Moore, D. , et al., “Internet Quarantine: Requirements for Containing Self-Propagating Code”, INFOCOM, vol. 3, (Mar. 30-Apr. 3, 2003), pp. 1901-1910. |
Morales, Jose A., et al., “Analyzing and exploiting network behaviors of malware.”, Security and Privacy in Communication Networks. Springer Berlin Heidelberg, 2010. 20-34. |
Mori, Detecting Unknown Computer Viruses, 2004, Springer-Verlag Berlin Heidelberg. |
Natvig, Kurt , “SANDBOXII: Internet”, Virus Bulletin Conference, (“Natvig”), (Sep. 2002). |
NetBIOS Working Group. Protocol Standard for a NetBIOS Service on a TCP/UDP transport: Concepts and Methods. STD 19, RFC 1001, Mar. 1987. |
Newsome, J. , et al., “Dynamic Taint Analysis for Automatic Detection, Analysis, and Signature Generation of Exploits on Commodity Software”, In Proceedings of the 12th Annual Network and Distributed System Security, Symposium (NDSS '05), (Feb. 2005). |
Nojiri, D. , et al., “Cooperation Response Strategies for Large Scale Attack Mitigation”, DARPA Information Survivability Conference and Exposition, vol. 1, (Apr. 22-24, 2003), pp. 293-302. |
Oberheide et al., CloudAV: N-Version Antivirus in the Network Cloud, 17th USENIX Security Symposium (USENIX Security '08), Jul. 28-Aug. 1, 2008, San Jose, CA. |
Reiner Sailer, Enriquillo Valdez, Trent Jaeger, Ronald Perez, Leendert van Doorn, John Linwood Griffin, Stefan Berger, sHype: Secure Hypervisor Approach to Trusted Virtualized Systems (Feb. 2, 2005) (“Sailer”). |
Silicon Defense, “Worm Containment in the Internal Network”, (Mar. 2003), pp. 1-25. |
Singh, S. , et al., “Automated Worm Fingerprinting”, Proceedings of the ACM/USENIX Symposium on Operating System Design and Implementation, San Francisco, California, (Dec. 2004). |
Thomas H. Ptacek, and Timothy N. Newsham , “Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection”, Secure Networks, (“Ptacek”), (Jan. 1998). |
Venezia, Paul , “NetDetector Captures Intrusions”, InfoWorld Issue 27, (“Venezia”), (Jul. 14, 2003). |
Vladimir Getov: “Security as a Service in Smart Clouds—Opportunities and Concerns”, Computer Software and Applications Conference (COMPSAC), 2012 IEEE 36th Annual, IEEE, Jul. 16, 2012 (Jul. 16, 2012). |
Wahid et al., Characterising the Evolution in Scanning Activity of Suspicious Hosts, Oct. 2009, Third International Conference on Network and System Security, pp. 344-350. |
Whyte, et al., “DNS-Based Detection of Scanning Worms in an Enterprise Network”, Proceedings of the 12th Annual Network and Distributed System Security Symposium, (Feb. 2005), 15 pages. |
Williamson, Matthew M., “Throttling Viruses: Restricting Propagation to Defeat Malicious Mobile Code”, ACSAC Conference, Las Vegas, NV, USA, (Dec. 2002), pp. 1-9. |
Yuhei Kawakoya et al: “Memory behavior-based automatic malware unpacking in stealth debugging environment”, Malicious and Unwanted Software (Malware), 2010 5th International Conference on, IEEE, Piscataway, NJ, USA, Oct. 19, 2010, pp. 39-46, XP031833827, ISBN:978-1-4244-8-9353-1. |
Zhang et al., The Effects of Threading, Infection Time, and Multiple-Attacker Collaboration on Malware Propagation, Sep. 2009, IEEE 28th International Symposium on Reliable Distributed Systems, pp. 73-82. |
Related Publications

Number | Date | Country | |
---|---|---|---|
20200252428 A1 | Aug 2020 | US |
Provisional Applications

Number | Date | Country | |
---|---|---|---|
62784304 | Dec 2018 | US |