Various embodiments concern computer programs and associated computer-implemented techniques for discovering and then remediating scams carried out over email.
Email has become vastly more sophisticated as the Internet now connects millions of individuals in real time. These advancements in connectivity have incentivized cyber actors (also referred to as “attackers”) to send malicious emails in greater numbers than ever before. Because email represents the primary communication channel for most enterprises (also referred to as “companies” or “organizations”), it is a primary point of entry for attackers.
Historically, enterprises employed secure email gateways to protect on-premises email. A secure email gateway is a mechanism—implemented in hardware or software—that monitors inbound and outbound emails to prevent the transmission of unwanted emails. However, such an approach is largely unsuitable for examining the vast number of emails handled by collaboration suites such as Microsoft Office 365® and G Suite™. For that reason, enterprises have begun employing security operations center (SOC) analysts who use security tools that employ artificial intelligence models, machine learning models, and filters to stop malware and email scams. Examples of email scams include phishing campaigns and business email compromise (BEC) campaigns. As an example, some enterprises define, prioritize, and respond to incidents through an approach referred to as mail-focused Security Orchestration, Automation, and Response (M-SOAR).
Various features of the technologies described herein will become more apparent to those skilled in the art from a study of the Detailed Description in conjunction with the drawings. Embodiments are illustrated by way of example and not limitation in the drawings. While the drawings depict various embodiments for the purpose of illustration, those skilled in the art will recognize that alternative embodiments may be employed without departing from the principles of the technologies. Accordingly, while specific embodiments are shown in the drawings, the technology is amenable to various modifications.
Email account compromise represents one type of business email compromise (BEC) campaign. Traditionally, enterprises have protected themselves against BEC campaigns by employing defenses such as anti-spam filters that quarantine malicious emails, rules that flag emails with extensions similar to the domain of the enterprise, and identification schemes that cause internal emails to be visibly distinguishable from external emails. But these approaches are largely ineffective in discovering instances of email account compromise since the attacks originate from within the enterprise. This is problematic due to the significant threat that email account compromise represents.
To address this issue, some enterprises have begun employing security operations center (SOC) analysts who are responsible for discovering these attacks and then performing the actions necessary to protect those enterprises. For example, upon discovering an email indicative of email account compromise, a SOC analyst may define a rule that is intended to detect similar emails. When asked to triage emails that have been reported by employees as potentially malicious, SOC analysts have traditionally done the following:
This process is not only burdensome due to the amount of time involved, but is also inconsistent and imprecise due to its reliance on the judgment of SOC analysts. Although some SOC analysts employ tools to help with the aforementioned tasks, the overall process is still prone to errors. Accordingly, there is a need for a product that can partially or entirely replace this labor-intensive process for investigating potential threats.
Introduced here are computer programs and computer-implemented techniques for discovering malicious emails and then remediating the threat posed by those malicious emails in an automated manner. As further discussed below, a threat detection platform (or simply “platform”) may monitor a mailbox to which employees of an enterprise are able to forward emails deemed to be suspicious for analysis. This mailbox may be referred to as an “abuse mailbox” or “phishing mailbox.” Generally, an abuse mailbox will be associated with a single enterprise whose employees are able to forward emails to an email address (or simply “address”) associated with the abuse mailbox. The threat detection platform can examine emails contained in the abuse mailbox and then determine whether any of those emails represent threats to the security of the enterprise. For example, the threat detection platform may classify each email contained in the abuse mailbox as being malicious or non-malicious. Said another way, the threat detection platform may be responsible for verifying whether emails forwarded to the abuse mailbox are safe. Thereafter, the threat detection platform may determine what remediation actions, if any, are appropriate for addressing the threat posed by those emails determined to be malicious. The appropriate remediation actions may be based on the type, count, or risk of malicious emails.
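The triage flow described above can be illustrated with a brief sketch. This is not the platform's actual code; the names (ReportedEmail, classify, select_actions), the 0.5 risk threshold, and the specific remediation actions are all hypothetical, chosen only to show how remediation might vary with the type, count, and risk of malicious emails.

```python
# Hypothetical sketch of abuse-mailbox triage: classify reported emails,
# then pick remediation actions based on count and risk.
from dataclasses import dataclass

@dataclass
class ReportedEmail:
    sender: str
    subject: str
    risk_score: float  # 0.0 (benign) to 1.0 (high risk), produced upstream

def classify(email: ReportedEmail) -> str:
    """Classify a reported email as malicious or non-malicious."""
    return "malicious" if email.risk_score >= 0.5 else "non-malicious"

def select_actions(emails: list[ReportedEmail]) -> list[str]:
    """Pick remediation actions based on the malicious emails found."""
    malicious = [e for e in emails if classify(e) == "malicious"]
    actions = []
    if malicious:
        actions.append("quarantine_reported_emails")
    if len(malicious) > 1:
        # Multiple similar reports suggest a coordinated campaign.
        actions.append("sweep_other_inboxes")
    if any(e.risk_score >= 0.9 for e in malicious):
        actions.append("alert_soc_analyst")
    return actions

reports = [
    ReportedEmail("support-xyz@gmail.com", "Reset your password", 0.95),
    ReportedEmail("newsletter@example.com", "Weekly digest", 0.1),
    ReportedEmail("support-abc@gmail.com", "Reset your password", 0.7),
]
print(select_actions(reports))
```

In this sketch, two of the three reports are deemed malicious, so the selected actions escalate from quarantining the reported emails to sweeping other inboxes and alerting an analyst.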
With the abuse mailbox, the threat detection platform may be able to accomplish several goals.
First, the threat detection platform may use the abuse mailbox to retain and then display all emails reported by employees as suspicious. In some embodiments, the abuse mailbox is designed so that the reported emails can be filtered, searched, or queried using the threat detection platform. Additionally, the threat detection platform may be able to parse the reported emails to determine whether any emails have been delivered as part of a campaign. If the threat detection platform determines that a given email was delivered as part of a campaign, the threat detection platform may be able to identify and then extract other emails delivered as part of the campaign from the inboxes of other employees.
Second, the threat detection platform may place judgment on each email forwarded to the abuse mailbox. For example, each reported email could be classified as either malicious or non-malicious. In some embodiments, the abuse mailbox is designed to display the attack type for those reported emails deemed to be malicious. Such an approach ensures that an individual (e.g., a SOC analyst) can readily establish the types of malicious emails being received by employees. Moreover, the threat detection platform can be configured to generate notifications based on the analysis of the reported emails. For example, the threat detection platform may generate an alert responsive to discovering that a reported email was deemed malicious, or the threat detection platform may generate a reminder that includes information regarding a reported email. As further discussed below, the abuse mailbox may persist details of campaigns discovered through the analysis of reported emails. These details may include the targets, content, strategy, and the like.
Third, the threat detection platform may automatically remediate emails deemed to be malicious. Assume, for example, that the threat detection platform discovers that an email reported by an employee was delivered as part of a campaign. In such a situation, the threat detection platform may examine the inboxes of other employees and then extract similar or identical emails delivered as part of the campaign. Additionally or alternatively, the threat detection platform may create a filter that is designed to identify similar or identical emails when applied to inbound emails. The threat detection platform may identify, establish, or otherwise determine the appropriate remediation action(s) based on information that is known about a malicious email. For example, the threat detection platform may prompt an employee to reset a password for a corresponding email account responsive to determining that she opened a malicious email.
Fourth, the threat detection platform may permit individuals (e.g., SOC analysts) to remediate or restore reported emails. In some embodiments, emails contained in the abuse mailbox can be moved into a folder for deletion or deleted permanently. The abuse mailbox may also permit emails to be restored. That is, the threat detection platform may return an email contained in the abuse mailbox to its intended destination upon receiving input indicative of a request to do so. Moreover, the threat detection platform may update—either continually or periodically—the remediation state of individual emails, batches of emails related to campaigns, etc.
The success of an abuse mailbox can be measured in several respects, namely, from the user and performance perspectives. Further information on these perspectives can be found in Table I. Note that the term “user” refers to an individual who interacts with the abuse mailbox (and, more specifically, the emails contained therein) through interfaces generated by the threat detection platform.
Accordingly, the abuse mailbox may be used as an incident response tool by individuals, such as SOC analysts or other security professionals, to review emails reported by employees as suspicious. By examining these employee-reported emails, email scams such as phishing campaigns and BEC campaigns can be more easily discovered. Thus, the abuse mailbox may be employed by an individual to perform M-SOAR. While email scams are primarily discoverable in external emails, some email scams may be discoverable in internal emails. At a high level, the abuse mailbox is designed to replace or supplement the conventional investigation process in which SOC analysts manually investigate reported emails, for example, by examining employee device data, network telemetry data, threat intelligence data feeds, and the like, and then making a determination about what to do. As an example, SOC analysts may be responsible for specifying what actions should be performed if an employee clicked a link in a malicious email. Moreover, SOC analysts are then responsible for searching the mail tenant to discover whether the same email was received by any other employees. Because each of these steps has historically been completed manually by SOC analysts, the process is slow and inconsistent. For instance, different SOC analysts may investigate different information and/or reach different determinations.
Embodiments may be described in the context of computer-executable instructions for the purpose of illustration. However, aspects of the technology can be implemented via hardware, firmware, or software. As an example, a set of algorithms representative of a computer-implemented model (or simply “model”) may be applied to an email contained in the abuse mailbox in order to determine whether the email is malicious. Based on the output produced by the model, the threat detection platform can classify the email based on its likelihood of being a threat and then determine what remediation actions, if any, are necessary to address the threat.
Terminology
References in this description to “an embodiment,” “one embodiment,” or “some embodiments” mean that the particular feature, function, structure, or characteristic being described is included in at least one embodiment. Occurrences of such phrases do not necessarily refer to the same embodiment, nor are they necessarily referring to alternative embodiments that are mutually exclusive of one another.
Unless the context clearly requires otherwise, the terms “comprise,” “comprising,” and “comprised of” are to be construed in an inclusive sense rather than an exclusive or exhaustive sense (i.e., in the sense of “including but not limited to”). The terms “connected,” “coupled,” and any variants thereof are intended to include any connection or coupling between two or more elements, either direct or indirect. The connection/coupling can be physical, logical, or a combination thereof. For example, devices may be electrically or communicatively coupled to one another despite not sharing a physical connection.
The term “based on” is also to be construed in an inclusive sense rather than an exclusive or exhaustive sense. Thus, unless otherwise noted, the term “based on” is intended to mean “based at least in part on.”
The term “module” refers broadly to software components, firmware components, and/or hardware components. Modules are typically functional components that generate data or other output(s) based on specified input(s). A module may be self-contained. A computer program may include one or more modules. Thus, a computer program may include multiple modules responsible for completing different tasks or a single module responsible for completing all tasks.
When used in reference to a list of multiple items, the word “or” is intended to cover all of the following interpretations: any of the items in the list, all of the items in the list, and any combination of items in the list.
The sequences of steps performed in any of the processes described here are exemplary. However, unless contrary to physical possibility, the steps may be performed in various sequences and combinations. For example, steps could be added to, or removed from, the processes described here. Similarly, steps could be replaced or reordered. Thus, descriptions of any processes are intended to be open-ended.
Overview of Threat Detection Platform
At a high level, the threat detection platform 100 can acquire data related to the digital conduct of accounts associated with employees and then determine, based on an analysis of the data, how to handle threats in a targeted manner. The term “account” may refer to digital profiles with which employees can engage in digital activities. These digital profiles are normally used to perform activities such as exchanging emails and messages, and thus may also be referred to as “email accounts” or “messaging accounts.” The term “digital conduct,” meanwhile, may refer to the digital activities that are performed with those accounts. Examples of digital activities include transmitting and receiving digital communications, creating, modifying, and deleting filters to be applied to incoming digital communications, initiating sign-in activities, and the like. Examples of digital communications include emails and messages.
As shown in
The threat detection platform 100 can be implemented, partially or entirely, within an enterprise network 112, a remote computing environment (e.g., through which data regarding digital conduct is routed for analysis), a gateway, or another suitable location. The remote computing environment can belong to, or be managed by, the enterprise or another entity. In some embodiments, the threat detection platform 100 is integrated into the enterprise's email system (e.g., at the gateway) as part of an inline deployment. In other embodiments, the threat detection platform 100 is integrated into the enterprise's email system via an application programming interface (API) such as the Microsoft Outlook® API. In such embodiments, the threat detection platform 100 may obtain data via the API. Thus, the threat detection platform 100 can supplement and/or supplant other security products employed by the enterprise.
In a first variation, the threat detection platform 100 is maintained by a threat service (also referred to as a “security service”) that has access to multiple enterprises' data. In this variation, the threat detection platform 100 can route data that is, for example, related to incoming emails to a computing environment managed by the security service. The computing environment may be an instance on Amazon Web Services® (AWS). The threat detection platform 100 may maintain one or more databases for each enterprise that include, for example, organizational charts, attribute baselines, communication patterns, and the like. Additionally or alternatively, the threat detection platform 100 may maintain federated databases that are shared amongst multiple entities. Examples of federated databases include databases specifying vendors and/or individuals who have been deemed fraudulent, domains from which incoming emails determined to represent security threats originated, and the like. The security service may maintain different instances of the threat detection platform 100 for different enterprises, or the security service may maintain a single instance of the threat detection platform 100 for multiple enterprises. The data hosted in these instances can be obfuscated, encrypted, hashed, depersonalized (e.g., by removing personal identifying information), or otherwise secured or secreted. Accordingly, each instance of the threat detection platform 100 may only be able to access/process data related to the accounts associated with the corresponding enterprise(s).
In a second variation, the threat detection platform 100 is maintained by the enterprise whose accounts are being monitored, either remotely or on premises. In this variation, all relevant data may be hosted by the enterprise itself, and any information to be shared across multiple enterprises can be transmitted to a computing system that is maintained by the security service or a third party.
As shown in
The enterprise network 112 may be a mobile network, wired network, wireless network, or some other communication network maintained by the enterprise or an operator on behalf of the enterprise. As noted above, the enterprise may utilize a security service to examine emails (among other things) to discover potential threats. The enterprise may grant permission to the security service to monitor the enterprise network 112 by examining emails (e.g., incoming emails or outgoing emails) and then addressing those emails that represent threats. For example, the threat detection platform 100 may be permitted to remediate the threats posed by those emails, or the threat detection platform 100 may be permitted to surface notifications regarding the threats posed by those emails. In some embodiments, the enterprise further grants permission to the security service to obtain data regarding other digital activities involving the enterprise (and, more specifically, employees of the enterprise) in order to build a profile that specifies communication patterns, behavioral traits, normal content of emails, etc. For example, the threat detection platform 100 may identify the filters that have been created and/or destroyed by each employee to infer whether any significant variations in behavior have occurred.
The threat detection platform 100 may manage one or more databases in which data can be stored. Examples of such data include enterprise data (e.g., email data, message data, sign-in data, and mail filter data), remediation policies, communication patterns, behavioral traits, and the like. The data stored in the database(s) may be determined by the threat detection platform 100 (e.g., learned from data available on the enterprise network 112), provided by the enterprise, or retrieved from an external database (e.g., associated with LinkedIn®, Microsoft Office 365®, or G Suite™). The threat detection platform 100 may also store outputs produced by the various modules, including machine- and human-readable information regarding insights into threats and any remediation actions that were taken.
As shown in
A profile could include a number of behavioral traits associated with the corresponding account. For example, the profile generator 102 may determine the behavioral traits based on the email data, message data, sign-in data, or mail filter data obtained from the enterprise network 112. The email data may include information on the senders of past emails received by a given email account, content of those past emails, frequency of those past emails, temporal patterns of those past emails, topics of those past emails, geographical locations from which those past emails originated, formatting characteristics (e.g., usage of HTML, fonts, styles, etc.), and more. Thus, the profile generator 102 may attempt to build a profile for each email account that represents a model of normal behavior of the corresponding employee. As further discussed below, the profiles may be helpful in identifying digital activities and communications that indicate that a threat to the security of the enterprise may exist.
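The notion of a profile that models an employee's normal behavior can be sketched briefly. This is an illustrative simplification, not the profile generator's actual design: the AccountProfile class and its "familiar sender" heuristic are hypothetical, and a real profile would also model timing, topics, geographical origin, formatting, and other traits mentioned above.

```python
# Hypothetical sketch of a per-account profile built from past email data,
# here reduced to a single trait: how often each sender has been seen.
from collections import Counter

class AccountProfile:
    """Models 'normal' behavior for one email account from past emails."""
    def __init__(self) -> None:
        self.sender_counts: Counter = Counter()

    def observe(self, sender: str) -> None:
        # Record one past email from this sender.
        self.sender_counts[sender.lower()] += 1

    def is_familiar_sender(self, sender: str, min_count: int = 3) -> bool:
        # A sender is "familiar" once seen at least min_count times.
        return self.sender_counts[sender.lower()] >= min_count

profile = AccountProfile()
for past_sender in ["alice@companyabc.com"] * 5 + ["bob@companyabc.com"] * 2:
    profile.observe(past_sender)

print(profile.is_familiar_sender("alice@companyabc.com"))  # frequent sender
print(profile.is_familiar_sender("support-xyz@gmail.com"))  # never seen
```

An email from a never-before-seen sender would not by itself be malicious, but deviation from the profile is one signal the scoring described below can weigh.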
The monitoring module 106 may be responsible for monitoring communications (e.g., messages and emails) handled by the enterprise network 112. These communications may include inbound emails (e.g., external and internal emails) received by accounts associated with employees of the enterprise, outbound emails (e.g., external and internal emails) transmitted by those accounts, and messages exchanged between those accounts. In some embodiments, the monitoring module 106 is able to monitor inbound emails in near real time so that appropriate action can be taken if a malicious email is discovered. For example, if an inbound email is found to be similar to one that the threat detection platform 100 determined was delivered as part of a phishing campaign or a BEC campaign (e.g., based on an output produced by the scoring module 108), then the inbound email may be prevented from reaching its intended destination by the monitoring module 106 at least temporarily. In some embodiments, the monitoring module 106 is able to monitor communications only upon the threat detection platform 100 being granted permission by the enterprise (and thus given access to the enterprise network 112).
The scoring module 108 may be responsible for examining digital activities and communications to determine the likelihood that a security threat exists. For example, the scoring module 108 may examine emails forwarded by employees to an abuse mailbox to determine whether those emails were delivered as part of an email scam, as further discussed below. As another example, the scoring module 108 may examine each incoming email to determine how its characteristics compare to past emails received by the intended recipient. In such embodiments, the scoring module 108 may determine whether characteristics such as timing, formatting, and location of origination (e.g., in terms of sender email address or geographical location) match a pattern of past emails that have been determined to be non-malicious. For instance, the scoring module 108 may determine that an email is likely to be malicious if the sender email address (support-xyz@gmail.com) differs from an email address (John.Doe@CompanyABC.com) that is known to be associated with the alleged sender (John Doe). As another example, the scoring module 108 may determine that an account may have been compromised if the account performs a sign-in activity that is impossible or improbable given its most recent sign-in activity.
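The sender-mismatch check in the John Doe example can be sketched as a simple heuristic. This is one illustrative signal among many, not the scoring module's actual implementation; the KNOWN_ADDRESSES lookup table and function name are hypothetical.

```python
# Hypothetical heuristic: flag an email when the display name matches a
# known individual but the sending address differs from that individual's
# known address, as in the John Doe example above.
KNOWN_ADDRESSES = {"john doe": "john.doe@companyabc.com"}

def display_name_mismatch(display_name: str, sender_address: str) -> bool:
    known = KNOWN_ADDRESSES.get(display_name.lower())
    # A mismatch only exists if the alleged sender's real address is known.
    return known is not None and sender_address.lower() != known

print(display_name_mismatch("John Doe", "support-xyz@gmail.com"))   # True
print(display_name_mismatch("John Doe", "John.Doe@CompanyABC.com")) # False
```

A production scoring module would combine many such signals (timing, formatting, geographical origin, sign-in plausibility) rather than relying on any single check.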
The scoring module 108 can make use of heuristics, rules, neural networks, or other trained machine learning (ML) algorithms such as decision trees (e.g., gradient-boosted decision trees), logistic regression, and linear regression. Accordingly, the scoring module 108 may output discrete outputs or continuous outputs, such as a probability metric (e.g., specifying the likelihood that an incoming email is malicious), a binary output (e.g., malicious or non-malicious), or a classification (e.g., specifying the type of malicious email).
The reporting module 110 may be responsible for reporting insights derived from the outputs produced by the scoring module 108. For example, the reporting module 110 may provide a summary of the emails contained in an abuse mailbox that have been examined by the scoring module 108 to an electronic device 114. The electronic device 114 may be managed by the employee associated with the account under examination, an individual associated with the enterprise (e.g., a member of the information technology department), or an individual associated with a security service. As another example, the reporting module 110 may transmit a notification to the electronic device 114 responsive to a determination that an email contained in the abuse mailbox is malicious (e.g., based on an output produced by the scoring module 108). As further discussed below, the reporting module 110 can surface this information in a human-readable format for display on an interface accessible via the electronic device 114.
Some embodiments of the threat detection platform 100 also include a training module 104 that operates to train the models employed by the other modules. For example, the training module 104 may train the models applied by the scoring module 108 to the email data, message data, sign-in data, and mail filter data by feeding training data into those models. The training data could include emails that have been labeled as malicious or non-malicious, policies related to attributes of emails (e.g., specifying that emails originating from certain domains should not be considered malicious), etc. The training data may be employee- or enterprise-specific so that the model(s) are able to perform personalized analysis. In some embodiments, the training data ingested by the model(s) includes emails that are known to be representative of malicious emails sent as part of an attack campaign. These emails may have been labeled as such during a training process, or these emails may have been labeled as such by other employees.
Overview of Abuse Mailbox
To facilitate the discovery of email scams, employees of an enterprise may be instructed to forward suspicious emails to an address associated with a mailbox. This mailbox may be referred to as an “abuse mailbox” or “phishing mailbox.” In some embodiments, the abuse mailbox is associated with an address with which only addresses corresponding to a domain associated with the enterprise are permitted to communicate. As further discussed below, a threat detection platform may monitor the mailbox and then examine emails forwarded thereto by employees.
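The domain restriction described above can be sketched with a short check. The enterprise domain used here is assumed for illustration, and the function name is hypothetical; the description does not specify how the restriction is enforced.

```python
# Hypothetical check that only addresses on the enterprise's own domain
# may forward emails to the abuse mailbox.
ENTERPRISE_DOMAIN = "companyabc.com"  # assumed domain for illustration

def may_forward_to_abuse_mailbox(sender_address: str) -> bool:
    # rpartition splits on the last "@", so the domain is the final piece.
    _, _, domain = sender_address.lower().rpartition("@")
    return domain == ENTERPRISE_DOMAIN

print(may_forward_to_abuse_mailbox("jane.roe@companyabc.com"))  # True
print(may_forward_to_abuse_mailbox("attacker@gmail.com"))       # False
```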
Assume, for example, that the threat detection platform discovers that an email is located in the abuse mailbox. In such a situation, the threat detection platform may apply one or more models to the email and then determine, based on outputs produced by those models, whether the email is representative of a threat to the security of the enterprise. The threat detection platform can then take several steps to address the threat. For example, the threat detection platform may examine inboxes associated with other employees to determine whether the email was delivered as part of an email scam. If the threat detection platform discovers additional instances of the email, then those additional instances can be removed from the corresponding inboxes. Additionally or alternatively, the threat detection platform may generate a notification regarding the email. This notification may be sent to, for example, the employee responsible for forwarding the email to the abuse mailbox or a security professional employed by the enterprise or a security service.
In a first case, a first user discovers that an employee has reported an email by forwarding it to the abuse mailbox. The first user visits the abuse mailbox and then identifies an appropriate remediation action for addressing a threat posed by the email. Thus, the first user may specify, through the threat detection platform, how to address the threat posed by the email. Note that while the first user may be described as accessing the abuse mailbox, the first user may actually access an interface that summarizes the content of the abuse mailbox rather than the abuse mailbox itself.
In a second case, a second user receives a notification from the threat detection platform that serves as an alert that an email contained in the abuse mailbox was determined to be malicious and then remediated. The second user can then complete the remainder of the workflow, as appropriate.
In a third case, a third user visits the abuse mailbox to verify that a campaign of malicious emails has been appropriately remediated. As discussed above, when in an “active mode,” the threat detection platform may automatically discover and then remediate email scams. For example, upon determining that an email forwarded by an employee to the abuse mailbox is malicious, the threat detection platform can examine the inboxes of other employees to determine whether the email was delivered as part of an email scam. The term “email scam,” as used herein, refers to a coordinated campaign in which multiple emails—usually similar or identical in content—are delivered to multiple recipients (e.g., multiple employees of a single enterprise). If the threat detection platform discovers additional instances of the email, those additional instances can be moved into a junk or deleted folder, or permanently deleted.
As shown in
From a technical standpoint, the abuse mailbox in combination with the threat detection platform can be said to:
First, the abuse mailbox can serve as a “queue” in which emails deemed suspicious by employees can be held for examination. Generally, the threat detection platform (and, more specifically, the scoring module) is configured to process emails in a continual manner, for example, as those emails arrive in the abuse mailbox. Such an approach allows the emails to be processed within a reasonable amount of time (e.g., within several hours). However, in some embodiments, the threat detection platform is configured to process emails in an ad hoc manner. In such embodiments, the threat detection platform may retrieve and/or examine emails in the abuse mailbox responsive to receiving input indicative of a request to do so (e.g., from a user via an interface).
Second, by examining emails contained in the abuse mailbox, the threat detection platform can autonomously discover emails that are identical or similar to reported emails. This allows the threat detection platform to more easily discover campaigns and then take appropriate remediation actions. Assume, for example, that the threat detection platform determines an email contained in the abuse mailbox is malicious. In this situation, the threat detection platform can examine inboxes in an effort to search for other emails that are identical or similar in terms of content, attributes, etc. The threat detection platform can then define these emails as being part of a campaign by programmatically linking them together. Moreover, the threat detection platform can take appropriate remediation actions if those emails are deemed to be malicious. For example, the threat detection platform may permanently delete those emails so as to prevent any further interaction with the campaign. As another example, the threat detection platform may move those emails into junk folders in the respective inboxes responsive to determining that the emails are representative of spam rather than a malicious campaign.
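One simple way to link similar emails into a campaign is a token-overlap (Jaccard) similarity over email bodies. This sketch is illustrative only: the description does not specify the similarity measure used, the 0.8 threshold is assumed, and a production system would also compare attributes, links, and attachments rather than body text alone.

```python
# Hypothetical campaign linking via Jaccard similarity of word sets.

def jaccard(a: str, b: str) -> float:
    """Similarity of two texts as overlap of their word sets (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def link_campaign(reported_body: str, inbox_bodies: list[str],
                  threshold: float = 0.8) -> list[int]:
    """Return indices of inbox emails similar enough to form one campaign."""
    return [i for i, body in enumerate(inbox_bodies)
            if jaccard(reported_body, body) >= threshold]

reported = "Please wire the payment today using the attached instructions"
inboxes = [
    "Please wire the payment today using the attached instructions",
    "Please wire the payment today using these attached instructions",
    "Lunch on Friday?",
]
print(link_campaign(reported, inboxes))  # the first two emails are linked
```

The near-duplicate wording in the first two inbox emails clears the threshold, so they would be programmatically linked as one campaign, while the unrelated email is left alone.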
Third, the abuse mailbox may permit users to manually search, move, or remediate emails as needed. For example, a user may browse the emails contained in the abuse mailbox through an interface generated by a computer program. The computer program may be representative of an aspect of the threat detection platform (e.g., reporting module 110 of
In
In some embodiments, notifications are sent in near real time. That is, a notification may be generated proximate to the time at which the event prompting the notification occurs. For example, the user may be alerted immediately after an email arrives in the abuse mailbox, immediately after a determination has been made regarding the risk posed by the email, or immediately after a remediation action has been performed. In other embodiments, notifications are sent on a periodic basis. For example, the threat detection platform may queue notifications for delivery at a predetermined time (e.g., every morning at 7 AM), or the threat detection platform may queue notifications for delivery until a predetermined number of notifications have been generated (e.g., 3 or 5).
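The count-based queuing variant can be sketched as follows. The NotificationQueue class is hypothetical and shows only the "hold until a predetermined number accumulates" behavior; the time-based variant and the actual delivery mechanism are omitted.

```python
# Hypothetical notification queue: hold notifications until a
# predetermined count is reached, then deliver them as one batch.

class NotificationQueue:
    def __init__(self, batch_size: int) -> None:
        self.batch_size = batch_size
        self.pending: list[str] = []       # notifications awaiting delivery
        self.delivered: list[list[str]] = []  # batches already sent

    def add(self, notification: str) -> None:
        self.pending.append(notification)
        # Flush once the predetermined number of notifications accumulates.
        if len(self.pending) >= self.batch_size:
            self.delivered.append(self.pending)
            self.pending = []

queue = NotificationQueue(batch_size=3)
for n in ["email reported", "email deemed malicious", "email remediated"]:
    queue.add(n)
print(queue.delivered)  # one batch of three notifications
print(queue.pending)    # nothing left waiting
```

Near-real-time delivery is the degenerate case of this design with a batch size of one.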
User Workflow
The technology described herein addresses these drawbacks. As discussed above, the threat detection platform may have additional insight into the digital conduct of employees. Because the threat detection platform is not solely looking at the content and characteristics of inbound external emails, the threat detection platform is able to establish whether malicious emails were forwarded within an enterprise network and what actions were taken upon receipt of those malicious emails.
While each of these phases can be automated, there are two main embodiments: one in which the investigation process is fully automated and one in which the investigation process is partially automated. These approaches are shown in
In some embodiments, users may be interested in further integrating the threat detection platform into the suite of security tools employed to protect an enterprise against threats. As an example, a user may want to allow the threat detection platform to access Proofpoint® APIs in order to provide enriched detection and remediation efficacy. By integrating with Proofpoint® Targeted Attack Protection (TAP), for example, the user can provide the threat detection platform an additional source from which to ingest information regarding threats. The threat detection platform can orchestrate and then perform remediation actions to address malicious emails, further reduce the amount of manual labor needed to complete the investigation process, and improve detection efficacy by learning about malicious emails that are caught by Proofpoint® TAP. As shown in
The threat detection platform can then automatically integrate with Proofpoint® TAP through its Security Information and Event Management (SIEM) API in order to periodically obtain emails that are representative of threats. As shown in
Methodologies for Discovering Malicious Emails and Scams
Initially, a threat detection platform can determine that an email is contained in the abuse mailbox to which employees are able to forward emails deemed suspicious for analysis (step 1001). In some embodiments, the threat detection platform continually monitors the contents of the abuse mailbox or periodically checks the contents of the abuse mailbox. In other embodiments, the threat detection platform is notified (e.g., by a controller) when emails are received by the abuse mailbox.
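The first of these monitoring approaches might look as follows, sketched in Python. The function and parameter names are illustrative, not drawn from an actual implementation; in a deployment the loop would run indefinitely, and the cycle cap exists only so the sketch can terminate.

```python
import time

def monitor_abuse_mailbox(fetch_unprocessed, analyze, interval=60.0, max_cycles=None):
    """Periodically check the abuse mailbox and hand each newly forwarded
    email to the analysis pipeline (step 1001)."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for email in fetch_unprocessed():
            analyze(email)
        cycles += 1
        if max_cycles is not None and cycles >= max_cycles:
            break
        time.sleep(interval)
```

The notification-driven variant would replace the polling loop with a callback registered with the mail provider or an intermediary controller.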
Then, the threat detection platform can determine whether the email is representative of a threat to the enterprise by applying one or more models thereto. In some situations, the threat detection platform will establish that the email is representative of a threat based on the output(s) produced by the model(s) (step 1002). Thereafter, the threat detection platform can generate a record of the threat by populating a data structure with information related to the email (step 1003). This information can include the sender identity, sender domain, sender address, geographical origin, subject, threat type, target, or any combination thereof. Additional examples of such information can be seen in the interfaces shown in
The record of the threat can be used in several different ways to protect the enterprise. As an example, the threat detection platform may protect the enterprise against the threat (step 1004) by (i) applying the data structure to inboxes of the employees to determine whether the email was delivered as part of a campaign and/or (ii) applying the data structure as a filter to inbound emails addressed to the employees. Thus, the record may be used to remediate past threats and protect against future threats. If the threat detection platform discovers, in the inboxes of the employees, a series of emails that are similar or identical to the email at issue, then the threat detection platform can define a campaign by programmatically associating these email messages with one another.
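The record and its dual use could be sketched as follows in Python. The field names mirror the information enumerated above, but they, along with the matching rule of sender address, or sender domain plus subject, are illustrative assumptions rather than the platform's actual schema or logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThreatRecord:
    sender_identity: str
    sender_domain: str
    sender_address: str
    subject: str
    threat_type: str                 # e.g., "phishing" or "BEC"
    target: str                      # employee or department targeted
    geographical_origin: Optional[str] = None

def record_threat(email, threat_type, target):
    """Populate a record from an email deemed malicious (step 1003)."""
    return ThreatRecord(
        sender_identity=email.get("from_name", ""),
        sender_domain=email["from_address"].split("@")[-1],
        sender_address=email["from_address"],
        subject=email["subject"],
        threat_type=threat_type,
        target=target,
        geographical_origin=email.get("origin"),
    )

def matches(record, email):
    """True if an email matches the record on sender address, or on
    sender domain plus subject line."""
    if email["from_address"] == record.sender_address:
        return True
    return (email["from_address"].split("@")[-1] == record.sender_domain
            and email["subject"] == record.subject)

def filter_inbound(records, inbound):
    """Partition inbound emails into (delivered, quarantined) by applying
    the stored records as a filter (step 1004)."""
    delivered, quarantined = [], []
    for email in inbound:
        (quarantined if any(matches(r, email) for r in records) else delivered).append(email)
    return delivered, quarantined
```

Applying `matches` across existing inboxes supports remediation of past threats, while `filter_inbound` protects against future deliveries of the same campaign.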
Note that, in some embodiments, the abuse mailbox is designed to be browsable and searchable. Thus, the threat detection platform may generate an interface through which a user is able to examine the contents of the abuse mailbox, as well as any determinations made as to the risk of the emails contained therein. If the threat detection platform receives input indicative of a query that specifies a criterion, the threat detection platform may search the abuse mailbox to identify emails that satisfy the criterion and then present those emails for display on an interface. Similarly, the threat detection platform may use the criterion to search the database of records corresponding to emails that were deemed malicious. The criterion could be, for example, a sender identity, sender domain, sender address, threat type, target, or timeframe.
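A minimal sketch of criterion-based search over the abuse mailbox or the record database, in Python; the dictionary-based representation and exact-match semantics are simplifying assumptions (a timeframe criterion, for instance, would instead require a range comparison).

```python
def search_abuse_mailbox(emails, **criteria):
    """Return emails whose fields satisfy every supplied criterion,
    e.g., sender_domain="evil.test" or threat_type="phishing"."""
    return [email for email in emails
            if all(email.get(field) == value for field, value in criteria.items())]
```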
Thereafter, the threat detection platform can establish that the first email message was delivered as part of a BEC campaign (step 1102). For example, the threat detection platform may apply a first model to the first email to produce a first output that indicates a likelihood of the first email being malicious. This first model may consider features such as the header, body, sender identity, sender address, geographical origin, time of transmission, etc. If the first output indicates that the first email is malicious, then the threat detection platform may apply a second model to the first email to produce a second output that indicates the type of malicious email.
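The two-stage application of models could be structured as below, sketched in Python with stub models standing in for the actual trained models; the threshold value and return shape are illustrative assumptions.

```python
def classify_email(email, maliciousness_model, type_model, threshold=0.5):
    """Two-stage analysis: the first model scores the likelihood that the
    email is malicious; only if that score crosses the threshold is the
    second model applied to label the type of malicious email."""
    score = maliciousness_model(email)
    if score < threshold:
        return {"malicious": False, "score": score, "threat_type": None}
    return {"malicious": True, "score": score, "threat_type": type_model(email)}
```

Gating the second model on the first model's output avoids spending a type classification on emails already deemed benign.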
The threat detection platform can then examine an inbox associated with a second employee of the enterprise to identify a second email that was delivered as part of the BEC campaign (step 1103). The second employee may be one of multiple employees whose inboxes are searched by the threat detection platform. For example, the threat detection platform could examine the inboxes of all employees of the enterprise, or the threat detection platform could examine the inboxes of only some employees of the enterprise (e.g., those included in a group or department that appears to have been targeted).
The threat detection platform can remediate the BEC campaign in an automated manner by extracting the second email from the inbox so as to prevent further interaction with the second email by the second employee (step 1104). As discussed, the threat detection platform may also generate a record of the BEC campaign by populating a data structure with information related to the first and second emails. In such embodiments, the threat detection platform may apply the data structure to inbound emails addressed to employees of the enterprise so as to filter emails related to the BEC campaign prior to receipt by the employees. Thus, the threat detection platform may use information learned about the BEC campaign to remediate past threats and protect against future threats.
In some embodiments, the threat detection platform is further configured to notify a user of the threat posed by the BEC campaign (step 1105). For example, the threat detection platform may extract or derive information regarding the BEC campaign from the first and second emails, generate a report summarizing the threat posed by the BEC campaign that includes the information, and then provide the report to the user. This report may be provided to the user in the form of a visualization component that is rendered on an interface generated by the threat detection platform.
As discussed above, some emails forwarded to the abuse mailbox may ultimately be deemed non-malicious. Assume, for example, that the threat detection platform examines a second email forwarded to the mailbox by a second employee of the enterprise for analysis (step 1204) and then determines that the second email does not pose a threat to the enterprise (step 1205). In such a situation, the threat detection platform may move the second email to the inbox associated with the second employee (step 1206). Thus, the threat detection platform may move the second email to its original (and intended) destination responsive to determining that it is not malicious.
Unless contrary to physical possibility, these steps could be performed in various sequences and combinations. For example, a threat detection platform may be designed to simultaneously monitor multiple abuse mailboxes associated with different enterprises. As another example, a threat detection platform may be programmed to generate notifications for delivery along multiple channels. For instance, upon determining that an email contained in an abuse mailbox is malicious, the threat detection platform may transmit notifications indicating as much via both text and email. Other steps could also be included in some embodiments. Assume, for example, that the threat detection platform determines an email contained in an abuse mailbox is malicious and then discovers that the email originated from an account associated with an employee. In such a situation, all digital activities performed with the account could be scored in order to determine the likelihood that the account is compromised.
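One of many possible schemes for aggregating per-activity risk scores into a compromise likelihood is a weighted average, sketched below in Python. The activity kinds, weights, and aggregation rule are all illustrative assumptions.

```python
def compromise_score(activities, weights=None):
    """Aggregate per-activity risk scores (each in [0, 1]) into a single
    likelihood that the account is compromised."""
    if not activities:
        return 0.0
    weights = weights or {}
    total = weighted = 0.0
    for activity in activities:
        w = weights.get(activity["kind"], 1.0)  # default weight of 1 per kind
        weighted += w * activity["risk"]
        total += w
    return weighted / total
```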
For example, upon establishing that an email forwarded to the abuse mailbox is malicious, the threat detection platform may notify a user who is responsible for monitoring the security of the enterprise. Further information on scoring digital activities can be found in Patent Cooperation Treaty (PCT) Application No. PCT/US2019/67279, titled “Threat Detection Platforms for Detecting, Characterizing, and Remediating Email-Based Threats in Real Time,” which is incorporated by reference herein in its entirety.
Processing System
The processing system 1300 may include one or more central processing units (“processors”) 1302, main memory 1306, non-volatile memory 1310, network adapter 1312 (e.g., a network interface), video display 1318, input/output devices 1320, control device 1322 (e.g., a keyboard or pointing device), drive unit 1324 including a storage medium 1326, and signal generation device 1330 that are communicatively connected to a bus 1316. The bus 1316 is illustrated as an abstraction that represents one or more physical buses or point-to-point connections that are connected by appropriate bridges, adapters, or controllers. The bus 1316, therefore, can include a system bus, a Peripheral Component Interconnect (PCI) bus or PCI-Express bus, a HyperTransport or Industry Standard Architecture (ISA) bus, a Small Computer System Interface (SCSI) bus, a Universal Serial Bus (USB), an IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (also referred to as “Firewire”).
The processing system 1300 may share a processor architecture similar to that of a desktop computer, tablet computer, mobile phone, game console, music player, wearable electronic device (e.g., a watch or fitness tracker), network-connected (“smart”) device (e.g., a television or home assistant device), virtual/augmented reality system (e.g., a head-mounted display), or another electronic device capable of executing a set of instructions (sequential or otherwise) that specify action(s) to be taken by the processing system 1300.
While the main memory 1306, non-volatile memory 1310, and storage medium 1326 are shown to be a single medium, the terms “machine-readable medium” and “storage medium” should be taken to include a single medium or multiple media (e.g., a centralized/distributed database and/or associated caches and servers) that store one or more sets of instructions 1328. The terms “machine-readable medium” and “storage medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the processing system 1300.
In general, the routines executed to implement the embodiments of the disclosure may be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions (collectively referred to as “computer programs”). The computer programs typically comprise one or more instructions (e.g., instructions 1304, 1308, 1328) set at various times in various memory and storage devices in an electronic device. When read and executed by the processors 1302, the instruction(s) cause the processing system 1300 to perform operations to execute elements involving the various aspects of the present disclosure.
Moreover, while embodiments have been described in the context of fully functioning electronic devices, those skilled in the art will appreciate that some aspects of the technology are capable of being distributed as a program product in a variety of forms. The present disclosure applies regardless of the particular type of machine- or computer-readable media used to effect distribution.
Further examples of machine- and computer-readable media include recordable-type media, such as volatile and non-volatile memory devices 1310, removable disks, hard disk drives, and optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMS) and Digital Versatile Disks (DVDs)), and transmission-type media, such as digital and analog communication links.
The network adapter 1312 enables the processing system 1300 to mediate data in a network 1314 with an entity that is external to the processing system 1300 through any communication protocol supported by the processing system 1300 and the external entity. The network adapter 1312 can include a network adapter card, a wireless network interface card, a router, an access point, a wireless router, a switch, a multilayer switch, a protocol converter, a gateway, a bridge, a bridge router, a hub, a digital media receiver, a repeater, or any combination thereof.
The network adapter 1312 may include a firewall that governs and/or manages permission to access/proxy data in a network. The firewall may also track varying levels of trust between different machines and/or applications. The firewall can be any number of modules having any combination of hardware, firmware, or software components able to enforce a predetermined set of access rights between a set of machines and applications, machines and machines, or applications and applications (e.g., to regulate the flow of traffic and resource sharing between these entities). The firewall may additionally manage and/or have access to an access control list that details permissions including the access and operation rights of an object by an individual, a machine, or an application, and the circumstances under which the permission rights stand.
Remarks
The foregoing description of various embodiments of the claimed subject matter has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the claimed subject matter to the precise forms disclosed. Many modifications and variations will be apparent to one skilled in the art. Embodiments were chosen and described in order to best describe the principles of the invention and its practical applications, thereby enabling those skilled in the relevant art to understand the claimed subject matter, the various embodiments, and the various modifications that are suited to the particular uses contemplated.
Although the Detailed Description describes certain embodiments and the best mode contemplated, the technology can be practiced in many ways no matter how detailed the Detailed Description appears. Embodiments may vary considerably in their implementation details, while still being encompassed by the specification. Particular terminology used when describing certain features or aspects of various embodiments should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the technology with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the technology to the specific embodiments disclosed in the specification, unless those terms are explicitly defined herein. Accordingly, the actual scope of the technology encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the embodiments.
The language used in the specification has been principally selected for readability and instructional purposes. It may not have been selected to delineate or circumscribe the subject matter. It is therefore intended that the scope of the technology be limited not by this Detailed Description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of various embodiments is intended to be illustrative, but not limiting, of the scope of the technology as set forth in the following claims.
This application is a continuation of U.S. patent application Ser. No. 17/155,843, titled ABUSE MAILBOX FOR FACILITATING DISCOVERY, INVESTIGATION, AND ANALYSIS OF EMAIL-BASED THREATS and filed on Jan. 22, 2021, which claims priority to U.S. Provisional Application No. 62/984,098, titled ABUSE MAILBOX FOR FACILITATING DISCOVERY INVESTIGATION AND ANALYSIS OF EMAIL-BASED THREATS and filed on Mar. 2, 2020, each of which is incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5999932 | Paul | Dec 1999 | A |
6023723 | McCormick | Feb 2000 | A |
6088717 | Reed | Jul 2000 | A |
7263506 | Lee | Aug 2007 | B2 |
7451487 | Oliver | Nov 2008 | B2 |
7610344 | Mehr | Oct 2009 | B2 |
7953814 | Chasin | May 2011 | B1 |
8112484 | Sharma | Feb 2012 | B1 |
8244532 | Begeja | Aug 2012 | B1 |
8566938 | Prakash | Oct 2013 | B1 |
8819819 | Johnston | Aug 2014 | B1 |
8935788 | Diao | Jan 2015 | B1 |
9009824 | Chen | Apr 2015 | B1 |
9154514 | Prakash | Oct 2015 | B1 |
9213827 | Li | Dec 2015 | B2 |
9245115 | Jakobsson | Jan 2016 | B1 |
9245225 | Winn | Jan 2016 | B2 |
9264418 | Crosley | Feb 2016 | B1 |
9348981 | Hearn | May 2016 | B1 |
9473437 | Jakobsson | Oct 2016 | B1 |
9516053 | Muddu | Dec 2016 | B1 |
9537880 | Jones | Jan 2017 | B1 |
9571512 | Ray | Feb 2017 | B2 |
9686308 | Srivastava | Jun 2017 | B1 |
9756007 | Stringhini | Sep 2017 | B1 |
9774626 | Himler | Sep 2017 | B1 |
9781152 | Mistratov | Oct 2017 | B1 |
9847973 | Jakobsson | Dec 2017 | B1 |
9940394 | Grant | Apr 2018 | B1 |
9946789 | Li | Apr 2018 | B1 |
9954805 | Nigam | Apr 2018 | B2 |
9961096 | Pierce | May 2018 | B1 |
9967268 | Hewitt | May 2018 | B1 |
10015182 | Shintre | Jul 2018 | B1 |
10044745 | Jones | Aug 2018 | B1 |
10091312 | Khanwalkar | Oct 2018 | B1 |
10104029 | Chambers | Oct 2018 | B1 |
10129194 | Jakobsson | Nov 2018 | B1 |
10129288 | Xie | Nov 2018 | B1 |
10243989 | Ding | Mar 2019 | B1 |
10250624 | Mixer | Apr 2019 | B2 |
10277628 | Jakobsson | Apr 2019 | B1 |
10362057 | Wu | Jul 2019 | B1 |
10397272 | Bruss | Aug 2019 | B1 |
10419468 | Glatfelter | Sep 2019 | B2 |
10523609 | Subramanian | Dec 2019 | B1 |
10601865 | Mesdaq | Mar 2020 | B1 |
10616272 | Chambers | Apr 2020 | B2 |
10673880 | Pratt | Jun 2020 | B1 |
10721195 | Jakobsson | Jul 2020 | B2 |
10834127 | Yeh | Nov 2020 | B1 |
10911489 | Chechik | Feb 2021 | B1 |
10972483 | Thomas | Apr 2021 | B2 |
10972485 | Ladnai | Apr 2021 | B2 |
11019076 | Jakobsson | May 2021 | B1 |
11252189 | Reiser | Feb 2022 | B2 |
11494421 | Ghafourifar | Nov 2022 | B1 |
20020002520 | Gatto | Jan 2002 | A1 |
20020116463 | Hart | Aug 2002 | A1 |
20030204569 | Andrews | Oct 2003 | A1 |
20040030913 | Liang | Feb 2004 | A1 |
20040117450 | Campbell | Jun 2004 | A1 |
20040128355 | Chao | Jul 2004 | A1 |
20040215977 | Goodman | Oct 2004 | A1 |
20040260922 | Goodman | Dec 2004 | A1 |
20050039019 | Delany | Feb 2005 | A1 |
20050187934 | Motsinger | Aug 2005 | A1 |
20050198518 | Kogan | Sep 2005 | A1 |
20060036698 | Hebert | Feb 2006 | A1 |
20060053203 | Mijatovic | Mar 2006 | A1 |
20060191012 | Banzhof | Aug 2006 | A1 |
20060253581 | Dixon | Nov 2006 | A1 |
20070074169 | Chess | Mar 2007 | A1 |
20070276851 | Friedlander | Nov 2007 | A1 |
20080005249 | Hart | Jan 2008 | A1 |
20080086532 | Cunningham | Apr 2008 | A1 |
20080114684 | Foster | May 2008 | A1 |
20080201401 | Pugh | Aug 2008 | A1 |
20090037350 | Rudat | Feb 2009 | A1 |
20090132490 | Okraglik | May 2009 | A1 |
20100115040 | Sargent | May 2010 | A1 |
20100211641 | Yih | Aug 2010 | A1 |
20100318614 | Sager | Dec 2010 | A1 |
20110173142 | Dasgupta | Jul 2011 | A1 |
20110179126 | Wetherell | Jul 2011 | A1 |
20110213869 | Korsunsky | Sep 2011 | A1 |
20110214157 | Korsunsky | Sep 2011 | A1 |
20110231510 | Korsunsky | Sep 2011 | A1 |
20110231564 | Korsunsky | Sep 2011 | A1 |
20110238855 | Korsunsky | Sep 2011 | A1 |
20120028606 | Bobotek | Feb 2012 | A1 |
20120110672 | Judge | May 2012 | A1 |
20120137367 | Dupont | May 2012 | A1 |
20120233662 | Scott-Cowley | Sep 2012 | A1 |
20120278887 | Vitaldevara | Nov 2012 | A1 |
20120290712 | Walter | Nov 2012 | A1 |
20120297484 | Srivastava | Nov 2012 | A1 |
20130041955 | Chasin | Feb 2013 | A1 |
20130086180 | Midgen | Apr 2013 | A1 |
20130086261 | Lim | Apr 2013 | A1 |
20130097709 | Basavapatna | Apr 2013 | A1 |
20130167207 | Davis | Jun 2013 | A1 |
20130191759 | Bhogal | Jul 2013 | A1 |
20140013441 | Hencke | Jan 2014 | A1 |
20140032589 | Styler | Jan 2014 | A1 |
20140181223 | Homsany | Jun 2014 | A1 |
20140325662 | Foster | Oct 2014 | A1 |
20140365303 | Vaithilingam | Dec 2014 | A1 |
20140379825 | Speier | Dec 2014 | A1 |
20140380478 | Canning | Dec 2014 | A1 |
20150026027 | Priess | Jan 2015 | A1 |
20150128274 | Giokas | May 2015 | A1 |
20150143456 | Raleigh | May 2015 | A1 |
20150161609 | Christner | Jun 2015 | A1 |
20150161611 | Duke | Jun 2015 | A1 |
20150228004 | Bednarek | Aug 2015 | A1 |
20150234831 | Prasanna Kumar | Aug 2015 | A1 |
20150237068 | Sandke | Aug 2015 | A1 |
20150295942 | Tao | Oct 2015 | A1 |
20150295945 | Canzanese, Jr. | Oct 2015 | A1 |
20150319157 | Sherman | Nov 2015 | A1 |
20150339477 | Abrams | Nov 2015 | A1 |
20160014151 | Prakash | Jan 2016 | A1 |
20160036829 | Sadeh-Koniecpol | Feb 2016 | A1 |
20160057167 | Bach | Feb 2016 | A1 |
20160063277 | Vu | Mar 2016 | A1 |
20160156654 | Chasin | Jun 2016 | A1 |
20160227367 | Alsehly | Aug 2016 | A1 |
20160253598 | Yamada | Sep 2016 | A1 |
20160262128 | Hailpern | Sep 2016 | A1 |
20160301705 | Higbee | Oct 2016 | A1 |
20160306812 | McHenry | Oct 2016 | A1 |
20160321243 | Walia | Nov 2016 | A1 |
20160328526 | Park | Nov 2016 | A1 |
20160344770 | Verma | Nov 2016 | A1 |
20160380936 | Gunasekara | Dec 2016 | A1 |
20170041296 | Ford | Feb 2017 | A1 |
20170048273 | Bach | Feb 2017 | A1 |
20170098219 | Peram | Apr 2017 | A1 |
20170111506 | Strong | Apr 2017 | A1 |
20170186112 | Polapala | Jun 2017 | A1 |
20170214701 | Hasan | Jul 2017 | A1 |
20170222960 | Agarwal | Aug 2017 | A1 |
20170223046 | Singh | Aug 2017 | A1 |
20170230323 | Jakobsson | Aug 2017 | A1 |
20170230403 | Kennedy | Aug 2017 | A1 |
20170237754 | Todorovic | Aug 2017 | A1 |
20170237776 | Higbee | Aug 2017 | A1 |
20170251006 | Larosa | Aug 2017 | A1 |
20170289191 | Thioux | Oct 2017 | A1 |
20170324767 | Srivastava | Nov 2017 | A1 |
20170346853 | Wyatt | Nov 2017 | A1 |
20180026926 | Nigam | Jan 2018 | A1 |
20180027006 | Zimmermann | Jan 2018 | A1 |
20180084003 | Uriel | Mar 2018 | A1 |
20180084013 | Dalton | Mar 2018 | A1 |
20180091453 | Jakobsson | Mar 2018 | A1 |
20180091476 | Jakobsson | Mar 2018 | A1 |
20180159808 | Pal | Jun 2018 | A1 |
20180189347 | Ghafourifar | Jul 2018 | A1 |
20180196942 | Kashyap | Jul 2018 | A1 |
20180219888 | Apostolopoulos | Aug 2018 | A1 |
20180227324 | Chambers | Aug 2018 | A1 |
20180295146 | Kovega | Oct 2018 | A1 |
20180324297 | Kent | Nov 2018 | A1 |
20180375814 | Hart | Dec 2018 | A1 |
20190014143 | Syme | Jan 2019 | A1 |
20190026461 | Cidon | Jan 2019 | A1 |
20190028509 | Cidon | Jan 2019 | A1 |
20190052655 | Benishti | Feb 2019 | A1 |
20190065748 | Foster | Feb 2019 | A1 |
20190068616 | Woods | Feb 2019 | A1 |
20190081983 | Teal | Mar 2019 | A1 |
20190087428 | Crudele | Mar 2019 | A1 |
20190089711 | Faulkner | Mar 2019 | A1 |
20190104154 | Kumar | Apr 2019 | A1 |
20190109863 | Traore | Apr 2019 | A1 |
20190141183 | Chandrasekaran | May 2019 | A1 |
20190166161 | Anand | May 2019 | A1 |
20190166162 | Anand | May 2019 | A1 |
20190190929 | Thomas | Jun 2019 | A1 |
20190190936 | Thomas | Jun 2019 | A1 |
20190199745 | Jakobsson | Jun 2019 | A1 |
20190205511 | Zhan | Jul 2019 | A1 |
20190222606 | Schweighauser | Jul 2019 | A1 |
20190238571 | Adir | Aug 2019 | A1 |
20190260780 | Dunn | Aug 2019 | A1 |
20190311121 | Martin | Oct 2019 | A1 |
20190319905 | Baggett | Oct 2019 | A1 |
20190319987 | Levy | Oct 2019 | A1 |
20190349400 | Bruss | Nov 2019 | A1 |
20190384911 | Caspi | Dec 2019 | A1 |
20200007502 | Everton | Jan 2020 | A1 |
20200021609 | Kuppanna | Jan 2020 | A1 |
20200044851 | Everson | Feb 2020 | A1 |
20200053111 | Jakobsson | Feb 2020 | A1 |
20200053120 | Wilcox | Feb 2020 | A1 |
20200068031 | Kursun | Feb 2020 | A1 |
20200074078 | Saxe | Mar 2020 | A1 |
20200076825 | Vallur | Mar 2020 | A1 |
20200125725 | Petersen | Apr 2020 | A1 |
20200127962 | Chuhadar | Apr 2020 | A1 |
20200162483 | Farhady | May 2020 | A1 |
20200204572 | Jeyakumar | Jun 2020 | A1 |
20200287936 | Nguyen | Sep 2020 | A1 |
20200344251 | Jeyakumar | Oct 2020 | A1 |
20200358804 | Crabtree | Nov 2020 | A1 |
20200374251 | Warshaw | Nov 2020 | A1 |
20200389486 | Jeyakumar | Dec 2020 | A1 |
20200396190 | Pickman | Dec 2020 | A1 |
20200396258 | Jeyakumar | Dec 2020 | A1 |
20200412767 | Crabtree | Dec 2020 | A1 |
20210021612 | Higbee | Jan 2021 | A1 |
20210058395 | Jakobsson | Feb 2021 | A1 |
20210091962 | Finke | Mar 2021 | A1 |
20210092154 | Kumar | Mar 2021 | A1 |
20210168161 | Dunn | Jun 2021 | A1 |
20210240836 | Hazony | Aug 2021 | A1 |
20210272066 | Bratman | Sep 2021 | A1 |
20210295179 | Eyal Altman | Sep 2021 | A1 |
20210329035 | Jeyakumar | Oct 2021 | A1 |
20210336983 | Lee | Oct 2021 | A1 |
20210360027 | Boyer | Nov 2021 | A1 |
20210374679 | Bratman | Dec 2021 | A1 |
20210374680 | Bratman | Dec 2021 | A1 |
20220021700 | Devlin | Jan 2022 | A1 |
Number | Date | Country |
---|---|---|
107315954 | Nov 2017 | CN |
Entry |
---|
Barngrover, Adam, “Vendor Access Management with IGA”, Saviynt Inc. Apr. 24, 2019 (Apr. 24, 2019) Retrieved on Apr. 17, 2021 (Apr. 17, 2021) from <https://saviynt.com/vendor-access-management-with-iga/> entire document, 2 pp. |
Information Security Media Group, “Multi-Channel Fraud: A Defense Plan”, Retrieved on Apr. 18, 2021 (Apr. 18, 2021) from <https://www.bankinfosecurity.com/interviews/multi-channel-fraud-defense-plan-i-1799>, Feb. 20, 2013, 9 pages. |
International Search Report and Written Opinion dated Apr. 24, 2020 of PCT/US2019/067279 (14 pages). |
Mahajan, et al., “Finding HTML Presentation Failures Using Image Comparison Techniques”, ASE'14, pp. 91-98 (Year: 2014). |
Mont, Marco Casassa, “Towards accountable management of identity and privacy: Sticky policies and enforceable tracing services”, 14th International Workshop on Database and Expert Systems Applications, 2003. Proceedings. IEEE, 2003. Mar. 19, 2003 (Mar. 19, 2003), Retrieved on Apr. 17, 2021 (Apr. 17, 2021) from <https://ieeexplore.ieee.org/abstract/document/1232051> entire document, Mar. 19, 2003, 17 pp. |
Proofpoint (Proofpoint Closed-Loop Email Analysis and Response, Aug. 2018, 2 pages) (Year: 2018). |
Number | Date | Country | |
---|---|---|---|
20220255961 A1 | Aug 2022 | US |
Number | Date | Country | |
---|---|---|---|
62984098 | Mar 2020 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 17155843 | Jan 2021 | US |
Child | 17550848 | US |