The present disclosure relates to computer systems and methods, and more particularly to security systems and methods that use an automated bot with a natural language interface to improve response times for security alert response and remediation.
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent the work is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
Computer networks are frequently attacked by hackers attempting to destroy, expose, alter, disable, steal or gain unauthorized access to or make unauthorized use of an asset. Some computer networks detect threats using a set of rules or machine learning to identify unusual activity and generate security alerts. The security alerts are forwarded to one or more security analysts for further investigation and diagnosis.
It can be difficult to identify whether a security alert is genuine or a false positive since there is a large variety of attack strategies. Genuine threats should be investigated further and escalated, while false positives should be closed as quickly as possible. For example, a denial of service (DOS) attack attempts to make a resource, such as a web server, unavailable to users. Brute force attacks attempt to gain access to a computer network using a trial-and-error approach to guess a password corresponding to a username. Browser-based attacks target end users who are browsing the Internet. The browser-based attacks may encourage the end user to unwittingly download malware disguised as fake software updates, e-mail attachments or applications.
Secure sockets layer (SSL) attacks attempt to intercept data that is sent over an encrypted connection. A botnet attack uses a group of hijacked computers that are controlled remotely by one or more malicious actors. A backdoor attack bypasses normal authentication processes to allow remote access at will. Backdoors can be present in software by design, enabled by other programs or created by altering an existing program.
The set of rules or machine learning algorithms makes detection guesses that are not perfect. In other words, a significant number of the security alerts are false positives. All of the security alerts must be manually checked by the security analysts. When a security alert is received, the security analyst typically reviews visualizations such as bar charts, directed graphs, etc. on a dashboard. The security analyst gathers and attaches contextual information to the security alert. The security analyst writes queries and performs root cause analysis to assess whether the security alert is genuine or a false positive.
In many cases, the security alert is a false positive. Nonetheless, the response steps performed by the security analyst are time consuming. Investigations of false positive security alerts cause organizations to waste a lot of money. Apart from the time and effort that is wasted, a more serious consequence is that the false positives divert the security analyst resources from pursuing security alerts that are genuine.
A computing system for generating automated responses to improve response times for diagnosing security alerts includes a processor and a memory. An application is stored in the memory and executed by the processor. The application includes instructions for receiving a text phrase relating to a security alert; using a natural language interface with a natural language model to select one of a plurality of intents corresponding to the text phrase; and mapping the selected intent to one of a plurality of actions. Each of the plurality of actions includes at least one of a static response, a dynamic response, and a task. The application includes instructions for sending a response based on the at least one of the static response, the dynamic response, and the task.
In other features, the application receives the text phrase from one of an e-mail application or a chat application. The application sends the response using the e-mail application or the chat application. The natural language model is configured to generate one or more probabilities that the text phrase corresponds to one or more of the plurality of intents, respectively; select one of the plurality of intents corresponding to a highest one of the probabilities as a selected intent; compare the probability of the selected intent to a predetermined threshold; output the selected intent if the probability of the selected intent is greater than the predetermined threshold; and not output the selected intent if the probability of the selected intent is less than or equal to the predetermined threshold.
In other features, the action includes the task, and the application includes instructions to perform the task including instructions for generating a query based on the text phrase; sending a request including the query to a security server; and including a result of the query from the security server in the response.
In other features, the action includes the task, and the application includes instructions to perform the task including instructions for generating a query based on the text phrase; sending a request including the query to a threat intelligence server; and including a result of the query from the threat intelligence server in the response.
In other features, the action includes turning on multi-factor authentication, and the application includes instructions for turning on multi-factor authentication for a remote computer based on the selected intent.
In other features, the action includes forwarding one of a suspicious file or a suspicious uniform resource locator (URL) link to a file to a remote server. The application includes instructions for forwarding the one of the suspicious file or the suspicious URL link to the file to the remote server.
In other features, the application includes instructions for receiving a response from the remote server indicating whether or not the one of the suspicious file or the suspicious URL link is safe and for indicating whether or not the one of the suspicious file or the suspicious URL link is safe in the response.
In other features, the selected intent corresponds to a request to close a security alert due to a false positive, the application includes instructions for sending a code to a cellular phone and the application includes instructions for closing the security alert if the code is received.
In other features, the natural language interface creates the natural language model in response to training using text phrase and intent pairs.
A method for generating automated responses to improve response times for diagnosing security alerts includes receiving a text phrase at a security bot server relating to a security alert from one of an e-mail application and a chat application; in response to receiving the text phrase, using a natural language interface of the security bot server to execute a natural language model to select one of a plurality of intents corresponding to the text phrase as a selected intent; and, in response to identification of the selected intent, mapping the selected intent to one of a plurality of actions using the security bot server. Each of the plurality of actions includes at least one of a static response, a dynamic response, and a task. The method includes sending a response based on the one of the plurality of actions using the security bot server via the one of the e-mail application and the chat application.
In other features, using the natural language interface of the security bot server to execute the natural language model further comprises generating one or more probabilities that the text phrase corresponds to one or more of the plurality of intents, respectively; selecting one of the plurality of intents corresponding to a highest one of the probabilities as the selected intent; comparing the probability of the selected intent to a predetermined threshold; outputting the selected intent if the probability of the selected intent is greater than the predetermined threshold; and not outputting the selected intent if the probability of the selected intent is less than or equal to the predetermined threshold.
In other features, the one of the plurality of actions includes the task and the method further includes generating a query based on the text phrase using the security bot server; sending a request including the query using the security bot server to a security server; and including a result of the query from the security server in the response. The one of the plurality of actions includes the task and the method further includes generating a query based on the text phrase using the security bot server; sending a request including the query using the security bot server to a threat intelligence server; and including a result of the query from the threat intelligence server in the response.
In other features, the method includes turning on multi-factor authentication in response to the selected intent using the security bot server. The method further includes forwarding one of a suspicious file or a suspicious uniform resource locator (URL) link to a file to a remote server using the security bot server.
In other features, the method includes receiving a response at the security bot server from the remote server indicating whether or not the one of the suspicious file or the suspicious URL link is safe. The response indicates whether or not the one of the suspicious file or the suspicious URL link is safe.
In other features, when the selected intent corresponds to a request to close a security alert due to a false positive, the method includes sending a code to a cellular phone using the security bot server, and closing the security alert if the code is received by the security bot server. The method includes creating the natural language model in response to training using text phrase and intent pairs.
A computing system for generating automated responses to improve response times for diagnosing security alerts includes a processor and a memory. An application is stored in the memory and executed by the processor. The application includes instructions for providing an interface for at least one of an e-mail application or a chat application; receiving a text phrase via the interface relating to a security alert; using a natural language interface with a natural language model to select one of a plurality of intents corresponding to the text phrase if a probability that the text phrase corresponds to the selected intent is greater than a predetermined probability; and mapping the selected intent to one of a plurality of actions. Each of the plurality of actions includes at least one of a static response, a dynamic response, and a task. The application includes instructions for sending a response using the interface based on the at least one of the static response, the dynamic response, and the task; generating a query based on the text phrase in response to the task; sending a request including the query to at least one of a security server and a threat intelligence database; and including a result of the query from the at least one of the security server and the threat intelligence database in the response.
Further areas of applicability of the present disclosure will become apparent from the detailed description, the claims and the drawings. The detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the disclosure.
In the drawings, reference numbers may be reused to identify similar and/or identical elements.
Systems and methods according to the present disclosure provide an automated system or bot with a natural language interface that provides assistance to security analysts when responding to security alerts. The security alerts can be generated by a security server based on a set of rules or machine learning or can be generated manually in response to unusual activity, receipt of a suspicious file or URL link, or in any other way. The security alerts can relate to alerts generated from all layers of security including network, application, host, and operating system levels. The systems and methods described herein use a conversation-style triage process to improve response times for deciding whether a security alert is genuine or a false positive.
The security bots use a natural language interface to analyze text phrases submitted by the security analyst and to determine the intent of the security analyst. If an intent can be determined from the text phrase with a sufficiently high level of confidence, the security bot maps the intent to an action that may include a static response, a dynamic response, and one or more tasks. Some of the tasks may involve generating queries, sending the queries to security-based data stores (such as those managed at a local level by a network security server or more globally by a threat intelligence server) and returning a response including the gathered data to the security analyst. Other tasks may involve performing behavioral analysis on or detonating potentially malicious files and uniform resource locator (URL) links to files. Still other tasks may involve turning on higher levels of authentication such as multi-factor authentication for a user or group of users when suspicious activity occurs. As a result, the security analyst does not need to spend time monitoring dashboards and manually writing complicated queries. In some examples, the results include a high-level summary of the threat, synthesized information and/or contextual data.
Referring now to
As will be described further below, the security bot server 60 allows the security analyst or other user to engage in a natural language dialogue during investigations of security alerts that occur in a network environment. In some situations, the security bot server 60 includes a natural language processing application or interface that attempts to map text phrases (generated by the security analyst or other user) to one of a plurality of intents. If the mapping of the text phrase to one of the intents can be done with a sufficiently high level of confidence, the security bot server 60 maps the selected intent to an action, performs the action and generates a response.
In some examples, the action may include generating static responses, generating dynamic responses and/or performing tasks. More particularly, the security bot server 60 completes actions required by the dynamic responses or tasks and generates a response that is output to the security analyst computer 54 via the e-mail or chat server 58. The security analyst and the security bot server 60 may have several exchanges before the security alert is investigated further, escalated or closed because it is a false positive.
In some situations, the security bot server 60 generates requests including one or more queries and forwards the requests to a network security server 64. In some examples, the network security server 64 controls network access using passwords and/or other authentication methods and network file accessing policies. In some examples, the network security server 64 performs threat monitoring for the local network. For example, the network security server 64 may monitor Internet Protocol (IP) header data for packets sent and received by the local network to determine where a login attempt is being made, the type of device being used to log in, prior login attempts by the device, prior login attempts to the account or entity, and/or other data to help identify malicious activity and/or to generate security alerts. In some examples, the network security server 64 uses behavioral analysis or a set of rules to identify malicious activity. In some examples, the network security server 64 also receives or has access to data relating to attacks occurring on other networks and/or remediation strategies that have been used for particular files or types of malware. In some examples, the network security server 64 may be implemented by Microsoft® Azure® Security Center or another suitable security server. The network security server 64 may store data in a local database 66 and may answer the queries relating to malware and remediation using the local database 66.
For example, the network security server 64 may communicate with a threat intelligence server 68 that provides access to details relating to attacks occurring on other non-local networks, IP addresses tied to malicious activity, malicious files, malicious URL links, etc. Alternately, the network security server 64 may generate and send a request including one or more queries to the threat intelligence server 68 and/or may receive data pushed from the threat intelligence server 68. The query may be based on an IP address of the login attempt, the identity of the computer making the login attempt, the suspicious file or URL link, or other information. The threat intelligence server 68 may include a database 70 for storing data relating to malware, malicious IP addresses, remediation efforts, etc. In response to a query, the threat intelligence server 68 forwards information to the network security server 64, which forwards a response to the security bot server 60 (or the response may be sent directly to the security bot server 60). In other examples, the security bot server 60 may send queries directly to the threat intelligence server 68.
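The query-building flow described above can be sketched as follows. This is an illustrative example only; the function names, endpoint URL and response format are hypothetical and are not part of the disclosure.

```python
# Hypothetical sketch: assemble a query from whichever alert details are
# available and send it to a network security or threat intelligence server.

def build_query(ip_address=None, device_id=None, file_hash=None):
    """Build a query dict from the available alert context."""
    query = {}
    if ip_address:
        query["source_ip"] = ip_address
    if device_id:
        query["device_id"] = device_id
    if file_hash:
        query["file_hash"] = file_hash
    return query

def send_request(endpoint, query):
    """Stand-in for an HTTP request to the server; a real implementation
    would use an HTTP client. This stub echoes the request so the round
    trip can be followed end to end."""
    return {"endpoint": endpoint,
            "query": query,
            "result": "no known malicious activity"}

# The result of the query is then included in the response to the analyst.
response = send_request("https://network-security.example/api/query",
                        build_query(ip_address="203.0.113.7"))
```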
The security bot server 60 may send suspicious files or suspicious uniform resource locator (URL) links (connecting to a file) that are attached by the security analyst to a detonation server 80. The detonation server 80 may include (or is connected to another server 84 including) one or more processors 85, one or more virtual machines (VMs) 86 and/or memory 88 including a behavioral analysis application 91. In some examples, the behavioral analysis application 91 uses machine learning to analyze suspicious files or suspicious URL links to determine whether or not the suspicious file or URL link is malicious or safe. Once the determination is made, the detonation server 80 sends a message to the security bot server 60 indicating that the file or URL link is either malicious or safe. The security bot server 60 sends a message to or otherwise notifies the security analyst computer 54. If the file or URL link is not safe, the security bot server 60 instructs the user that the file or URL link is not safe and to delete the file or URL link.
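The detonation round trip above can be sketched as follows. The verdict logic here is a trivial stand-in for the behavioral analysis application 91 (which would run the artifact in a virtual machine), and the pattern list and function names are hypothetical.

```python
# Illustrative sketch: the security bot submits a suspicious artifact for
# detonation and relays the verdict to the security analyst.

SUSPICIOUS_PATTERNS = ("invoice.exe", "update.scr")  # hypothetical indicators

def detonate(artifact_name):
    """Stand-in verdict; a real detonation server would execute the file or
    URL link in a VM and apply behavioral analysis or machine learning."""
    return "malicious" if artifact_name.endswith(SUSPICIOUS_PATTERNS) else "safe"

def notify_analyst(artifact_name):
    """Relay the detonation verdict back to the analyst, with a deletion
    instruction when the artifact is not safe."""
    if detonate(artifact_name) == "malicious":
        return f"{artifact_name} is not safe; please delete it."
    return f"{artifact_name} appears safe."
```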
After completing a dialogue with the security bot server 60, the security analyst can make a determination as to whether or not the security alert needs additional investigation. If additional investigation is needed, the security analyst can escalate the security alert. Alternately, if the security analyst decides that the security alert is a false positive, the security analyst can terminate the security alert.
As previously described, the security analysts are expected to handle a large number of security alerts in a short period of time. To prevent inadvertent or flippant closure of a security alert, the system 50 may perform a code confirmation process. In some examples, the security bot server 60 sends a code to the security analyst. In some examples, the security bot server 60 sends the code to the cellular phone 56 of the security analyst via a cellular system 90. In some examples, the code is sent as a text message using short message service (SMS). The security analyst must enter the correct code in the e-mail or chat window to close the security alert.
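A minimal sketch of the code confirmation process follows, assuming a six-digit numeric code sent over SMS; the code format and function names are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch of the code confirmation step: a random code is issued
# to the analyst's phone, and the alert is closed only if it is echoed back.
import secrets  # cryptographically strong randomness so the code is not guessable

def issue_code():
    """Generate a 6-digit confirmation code to send via SMS."""
    return f"{secrets.randbelow(10**6):06d}"

def try_close_alert(issued_code, entered_code):
    """Close the security alert only when the entered code matches."""
    if entered_code == issued_code:
        return "alert closed"
    return "code mismatch; alert remains open"

code = issue_code()
```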
Referring now to
The processor 104 of the security bot server 60 executes an operating system 114 and one or more applications 118. In some examples, the applications 118 include an e-mail or chat application, a security bot application 121, a natural language processing interface 122 and an authenticator application 123. In some examples, the security bot application 121 is implemented using Microsoft® Bot Framework, although other bot applications can be used. In some examples, the natural language processing interface 122 generates a natural language model 125 based on training using known text phrase and intent pairs. In some examples, the natural language processing interface 122 includes Microsoft® LUIS® application protocol interface (API), although other natural language processing interfaces or engines may be used. In some examples, the security bot application 121 integrates one or more of the other applications 120, 122 and/or 123.
The security bot server 60 further includes a wired interface (such as an Ethernet interface) and/or wireless interface (such as a Wi-Fi, Bluetooth, near field communication (NFC) or other wireless interface (collectively identified at 120)) that establish a communication channel over the distributed communication system 52. The security bot server 60 includes a display subsystem 124 including a display 126. The security bot server 60 includes bulk storage 130 such as a hard disk drive or other bulk storage.
Referring now to
In some examples, the natural language processing interface 122 generates one or more probabilities that the text phrase corresponds to one or more of the intents, respectively. The natural language processing interface 122 selects one of the intents having a highest probability as the selected intent if the probability is greater than a predetermined threshold. The natural language processing interface 122 outputs the selected intent (if applicable) to the security bot application 121. If none of the intents have a probability greater than the predetermined threshold, then the natural language processing interface 122 outputs a default intent (such as None).
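The threshold check described above can be sketched as follows. The intent names, example scores and threshold value are hypothetical; in practice the probabilities would come from the trained natural language model 125.

```python
# Illustrative sketch: pick the highest-probability intent, falling back to a
# default intent (such as "None") when no intent clears the threshold.

def select_intent(intent_scores, threshold=0.7, default="None"):
    """Return the intent with the highest probability, or the default
    intent if that probability does not exceed the threshold."""
    intent, probability = max(intent_scores.items(), key=lambda item: item[1])
    return intent if probability > threshold else default

# Hypothetical model output for the text phrase "is this IP malicious?"
scores = {"close_alert": 0.12, "query_ip_reputation": 0.86, "enable_mfa": 0.02}
```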
The security bot application 121 maps the selected intent to an action. The actions may include static responses, dynamic responses and/or tasks. Some of the tasks require the security bot application to access various Internet resources, local or remote contextual databases 127 such as those associated with the network security server 64, the threat intelligence server 68 and/or other databases.
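One way the intent-to-action mapping might look is sketched below: static responses are fixed strings, while dynamic responses and tasks are callables that do work before replying. All intent and action names here are illustrative assumptions.

```python
# Hypothetical intent-to-action table for the security bot application.

def run_reputation_query():
    """Stand-in task: a real implementation would query the network security
    or threat intelligence server and synthesize the result."""
    return "No known malicious activity for this IP."

ACTIONS = {
    "greeting": "Hello! How can I help with this alert?",  # static response
    "query_ip_reputation": run_reputation_query,           # task + dynamic response
}

def respond(intent):
    """Map the selected intent to its action; unknown intents get a
    generic request to rephrase."""
    action = ACTIONS.get(intent, "Sorry, I didn't understand. Can you rephrase?")
    return action() if callable(action) else action
```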
Referring now to
The processor 204 of the security analyst computer 54 executes an operating system 214 and one or more applications 218. In some examples, the applications 218 include a browser application 219 and one or more other applications 221 such as an e-mail or chat application or interface. In some examples, the browser is used to access the e-mail or chat application and/or a separate e-mail or chat application or interface is used. In some examples, the e-mail or chat application includes Skype®, Slack®, Microsoft Outlook®, Gmail® or other suitable e-mail or chat application.
The security analyst computer 54 further includes a wired interface (such as an Ethernet interface) and/or wireless interface (such as a Wi-Fi, Bluetooth, near field communication (NFC) or other wireless interface (collectively identified at 220)) that establish a communication channel over the distributed communication system 52. The security analyst computer 54 includes a display subsystem 224 including a display 226. The security analyst computer 54 includes a bulk storage system 230 such as a hard disk drive or other storage.
Referring now to
At 244, the method analyzes the text phrase using natural language processing. At 246, the method determines whether or not the text phrase corresponds sufficiently to one of the intents. If 246 is false, the method sends a generic message requesting additional information or offering help and returns to 242. If 246 is true, the method maps the selected intent to an action at 248. At 250, the method performs the action. In some examples, the action includes at least one of responding to the security analyst or other user with a static response or a dynamic response and/or performing a task.
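The triage loop at 244-250 can be sketched as follows. The classifier and action stubs are hypothetical stand-ins for the natural language interface and the mapped actions.

```python
# Illustrative sketch of the method: analyze the text phrase (244), check
# whether an intent was found (246), map it to an action (248), perform it (250).

def classify(text_phrase):
    """Stand-in intent classifier recognizing one hard-coded phrase."""
    return "close_alert" if "false positive" in text_phrase.lower() else None

def perform(intent):
    """Stand-in for performing the mapped action and building a response."""
    return f"performed action for intent: {intent}"

def handle(text_phrase):
    intent = classify(text_phrase)   # 244: natural language processing
    if intent is None:               # 246 false: request more information
        return "Can you provide more detail about the alert?"
    return perform(intent)           # 248-250: map intent to action, perform it
```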
Referring now to
Referring now to
Referring now to
In
In
Referring now to
Referring now to
Referring now to
The foregoing description is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. The broad teachings of the disclosure can be implemented in a variety of forms. Therefore, while this disclosure includes particular examples, the true scope of the disclosure should not be so limited since other modifications will become apparent upon a study of the drawings, the specification, and the following claims. It should be understood that one or more steps within a method may be executed in different order (or concurrently) without altering the principles of the present disclosure. Further, although each of the embodiments is described above as having certain features, any one or more of those features described with respect to any embodiment of the disclosure can be implemented in and/or combined with features of any of the other embodiments, even if that combination is not explicitly described. In other words, the described embodiments are not mutually exclusive, and permutations of one or more embodiments with one another remain within the scope of this disclosure.
Spatial and functional relationships between elements (for example, between modules, circuit elements, semiconductor layers, etc.) are described using various terms, including “connected,” “engaged,” “coupled,” “adjacent,” “next to,” “on top of,” “above,” “below,” and “disposed.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the above disclosure, that relationship can be a direct relationship where no other intervening elements are present between the first and second elements, but can also be an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B, and at least one of C.”
In the figures, the direction of an arrow, as indicated by the arrowhead, generally demonstrates the flow of information (such as data or instructions) that is of interest to the illustration. For example, when element A and element B exchange a variety of information but information transmitted from element A to element B is relevant to the illustration, the arrow may point from element A to element B. This unidirectional arrow does not imply that no other information is transmitted from element B to element A. Further, for information sent from element A to element B, element B may send requests for, or receipt acknowledgements of, the information to element A.
The term application or code, as used above, may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, data structures, and/or objects. The term memory or memory circuit is a subset of the term computer-readable medium. The term computer-readable medium, as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium may therefore be considered tangible and non-transitory. Non-limiting examples of a non-transitory, tangible computer-readable medium are nonvolatile memory circuits (such as a flash memory circuit, an erasable programmable read-only memory circuit, or a mask read-only memory circuit), volatile memory circuits (such as a static random access memory circuit or a dynamic random access memory circuit), magnetic storage media (such as an analog or digital magnetic tape or a hard disk drive), and optical storage media (such as a CD, a DVD, or a Blu-ray Disc).
In this application, apparatus elements described as having particular attributes or performing particular operations are specifically configured to have those particular attributes and perform those particular operations. Specifically, a description of an element to perform an action means that the element is configured to perform the action. The configuration of an element may include programming of the element, such as by encoding instructions on a non-transitory, tangible computer-readable medium associated with the element.
The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general purpose computer to execute one or more particular functions embodied in computer programs. The functional blocks, flowchart components, and other elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer.
The computer programs include processor-executable instructions that are stored on at least one non-transitory, tangible computer-readable medium. The computer programs may also include or rely on stored data. The computer programs may encompass a basic input/output system (BIOS) that interacts with hardware of the special purpose computer, device drivers that interact with particular devices of the special purpose computer, one or more operating systems, user applications, background services, background applications, etc.
The computer programs may include: (i) descriptive text to be parsed, such as JavaScript Object Notation (JSON), hypertext markup language (HTML) or extensible markup language (XML), (ii) assembly code, (iii) object code generated from source code by a compiler, (iv) source code for execution by an interpreter, (v) source code for compilation and execution by a just-in-time compiler, etc. As examples only, source code may be written using syntax from languages including C, C++, C#, Objective C, Haskell, Go, SQL, R, Lisp, Java®, Fortran, Perl, Pascal, Curl, OCaml, Javascript®, HTML5, Ada, ASP (active server pages), PHP, Scala, Eiffel, Smalltalk, Erlang, Ruby, Flash®, Visual Basic®, Lua, and Python®.
None of the elements recited in the claims are intended to be a means-plus-function element within the meaning of 35 U.S.C. § 112(f) unless an element is expressly recited using the phrase “means for,” or in the case of a method claim using the phrases “operation for” or “step for.”