An exemplary computer program listing appendix is provided on a compact disc, in a file named “3928-3_appendix.doc”. The size of 3928-3_appendix.doc is 146,944 bytes (147,456 bytes on a disc), and the file was created on Sep. 2, 2004. The content of 3928-3_appendix.doc is herein incorporated by reference in its entirety.
This invention relates to network-based alert management.
Computer networks may include one or more digital security monitors or sensors that automatically analyze traffic on the network to identify potentially suspicious activity. The sensors may be implemented in either software or hardware. Monitors may focus on security monitoring and/or on fault analysis.
Upon detecting suspicious activity, the sensors typically generate a digital alert message or signal and attempt to bring that message to the attention of network I/S managers, whose responsibility it is to respond in an appropriate defensive manner against hostile digital attacks or to recover quickly from catastrophic failures.
In an aspect, the invention features a system for managing network alerts that includes data connections adapted to receive alerts from network sensors; alert processing logic coupled to the data connections and further including alert integration logic operable to integrate the alerts, report generation logic coupled to the alert integration logic, and distribution logic coupled to the report generation logic; and a remote management unit coupled to the alert processing logic and operable to dynamically modify the alert processing logic. The data connections may include secure electronic communication lines and may be dynamically configurable through the remote management unit. The network sensors may include heterogeneous sensors.
In another aspect, the invention features a method of managing alerts that includes receiving alerts from a number of network sensors, filtering the alerts to produce one or more internal reports, and consolidating the internal reports that are indicative of a common incident into an incident report. Related incident reports may be correlated. The network sensors may format the received alerts. Filtering may include deleting alerts that do not match specified rules, and the filtering rules may be dynamically adjusted. Filtering may also include tagging alerts with a significance score that can indicate a priority measure and a relevance measure.
Among the advantages of the invention may be one or more of the following.
The alert manager can be tailored to a particular application by dynamically adding or removing data connections to sources of incoming alerts, and by dynamically varying the process modules, user filter clauses, priority clauses, topology clauses, and output. Process modules may be added, modified, and deleted while the alert manager is active. Output may be configured for a variety of graphical user interfaces (GUIs). In embodiments, for example, the user can define different priorities for each category of attack as related to denial of service, security, and integrity.
Process modules are logical entities within the alert manager that can respond to an incoming alert in real time and virtual time, i.e., data within an application can cause the alert manager to respond.
The alert manager can act as a sender or a receiver. In embodiments, for example, the alert manager can listen to a specific port on a network, or connect to an external process on a host computer and process its data.
The alert management process can be an interpretive process allowing the incorporation of new process clauses and new rules.
The alert management process may provide a full solution for managing a diverse suite of multiparty security and fault monitoring services. Example targets of the alert management process are heterogeneous network computing environments that are subject to some perceived operational requirements for confidentiality, integrity, or availability. Inserted within the network are a suite of potential multiparty security and fault monitoring services such as intrusion detection systems, firewalls, security scanners, virus protection software, network management probes, load balancers, or network service appliances. The alert management process provides alert distributions within the monitored network through which security alerts, fault reports, and performance logs may be collected, processed and distributed to remote processing stations (e.g., Security Data Centers, Administrative Help Desks, MIS stations). Combined data produced by the security, fault, or performance monitoring services provide these remote processing stations detailed insight into the security posture, and more broadly the overall health, of the monitored network.
Value may be added to the content delivered by the alert management process to the remote processing station(s) that subscribe to alerts in the form of an advanced alert processing chain. For example, alerts received by the alert management process, and prepared for forwarding to a remote processing station, may be filtered using a dynamically downloadable message criteria specification.
In a further aspect, alerts may be tagged with a priority indication flag formulated against the remote processing station's alert processing policy and tagged with a relevance flag that indicates the likely severity of the attack with respect to the known internal topology of the monitored network.
In a further aspect of the invention, alerts may be aggregated (or consolidated) into single incident reports when found to be associated with a series of equivalent alerts produced by the same sensor or by other sensors, based upon equivalence criteria, and the incident reports forwarded to the remote processing station.
The alert management system is configurable with respect to the data needs and policies specified by the remote processing station. These processes are customizable on a per remote processing station basis. For example, two remote processing stations may in parallel subscribe to alerts from the alert management process, with each having individual filtering policies, prioritization schemes, and so forth, applied to the alert/incident reports it receives.
Other features and advantages will become apparent from the following description and from the claims.
Like reference symbols in the various drawings indicate like elements.
Referring to
The security and fault monitoring systems 22 may include, for example, intrusion detection systems, firewalls, security scanners, virus protection software, network management probes, load balancers, and network service appliances. Each of the security and fault monitoring systems 22 produces an alert stream in the form of, for example, security alerts, fault reports, and performance logs. The alert stream is sent to the alert manager 24 for collection, processing, and distribution to the remote processing center 26. Example remote processing centers 26 are security data centers, administrative help desks, and MIS stations.
In an embodiment, the remote processing center 26 subscribes to the alert manager 24, which in turn distributes specific collected and processed alert information to the remote processing center 26, as more fully described below.
The networks 12, 14, and 16 being monitored by the security and fault monitoring systems 22 may include any computer network environment and topology, such as local area networks (LAN), wide area networks (WAN), Ethernet, switched, and TCP/IP-based network environments. Network services occurring within the networks 12-16 include features common to many network operating systems such as mail, HTTP, ftp, remote login, network file systems, finger, Kerberos, and SNMP. Each of the sensors 22 monitors various host and/or network activity within the networks 12-16, and each sensor 22, as discussed above, generates a stream of alerts triggered by potentially suspicious events, such as network packet data transfer commands, data transfer errors, network packet data transfer volume, and so forth. The alerts indicate a suspicion of possible malicious intrusion or other threat to operations within the networks 12-16.
The alert manager 24 includes a receive-input logic module 28. In an embodiment, the receive-input logic 28 of the alert manager 24 subscribes, i.e., establishes a transport connection, to receive each of the alert streams produced by the sensors 22 through a secure electronic communication line (SSL) 30. The alert streams contain raw, i.e., unprocessed, alerts. The monitors 22 may format their respective alert streams in a variety of formats, such as IDIP, SNMP, HP Openview, an XML-based standard format (such as the Attack Specifications from IETF), Common Intrusion Detection Framework (CIDF), GIDOs, or some other format. The receive-input logic 28 of the alert manager 24 is equipped with translation modules 32 to translate the original, raw alert streams from the monitors 22 into a common format for further processing, if the alerts do not arrive in the common format.
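The translation step can be illustrated with a small sketch. The following Python fragment is a hypothetical illustration only: the field names, the common-format record, and the format dispatch are assumptions made for exposition, and do not represent the actual translation modules 32 or the wire formats (IDIP, SNMP, CIDF, and so forth) named above.

```python
# Hypothetical sketch of normalizing heterogeneous raw alerts into a common
# record; field names and the "common format" itself are assumptions.
from datetime import datetime, timezone

def normalize_snmp_like(raw: dict) -> dict:
    """Map an SNMP-trap-style alert (assumed shape) into a common alert record."""
    return {
        "sensor_id": raw.get("agent_addr", "unknown"),
        "attack_type": raw.get("trap_name", "unspecified"),
        "timestamp": raw.get("time", datetime.now(timezone.utc).isoformat()),
        "source": raw.get("src"),
        "target": raw.get("dst"),
    }

def normalize(raw: dict, fmt: str) -> dict:
    """Dispatch on the sensor's native format; unknown formats are wrapped so
    downstream processing still sees the common fields."""
    if fmt == "snmp":
        return normalize_snmp_like(raw)
    # Translators for other formats (IDIP, CIDF, XML-based) would be added here.
    return {"sensor_id": "unknown", "attack_type": "unspecified",
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "source": None, "target": None, "raw": raw}
```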
In another embodiment, the monitors 22 include conversion software (not shown), also referred to as “wrapper” software that translates a monitor's raw alert stream into the common format used by the alert manager 24. The wrapper software can add data items of interest to the alert manager 24, by querying its network 12-16.
In another embodiment, the alert management network 10 includes a combination of monitors 22 having wrapper software and receive-input logic 28 that pre-processes raw alerts, to accommodate a heterogeneous base of monitors 22 that an end-user desires to manage.
The alert manager 24 includes an alert processing engine 34. Raw alerts received by the receive-input module 28 and formatted into the common format are sent to the alert processing engine 34.
Referring to
For example, a particular end-user subscriber may be responsible only for a portion of the overall operations network and may only wish to see alerts coming from a particular subset of monitors 22, e.g., from particular ports. Each end-user subscriber can interactively define his or her own customized user-specified filters using the remote management interface 36 of the remote processing center 26, fully described below.
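As a concrete illustration of such a user-specified filter, the following Python sketch keeps only alerts from a subscribed subset of monitors and ports; the sensor names, port numbers, and alert fields are hypothetical and are not the filter clauses actually downloaded through the remote management interface 36.

```python
# Hypothetical user-specified filter: keep only alerts from subscribed
# sensors/ports; everything else is dropped before further processing.
def make_monitor_filter(allowed_sensors, allowed_ports=None):
    """Return a predicate that keeps only alerts from the given sensors/ports."""
    def keep(alert: dict) -> bool:
        if alert.get("sensor_id") not in allowed_sensors:
            return False
        if allowed_ports is not None and alert.get("target_port") not in allowed_ports:
            return False
        return True
    return keep

# Example: a subscriber responsible only for a web segment.
web_filter = make_monitor_filter(allowed_sensors={"ids-web-01", "fw-dmz"},
                                 allowed_ports={80, 443})
alerts = [{"sensor_id": "ids-web-01", "target_port": 80},
          {"sensor_id": "ids-db-02", "target_port": 1521}]
filtered = [a for a in alerts if web_filter(a)]   # the database alert is dropped
```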
The filtered alerts are prioritized 56, i.e., rated or scored according to priorities dynamically controlled by the user. In an embodiment, the priority of an alert is determined by analyzing the known (relative) potential impact of the identified attack category with respect to each of various concerns such as confidentiality, data integrity, and system availability. Confidentiality involves allowing only authorized users to view network data. Data integrity involves allowing only authorized persons to change data. System availability involves providing users access to data whenever needed, with minimum downtime.
Different categories of known computer intrusions and anomalies generally pose threats with differing levels of impact on each of the above three concerns. In addition, for different users and different applications, each of the concerns may be of different relative priority. For example, in a general Internet news/search portal like Yahoo! or Lycos, continuous availability may be a more important concern than confidentiality. Conversely, for a government intelligence database, confidentiality may be a greater priority than continuous availability. For an e-commerce business site, all three concerns may be of roughly equal seriousness and priority. An ultimate priority score assigned to a particular alert for a given end-user during prioritization 56 reflects a sum or combination of the identified attack's potential adverse impact along each of the dimensions of interest (confidentiality, data integrity, and system availability), weighted by the end-user's individual profile of relative priority for each such dimension.
In an embodiment, a default priority profile is provided for each user or subscriber that assigns equal priority to confidentiality, data integrity, and system availability. In a preferred embodiment, the end-user may configure the priorities dynamically, and modify the default values as desired, through the remote management interface 36 that gives the user the flexibility to customize priority assignments in a manner that reflects his/her unique concerns.
In another embodiment, users (or system developers) directly assign a relative priority score to each type of attack, instead of ranking more abstract properties such as integrity or availability. This allows a more precise reflection of a user's priorities regarding specific attacks, but requires more initial entry of detailed information.
In an embodiment, users may register a listing of critical services, identified by <host ID, protocol> pairs, for which potential attacks or operational failures are considered to be of especially high priority.
Management and alteration of filters and listings of critical services, in accordance with each of the prioritization methodologies described above, can be performed dynamically and interactively using the remote management interface 36, while alert manager 24 is in operation and as user priorities change.
The alerts are topology vetted 58. Vetting 58 provides a relevance rating to alerts based on the topological vulnerability of the network being monitored to the type of attack signaled by the alert. Example topology attributes include the computing environment, the operating system (O/S) in use, the network infrastructure, and so forth. In a preferred embodiment, vetting 58 utilizes a mapping between each network host and an enumeration of that host's O/S and O/S version(s). Vetting 58 further preferably utilizes a topology relevance table indicating the relevance of various types of attacks to each of the different possible OS/version environments. Thus, to determine and assign a relevance score for a particular alert, the host ID (hostname/IP address) for the target of that alert can be used to retrieve its OS/version information, and the OS/version along with the attack type of the alert can be used to retrieve a relevancy score from the topology table.
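A minimal Python sketch of this two-step lookup is given below; the host-to-O/S map, the attack types, and the relevance values are invented for illustration and are not entries from an actual topology relevance table.

```python
# Hypothetical topology vetting: host -> O/S, then (attack type, O/S) -> relevance.
HOST_OS = {"10.0.0.5": ("Windows NT", "4.0"),
           "10.0.0.9": ("Linux", "2.2")}

# (attack_type, O/S name) -> relevance score; values here are illustrative.
RELEVANCE = {("ping of death", "Windows NT"): 80,
             ("ping of death", "Linux"): 20}

def relevance_score(alert: dict, default: int = 50) -> int:
    """Look up the target host's O/S, then the attack/O-S relevance entry."""
    os_info = HOST_OS.get(alert.get("target"))
    if os_info is None:
        return default                      # unmapped host: neutral relevance
    os_name, _version = os_info
    return RELEVANCE.get((alert.get("attack_type"), os_name), default)

print(relevance_score({"target": "10.0.0.9", "attack_type": "ping of death"}))  # 20
```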
In an embodiment, the topology table of the network being monitored is dynamically configurable by end users through the remote management interface 36.
In another embodiment, automatic local area network (LAN) mapping is provided by a network topology scope application.
The relevance of various types of known attacks against different topologies is preferably specified in predefined maps, but dynamically configured using the remote management interface 36.
Internal reports are generated 60 from the output of filtering 54, prioritizing 56 and vetting 58. Internal reports generally include fewer alerts as compared with the original raw alert stream, as a result of the user-configured filtering 40. Internal reports also tag or associate each alert with priority and/or relevance scores as a result of priority mapping 56 and topology vetting 58, respectively.
The internal reports are used to generate 62 consolidated incident reports. A consolidated incident report adds perspective and reduces information clutter by merging/combining the internal reports for multiple alerts into a single incident report. In a preferred embodiment, generating 62 is carried out through report aggregation and equivalence recognition. Aggregation refers to combining alerts produced by a single sensor, whereas equivalence recognition refers to combining alerts from multiple sensors. The underlying notion in both cases is that nominally different alerts may actually represent a single intrusion “incident” in the real world. By analogy, a single criminal intrusion into a physical property might trigger alarms on multiple sensors such as a door alarm and a motion detector that are instrumented on the same premises, but from an informational perspective both alarms are essentially signaling the same event.
In an embodiment, alert parameters examined for report aggregation include a variable combination of attack type, timestamp, monitor identification (ID), user ID, process ID, and <IP, port addresses> for the source and target of the suspicious activity.
When an internal report is generated 60, alerts are consolidated and the corresponding priority and relevance tags for the individual alerts are merged into single meta-priority/meta-relevance scores for the single incident. Different functions may be utilized for blending the priorities, such as additive, min/max, average, and so forth. The duration of the overall incident is also preferably computed and associated with the incident, based on the time stamps of the various individual alerts comprising the incident.
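The following Python sketch illustrates one way such a merge could be performed; the use of max() as the blend function is only one of the alternatives mentioned above, and the report fields are assumed for illustration.

```python
# Hypothetical consolidation of equivalent internal reports into one incident.
def consolidate(reports: list) -> dict:
    """Merge internal reports judged to describe the same incident."""
    timestamps = [r["timestamp"] for r in reports]
    return {
        "attack_type": reports[0]["attack_type"],
        "alert_count": len(reports),
        "meta_priority": max(r["priority"] for r in reports),    # one possible blend
        "meta_relevance": max(r["relevance"] for r in reports),
        "start": min(timestamps),
        "end": max(timestamps),
        "duration": max(timestamps) - min(timestamps),           # incident duration
        "sensors": sorted({r["sensor_id"] for r in reports}),
    }
```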
The consolidated incident reports are used to generate 64 a report output. Formatting of the output report is based on subscriber-customized criteria that are defined using the remote management interface 36. The report output is transported 66 to the remote processing center 26.
Selection of a transport is under user control and managed using the remote management interface 36. The user may specify, for example, E-mail, XML, HTML and/or writing out to a file. In an embodiment, the transport occurs over an SSL for display and assessment by the end-user.
The filtering 54, prioritization 56 and topology vetting 58 are event driven, i.e., each alert is processed and filtered/tagged as it arrives, one alert at a time. However, temporal clauses are utilized for aspects of report aggregation and equivalence recognition among multiple alerts. For example, as internal reports are generated 60, a sliding window is established during which additional records may be merged into the aggregate incident report. A single-alert internal report may be sent to the remote processing center 26, indicating that the alert manager 24 has witnessed the alert. A subsequent aggregate alert report, i.e., an incident report, covering that single alert as well as others, may also be forwarded to the remote processing center 26 to indicate a duration of the attack/incident, an aggregate count of individual alerts representing this incident, and an aggregate priority. In an embodiment, aggregate alert flushing may occur after some period of inactivity (e.g., “two minutes since last event”). The aggregate alert flushing is not event driven, but rather driven by an internal timeout recognized from a system clock (not shown) of the alert manager 24.
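A simplified sketch of this event-driven aggregation with timeout-driven flushing follows; the keying of open incidents on (attack type, target) and the two-minute default are assumptions made for illustration, not the alert manager's actual clauses.

```python
# Hypothetical sliding-window aggregation with inactivity-based flushing.
import time

class Aggregator:
    def __init__(self, inactivity_timeout: float = 120.0):
        self.timeout = inactivity_timeout        # e.g., "two minutes since last event"
        self.open_incidents = {}                 # key -> {"alerts": [...], "last": t}

    def on_alert(self, alert: dict, now: float = None):
        """Event-driven path: merge the alert into an open incident, or open one."""
        now = time.time() if now is None else now
        key = (alert["attack_type"], alert["target"])
        incident = self.open_incidents.setdefault(key, {"alerts": [], "last": now})
        incident["alerts"].append(alert)
        incident["last"] = now

    def flush_idle(self, now: float = None):
        """Timeout-driven path: emit incidents with no activity within the window."""
        now = time.time() if now is None else now
        idle = [k for k, v in self.open_incidents.items()
                if now - v["last"] >= self.timeout]
        return [self.open_incidents.pop(k) for k in idle]
```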
Referring to
In this example, for first user 94 a “ping of death” alert 82 will have a priority score=(0*90)+(0.2*10)+(0.8*10)=10; whereas for second user 96 a “ping of death” alert 82 will receive a priority score=(0.8*90)+(0.1*10)+(0.1*10)=74.
As is seen from the description above, (a) it is the relative value of these priority scores that has significance, not the absolute magnitudes, and (b) the priority values for alerts and for user preferences are subjective values that may vary from one application to another and from one user to another. As noted above, the alert priority map values and user priority profiles may be dynamically adjusted and customized by individual users via remote management interface 36.
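The weighted-sum computation behind these scores can be sketched as follows. The numbers reproduce the two scores in the example above; the assignment of the 90/10/10 impact values to availability, confidentiality, and integrity, respectively, is an assumption consistent with a denial-of-service style attack, and the code is an illustrative sketch rather than the priority database record 80 itself.

```python
# Hypothetical priority scoring: weighted sum of per-dimension impact values.
PING_OF_DEATH_IMPACT = {"availability": 90, "confidentiality": 10, "integrity": 10}

USER_PROFILES = {
    "user_94": {"availability": 0.0, "confidentiality": 0.2, "integrity": 0.8},
    "user_96": {"availability": 0.8, "confidentiality": 0.1, "integrity": 0.1},
}

def priority_score(impact: dict, weights: dict) -> float:
    """Sum the attack's impact along each dimension, weighted by the user profile."""
    return sum(weights[d] * impact[d] for d in impact)

print(priority_score(PING_OF_DEATH_IMPACT, USER_PROFILES["user_94"]))  # 10.0
print(priority_score(PING_OF_DEATH_IMPACT, USER_PROFILES["user_96"]))  # 74.0
```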
Referring again to
In an embodiment, the alert management network 10 provides an open, dynamic infrastructure for alert processing and management. The alert manager 24 preferably includes functionality for dynamically generating, suspending, and configuring data connections and logical process modules, in response to interactive remote user commands issued via the remote management interface 36. The remote management interface 36 preferably executes a Java application that generates command files, in response to end-user requests, in the form of directives and any necessary data files, such as the priority database record 80, and so forth. The Java application communicates, e.g., via telnet, with the alert manager 24 and downloads the directives and data files. The alert processing engine 34, in one embodiment preferably a PostScript interpreter, can process the directives dynamically. Many of the directives are preferably defined in terms of PostScript code that resides locally in a library 44 in the alert manager 24. Applications running in the alert manager 24 are written in modular fashion, allowing directives to accomplish meaningful changes of logical behavior by instructing the alert manager 24 to, for example, terminate a particular process clause and activate a newly downloaded clause.
By way of another example, through the operation of the alert processing engine 34 the alert manager 24 can dynamically establish and suspend connections to the various alert streams generated by the security and fault monitoring systems 22. Thus, the alert manager 24 can dynamically “plug into” (i.e., connect) new alert streams, such as alert streams from additional sensors newly deployed by an end-user, and likewise can dynamically suspend (permanently or temporarily) its connection to alert streams from sensors 22 that are removed, replaced, taken offline, and so forth. Similarly, alert manager 24 can dynamically generate or suspend modules of the alert management process 50, and can dynamically adjust the configurable parameter settings of those modules.
In this manner, alert manager 24 is designed to be responsive to dynamic configuration requests initiated by end users using the remote management interface 36 of the remote processing center 26. As mentioned above, the remote management interface 36 provides an interactive interface at workstation 42 for end-users to specify desired modifications to the dynamically configurable aspects of alert manager 24.
Referring to
A detailed functional specification for a software infrastructure corresponding to eFlowgen module 102 is included in the Appendix, incorporated herein.
In another embodiment, referring to
Another correlation technique residing in the correlation logic engine 110 looks for interrelated vulnerabilities, applying rule-based knowledge to identify groups of distinct incidents that can inferentially be interpreted as related parts of a single, coordinated attack. For example, rules matching patterns of incidents that form a chain over time, where the target of an earlier incident becomes the source of a subsequent incident, may allow the correlation logic engine 110 to conclude that these are likely not unrelated incidents, and that a “worm” infection appears to be spreading.
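A toy Python sketch of such a chain rule is shown below; the incident fields and the simple pairwise matching are illustrative assumptions and do not represent the rule base of the correlation logic engine 110.

```python
# Hypothetical chain rule: the target of an earlier incident reappears as the
# source of a later one, suggesting worm-like propagation.
def find_chains(incidents: list) -> list:
    """Return (earlier, later) incident pairs that form a propagation chain."""
    ordered = sorted(incidents, key=lambda i: i["start"])
    chains = []
    for i, earlier in enumerate(ordered):
        for later in ordered[i + 1:]:
            if later["source"] == earlier["target"]:
                chains.append((earlier, later))
    return chains

incidents = [
    {"start": 1, "source": "10.0.0.2", "target": "10.0.0.7"},
    {"start": 5, "source": "10.0.0.7", "target": "10.0.0.9"},   # 10.0.0.7 reappears
]
print(len(find_chains(incidents)))   # 1 chained pair found
```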
In an embodiment, the correlation logic engine 110 incorporates statistical inferential methods. The correlation logic engine 110 attempts to draw conclusions automatically based on received intrusion incident reports and produces reports for the end-user indicating any correlations found.
The alert manager 24 and other components of the alert management network 10 may be implemented and executed on a wide variety of digital computing platforms, including, but not limited to, workstation-class computer hardware and operating system software platforms such as Linux, Solaris, FreeBSD/Unix, and Windows-NT.
Referring to
The storage device 130 can store instructions that form an alert manager 24. The instructions may be transferred to the memory 126 and CPU 128 in the course of operation. The instructions for alert manager 24 can cause the display device 122 to display messages through an interface such as a graphical user interface (GUI). Further, instructions may be stored on a variety of mass storage devices (not shown).
Other embodiments are within the scope of the following claims.
This application is a Continuation of U.S. patent application Ser. No. 09/626,547, filed on Jul. 25, 2000, now U.S. Pat. No. 6,704,874, and is a Continuation-in-Part of U.S. patent application Ser. No. 09/188,739, filed on Nov. 9, 1998, now U.S. Pat. No. 6,321,338. U.S. patent application Ser. No. 09/626,547 is itself a Continuation-in-Part of U.S. patent application Ser. No. 09/188,739.
This invention was made with Government support under Contract Numbers F30602-96-C-0294 and F30602-96-C-0187, awarded by DARPA and the Air Force Research Laboratory. The Government has certain rights in this invention.
Number | Name | Date | Kind |
---|---|---|---|
4305097 | Doemens et al. | Dec 1981 | A |
4672609 | Humphrey et al. | Jun 1987 | A |
4773028 | Tallman | Sep 1988 | A |
5210704 | Husseiny | May 1993 | A |
5440498 | Timm | Aug 1995 | A |
5440723 | Arnold et al. | Aug 1995 | A |
5475365 | Hoseit et al. | Dec 1995 | A |
5517429 | Harrison | May 1996 | A |
5539659 | McKee et al. | Jul 1996 | A |
5557742 | Smaha et al. | Sep 1996 | A |
5568471 | Hershey et al. | Oct 1996 | A |
5704017 | Heckerman et al. | Dec 1997 | A |
5706210 | Kumano et al. | Jan 1998 | A |
5737319 | Croslin et al. | Apr 1998 | A |
5748098 | Grace | May 1998 | A |
5790799 | Mogul | Aug 1998 | A |
5796942 | Esbensen | Aug 1998 | A |
5825750 | Thompson | Oct 1998 | A |
5878420 | De la Salle | Mar 1999 | A |
5919258 | Kayashima et al. | Jul 1999 | A |
5922051 | Sidey | Jul 1999 | A |
5940591 | Boyle et al. | Aug 1999 | A |
5966650 | Hobson et al. | Oct 1999 | A |
5974237 | Shurmer et al. | Oct 1999 | A |
5974457 | Waclawshy et al. | Oct 1999 | A |
5991881 | Conklin et al. | Nov 1999 | A |
6009467 | Ratcliff et al. | Dec 1999 | A |
6052709 | Paul | Apr 2000 | A |
6067582 | Smith et al. | May 2000 | A |
6070244 | Orchier et al. | May 2000 | A |
6092194 | Touboul | Jul 2000 | A |
6119236 | Shipley | Sep 2000 | A |
6138121 | Costa et al. | Oct 2000 | A |
6144961 | De la Salle | Nov 2000 | A |
6192392 | Ginter | Feb 2001 | B1 |
6263441 | Cromer et al. | Jul 2001 | B1 |
6269456 | Hodges et al. | Jul 2001 | B1 |
6275942 | Bernhard et al. | Aug 2001 | B1 |
6279113 | Vaidya | Aug 2001 | B1 |
6298445 | Shostack et al. | Oct 2001 | B1 |
6311274 | Day | Oct 2001 | B1 |
6321338 | Porras et al. | Nov 2001 | B1 |
6324656 | Gleichauf et al. | Nov 2001 | B1 |
6353385 | Molini et al. | Mar 2002 | B1 |
6370648 | Diep | Apr 2002 | B1 |
6396845 | Sugita | May 2002 | B1 |
6405318 | Rowland | Jun 2002 | B1 |
6408391 | Huff et al. | Jun 2002 | B1 |
6442694 | Bergman et al. | Aug 2002 | B1 |
6453346 | Garg et al. | Sep 2002 | B1 |
6460141 | Olden | Oct 2002 | B1 |
6477651 | Teal | Nov 2002 | B1 |
6499107 | Gleichauf et al. | Dec 2002 | B1 |
6502082 | Toyama et al. | Dec 2002 | B1 |
6519703 | Joyce | Feb 2003 | B1 |
6529954 | Cookmeyer et al. | Mar 2003 | B1 |
6532543 | Smith et al. | Mar 2003 | B1 |
6535227 | Fox et al. | Mar 2003 | B1 |
6546493 | Magdych et al. | Apr 2003 | B1 |
6553378 | Eschelbeck | Apr 2003 | B1 |
6681331 | Munson et al. | Jan 2004 | B1 |
6701459 | Ramanathan et al. | Mar 2004 | B2 |
6704874 | Porras et al. | Mar 2004 | B1 |
6707795 | Noorhosseini et al. | Mar 2004 | B1 |
6725377 | Kouznetsov | Apr 2004 | B1 |
6732167 | Swartz et al. | May 2004 | B1 |
6751738 | Wesinger et al. | Jun 2004 | B2 |
6826697 | Moran | Nov 2004 | B1 |
6839850 | Campbell et al. | Jan 2005 | B1 |
6947726 | Rockwell | Sep 2005 | B2 |
6971028 | Lyle et al. | Nov 2005 | B1 |
20020019870 | Chirashnya et al. | Feb 2002 | A1 |
20020032717 | Malan et al. | Mar 2002 | A1 |
20020032793 | Malan et al. | Mar 2002 | A1 |
20020032880 | Poletto et al. | Mar 2002 | A1 |
20020035698 | Malan et al. | Mar 2002 | A1 |
20020138753 | Munson | Sep 2002 | A1 |
20020144156 | Copeland, III | Oct 2002 | A1 |
20030037136 | Labovitz et al. | Feb 2003 | A1 |
20030145226 | Bruton, III et al. | Jul 2003 | A1 |
20030172166 | Judge et al. | Sep 2003 | A1 |
Number | Date | Country |
---|---|---|
9913427 | Mar 1999 | WO |
9957626 | Nov 1999 | WO |
WO 9957625 | Nov 1999 | WO |
0010278 | Feb 2000 | WO |
0025214 | May 2000 | WO |
0025527 | May 2000 | WO |
0034867 | Jun 2000 | WO |
02101516 | Dec 2002 | WO |
WO 02101516 | Dec 2002 | WO |
WO 03077071 | Sep 2003 | WO |
Number | Date | Country | |
---|---|---|---|
Parent | 09626547 | Jul 2000 | US |
Child | 09629510 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 09188739 | Nov 1998 | US |
Child | 09626547 | US |