Event database management method and system for network event reporting system

Information

  • Patent Grant
  • 7516208
  • Patent Number
    7,516,208
  • Date Filed
    Friday, July 20, 2001
  • Date Issued
    Tuesday, April 7, 2009
Abstract
Improved and more efficient techniques are described for reducing the amount of work that needs to be performed by a database in a computer network in order to distribute event summary data to a large number of administrator clients. Delays experienced by event data at a database, e.g., due to delays in accessing the database, are reduced so that clients can be notified of the events as soon as possible. Furthermore, event data obtained from both local and remote networks is efficiently coordinated using replica and union processes. Each monitoring location in the network includes both locally generated events and a copy of remotely generated events, which are provided and maintained by one or more remote monitoring locations. The monitoring locations update one another with their event data.
Description
COPYRIGHT NOTICE

A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.


CROSS REFERENCE TO RELATED APPLICATIONS

This application is related to U.S. patent application Ser. No. 09/877,619, filed Jun. 8, 2001 and entitled “METHOD AND SYSTEM FOR EFFICIENT DISTRIBUTION OF NETWORK EVENT DATA,” which application is hereby incorporated herein by reference in its entirety.


BACKGROUND OF THE INVENTION

The invention disclosed herein relates generally to network monitoring systems. More particularly, the present invention relates to improved methods and systems for efficiently storing event data in a database and distributing the event data to different users, where the event data relates to events occurring on a computer network.


Maintaining the proper operation of services provided over a network is usually an important but difficult task. Service administrators are often called upon to react to a service failure by identifying the problem that caused the failure and then taking steps to correct the problem. The expense of service downtime, the limited supply of network engineers, and the competitive nature of today's marketplace have forced service providers to rely more and more heavily on software tools to keep their networks operating at peak efficiency and to deliver contracted service levels to an expanding customer base. Accordingly, it has become vital that these software tools be able to manage and monitor a network as efficiently as possible.


A number of tools are available to assist administrators in completing these tasks. One example is the NETCOOL® suite of applications available from Micromuse Inc. of San Francisco, Calif. which allows network administrators to monitor activity on networks such as wired and wireless voice communication networks, intranets, wide area networks, or the Internet. The NETCOOL® suite includes probes and monitors which log and collect network event data, including network occurrences such as alerts, alarms, or other faults, and store the event data in a database on a server. The system then reports the event data to network administrators in graphical and text based formats in accordance with particular requests made by the administrators. Administrators are thus able to observe desired network events on a real-time basis and respond to them more quickly. The NETCOOL® software allows administrators to request event data summarized according to a desired metric or formula, and further allows administrators to select filters in order to custom design their own service views and service reports.


In a demanding environment, there are many tens or even hundreds of clients viewing essentially the same filtered or summarized event data. Moreover, in a large network there are thousands of devices being monitored at a number of geographically distant locations, and events occur on these devices with great frequency. As a result, the databases that store event data have become very large and are constantly being updated. Newly incoming event data is delayed before being stored or processed at the database, e.g., during a period when the databases are locked. Even if such delays are for a fraction of a second or a few seconds, this may impair the ability of the administrator clients to receive event data in a timely manner. These and related issues become exacerbated as the size of a network increases, thus limiting the scalability of the network management system.


Accordingly, there is a need for improvements in how such network event databases are updated with events and how they are managed to provide greater scalability and efficiency. Furthermore, there is a need for improved techniques for efficiently coordinating the processing of event data obtained from both local and remote networks.


SUMMARY OF THE INVENTION

The present invention provides improved methods and systems for managing an event database in a network monitoring system. The improvements increase the efficiency in the way event data is handled by the database, increase the power and speed of the event database, and improve the scalability of the event database to allow it to serve a larger network. The improvements include methods for allowing users to set triggers to automatically pre-process event data received from monitored sites, a distributed system of local master and replica databases and methods for coordinating event data across them, and a notification subsystem for serving repeated client requests for raw and processed event data. These improvements work together to achieve improved performance as described herein.


Using the first aspect of the improvements, users are provided with the ability to add triggers into the event database. The triggers automatically test for certain event conditions, times, or database events, including system events or user-defined events having a name and selected parameters, and initiate a programmed action upon detection of such a condition, time, or event. One such trigger is a pre-processing trigger which examines event data before it is added to the event database, while it is still stored, e.g., in an event buffer or queue. One possible use of such a trigger is to prevent or reduce duplication of event data in the event database. That is, the same or duplicated event data may be reported from one or more monitors, and the pre-processing of the event data detects the occurrence of duplicate data through comparison with event data already stored in the event database, or with other event data that has been received but not yet inserted into the event database. Preventing duplication keeps the event database streamlined and also limits the time during which the event database is in a lockdown condition, such as may be necessary during read/write operations.
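The deduplication pre-processing just described can be sketched in Python as follows. This is an illustrative model only, not the patented implementation; the field names (Node, Type, Tally, Summary, LastOccurrence) follow the deduplication trigger example given later in the description, and the function name and key fields are assumptions.

```python
# Hypothetical sketch of pre-processing deduplication: before an event
# is inserted, compare it to rows already held for insertion; on a
# match, update the existing row instead of inserting a duplicate.

def deduplicate(table, new_event, key_fields=("Node", "Type")):
    """Merge new_event into table; return True if it duplicated a row."""
    key = tuple(new_event[f] for f in key_fields)
    for row in table:
        if tuple(row[f] for f in key_fields) == key:
            row["Tally"] = row.get("Tally", 1) + 1        # count the duplicate
            row["Summary"] = new_event["Summary"]         # keep latest summary
            row["LastOccurrence"] = new_event["LastOccurrence"]
            return True
    table.append(dict(new_event, Tally=1))                # genuinely new event
    return False
```

A duplicate report thus updates a tally and timestamp rather than adding a second row, keeping the table compact.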


Moreover, the invention efficiently provides event data from both local and remote monitoring locations using a system of local replicas and their unioning. Each monitoring location may maintain its own event data for local monitored sites, as well as a replica of event data stored at other, remote monitoring locations. Using a union or combining operation, the local and remote data can be combined to provide a unified event data summary for use by a client. The monitoring locations update one another when their event data changes.


The combination of the pre-processing triggers and the use of replicas and unioning provides substantially improved scalability of the network monitoring system. Since data in local event databases is propagated to corresponding replicas in remote locations, it is particularly advantageous to avoid or delay updating each local database table unless desired or necessary. The use of triggers helps achieve this selection before tying up each database.


The improved scalability and power of the event database is further achieved through a publish/subscribe notification method and software component. The notification component registers persistent client requests for filtered raw data or summary data, associates similar requests with one another, and delivers the corresponding results at about the same time to all users who request the same or similar data. This allows users to be efficiently updated on new event data without excessively tying up the event database.


A common thread in the use of triggers, especially pre-processing triggers, a union/replica architecture, and a publish/subscribe notification system is that they are all event-based methodologies. Triggers are activated by the occurrence of events, be they temporal or otherwise, and initiate actions on the events themselves, e.g., refusal to insert, or immediate communication to a client. The union/replica architecture moves away from a traditional database model and toward a model which focuses on where events are occurring and what locations they might affect. The notification model focuses on increased efficiency in delivering specific event data or event data metrics to clients requesting it. Shifting the focus of a network management system to the events occurring on the network results in the improvements in speed and efficiency described herein.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention is illustrated in the figures of the accompanying drawings which are meant to be exemplary and not limiting, in which like references are intended to refer to like or corresponding elements, and in which:



FIG. 1 is a block diagram showing functional components of an improved network monitoring system in accordance with one embodiment of the present invention;



FIG. 2 is a flow diagram showing the pre-processing of event data in accordance with one embodiment of the present invention;



FIG. 3 is a flow chart showing a process of using a trigger to pre-process event data in accordance with one embodiment of the present invention;



FIG. 4 is a flow diagram showing a process of managing local and replica event databases in accordance with one embodiment of the present invention;



FIG. 5 illustrates a client display with an ordered view in accordance with one embodiment of the present invention;



FIG. 6 illustrates a client display with a map/geographical view in accordance with one embodiment of the present invention; and



FIG. 7 illustrates a client display with a list view in accordance with one embodiment of the present invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

In accordance with the invention, methods and systems are described herein, with reference to the Figures, for providing pre-processing of event data, and efficient delivery of the event data to a number of clients. In particular, the description herein focuses on a network monitoring system in which data is captured relating to events such as faults or alarms occurring on a computer network and is distributed to a number of administrator clients responsible for monitoring the network and preventing or correcting such faults.



FIG. 1 illustrates a network monitoring system in accordance with the present invention. The system includes an object server 26 which receives event data from a number of monitoring devices including probes 2 and monitors 4, stores the event data in one or more event databases 28, and provides the event data to a number of clients 8 which issue requests for the data. In one embodiment, the event databases 28 in the object server 26 are relational event databases 28 such as the Object Server database available as part of the NETCOOL®/Omnibus system available from Micromuse, Inc., San Francisco, Calif. Alternatively, the event database 28 may be any other suitable type of data store, such as an object-oriented database, flat file, etc. The event database 28 is a memory resident database, which is periodically dumped to file in case of failure. Events come in from probes 2 and monitors 4 in the form of SQL inserts. Clients 8 also access the database using SQL. As explained further below, the server is easy to configure, and has increased performance, functionality and flexibility.


The probes 2 are portions of code that collect events from network management data sources 6, APIs, databases, network devices 5, log files, and other utilities. Monitors 4 are software applications that simulate network users to determine response times and availability of services 7 such as on a network. Other monitoring devices may be used to collect and report on events occurring in the network or related devices or services.


The network management system monitors and reports on activity on a computer, telecommunications, or other type of network. In this context, clients 8 are typically administrators who make requests for event data which they need to monitor on a regular basis. Clients may elect to see all event activity on the network. More typically for larger networks, clients will only want to see event data occurring on particular parts of the network for which they are responsible or which may affect their portion of the network. In addition, clients may only want to see summaries of their relevant part of the event data, such as event counts, sums, averages, minimums, maximums, or other distributions of event data. Clients input the various requests into an event list 34, with each request representing, and sometimes being referred to herein as, a particular view on the data.


Event data is stored in the event database 28 of one embodiment in a number of rows and columns, with each row representing an event and the columns storing fields of data relating to the event, e.g., location, type, time, severity, etc. As used herein, then, a view is generally a mechanism for selecting columns from the database and may also optionally include a filter. A filter is generally a mechanism for excluding rows of data in the database based on column values. Views may therefore be based on filters. Filters may also be based on other filters and other views. A metric view is generally a type of view that provides summary information on the number of rows in a view rather than the actual data, and usually requires some arithmetic processing on the number of rows.
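The distinction among filters, views, and metric views described above can be illustrated with a small Python sketch. The function names and the example columns are illustrative assumptions, not the patented implementation; the point is that a filter excludes rows by column value, a view selects columns (optionally through a filter), and a metric view summarizes row counts rather than returning the data itself.

```python
# Illustrative model of filter / view / metric view over a row-oriented
# event table, where each row is a dict of column -> value.

def apply_filter(rows, predicate):
    """Filter: exclude rows of the table based on column values."""
    return [r for r in rows if predicate(r)]

def view(rows, columns, predicate=None):
    """View: select columns, optionally through a filter."""
    if predicate is not None:
        rows = apply_filter(rows, predicate)
    return [{c: r[c] for c in columns} for r in rows]

def metric_view(rows, predicate=None):
    """Metric view: summary information on the number of matching rows."""
    if predicate is not None:
        rows = apply_filter(rows, predicate)
    return {"count": len(rows)}
```

A view built on a filter is then just `view(rows, cols, predicate)`, and a metric view on the same filter reports only the count.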


These client requests or views are persistent and are delivered according to a publish/subscribe model. That is, because network events occur regularly, the data in the event database 28 changes frequently and clients must be informed promptly of the updates in accordance with their specified requests to be able to make proper use of the data. The object server 26 processes the standing requests at a set frequency, e.g., every five or ten seconds, and delivers the results to the clients in the form of a stream of event data which is new or updated since the requests were last processed. The default or initial frequency for processing standing requests may be preset to any desired time frequency in any desired time units, e.g., seconds or portions thereof, minutes, hours, etc., and in any desired amount.
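One cycle of this periodic processing might be sketched as follows. The function and field names are hypothetical; the essential behavior from the description is that each cycle considers only rows new or updated since the previous cycle, and each standing request receives only the changed rows matching its filter.

```python
# Hypothetical sketch of one publish/subscribe cycle: only rows changed
# since the last run are evaluated against each client's standing request.

def process_standing_requests(db_rows, requests, last_run_time):
    """Return {client_id: changed rows matching that client's filter}."""
    changed = [r for r in db_rows if r["updated_at"] > last_run_time]
    results = {}
    for client_id, predicate in requests.items():
        matched = [r for r in changed if predicate(r)]
        if matched:                      # only notify clients with updates
            results[client_id] = matched
    return results
```

In a running server this function would be invoked on a timer, e.g., every five or ten seconds, with `last_run_time` advanced after each cycle.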


Regarding communications used at the object server 26, features of the present invention include event list optimization, replicas 42 and unions 44, and clusters. For event list optimization, a typical installation has many users with the same event lists. This is a significant cause of load at the server. Approaches include evaluating a view once, then publishing the results on the bus, and using throttling to prevent overload. A replica comprises a cached copy of a remote table. Inserts, updates, and deletes are passed to the master copy of a replica. A union is a set of tables and replicas, which are presented as a single table. Replicas and unions are discussed further below. Clusters are described in the above referenced commonly owned pending application Ser. No. 09/877,619.


In accordance with one aspect of the invention, a notification program or notifier 30 is provided which manages the client requests for data from the object server 26 to efficiently distribute the responses to the client requests. The notification program 30 may be part of the object server 26 as shown or may be a separate, standalone component of the system. In accordance with processes described in greater detail in the above referenced commonly owned pending application Ser. No. 09/877,619, the notification program 30 manages the various client requests in a view list or table 32 having a number of request sets. Each request set relates to a specific type of view or data filter and may include a number of metrics or formulas which summarize data in the object server 26 and which are requested to be processed by or for clients 8.


When the notifier 30 receives a registration request from a client 8, it scans its list of existing registrations. If, as in this example, an identical registration already exists, the second registration is associated with the first. The first registration of particular summary data may be referred to as a “primary” registration or request, whereas subsequent registrations of identical summary data may be referred to as “secondary” registrations or requests. The notifier 30 periodically scans its list of primary registrations, and for each it calculates the summary data, and sends the results to all clients that have registered interest in that data. Thus, when a client 8 elects a metric view in its event list 34, the notifier 30 registers interest in that metric view with the view table 32 in the object server 26. If another client elects to view the same metric view, the notifier 30 also registers that other client's interest in the summary data in the view table 32.


As a result, this notification program 30 and view list library 32 optimizes the evaluation of summary data. Specifically, assuming that each client requests views of the same M metrics, the work done by the object server 26 is of the order of M, rather than M*(number of clients).
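The primary/secondary registration scheme can be sketched as follows; the class and method names are illustrative assumptions. The key property is the one stated above: each distinct view is evaluated once per cycle and the result fanned out, so the server's work grows with the number of distinct views M, not with M times the number of clients.

```python
# Hypothetical sketch of the notifier's registration bookkeeping: the
# first registration of a view is primary; identical later registrations
# attach to it, so each distinct view is evaluated exactly once per cycle.

class Notifier:
    def __init__(self):
        self.registrations = {}          # view_key -> list of client ids

    def register(self, client_id, view_key):
        """Register a client; return True if it became the primary."""
        subscribers = self.registrations.setdefault(view_key, [])
        subscribers.append(client_id)
        return len(subscribers) == 1

    def publish(self, evaluate):
        """Evaluate each distinct view once and fan results out."""
        evaluations = 0
        delivered = {}
        for view_key, subscribers in self.registrations.items():
            result = evaluate(view_key)  # one database evaluation per view
            evaluations += 1
            for client in subscribers:
                delivered[client] = result
        return evaluations, delivered
```

With many clients sharing one metric view, `publish` still performs a single evaluation for that view.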


Further in accordance with the invention, an automation engine 20 is provided as part of the object server 26. The automation engine 20 involves the use of “triggers” to allow an administrator to connect events, conditions and actions together. The object server 26 provides the automation module with events, triggers and actions that are fully general to support many enhancement requests as well as debugging support wherein individual triggers can be put into debug mode, and debugging entries are written to a log file.


Much of the power of the object server 26 comes from automation. Automation allows the system to respond to events, i.e., happenings of interest such as the modification of data in a table, execution of a command, or the passing of a time interval. When an event occurs, a query is executed, and then a condition is evaluated, which if true causes the execution of an action: e.g., a sequence of SQL statements, an external script, or both. Moreover, each trigger has a coupling mode that indicates when the action should be executed, i.e., now or at some later time. Triggers may be created, altered, or dropped. The SQL command syntax for implementing triggers in one embodiment of the invention is summarized in the Appendix, which forms a part hereof.


Triggers allow actions to be executed when some event of interest occurs. The triggering event may be a primitive event, a database event, or a temporal event, for example. When the event occurs, an optional evaluate clause is executed, and then a condition is evaluated, and, if true, the action is executed. A primitive event is some happening of interest assigned a unique name. On occurrence, all events carry a set of arguments, which includes zero or more <name, type> pairs. The classes of primitive events include a user class for an event created by a user and a system class for some interesting occurrence in the object server, e.g. a user has logged in, the license is about to expire, etc.
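The event/evaluate/condition/action cycle described in this paragraph can be sketched in Python. The trigger representation here is an illustrative assumption (a dict of callables), not the ObjectServer's SQL trigger machinery; it shows only the control flow: run the optional evaluate clause, bind its result, test the condition against it, and execute the action only when the condition holds.

```python
# Illustrative sketch of one trigger firing under the
# event -> evaluate -> condition -> action cycle.

def run_trigger(trigger, event, db):
    """Fire a trigger for an event; return True if the action executed."""
    # Optional evaluate clause: select rows, bound as a transition table.
    tt = trigger["evaluate"](db) if trigger.get("evaluate") else []
    # Condition is tested against the transition table.
    if trigger["condition"](tt):
        trigger["action"](tt, event, db)   # action sees tt, event, and db
        return True
    return False
```

For example, a trigger resembling the up-down correlation example below would evaluate a selection of “down” rows, fire only when the selection is non-empty, and update the matched rows in its action.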


Regarding user events, these may include user-defined events. Moreover, a user event may be created at a command line: e.g. create event bob (a INT, b REAL). A user event may be raised at the command line: e.g. raise event bob 1, 1.2.


Temporal triggers signal an event at a determined frequency from a specified absolute time until a specified absolute time.


In accordance with an aspect of the invention, triggers may be inserted into the database to handle processing of event data before it gets added to the database. The object server receives raw event data from monitored sites in the network, e.g., via monitoring devices 2, 4. The raw event data is buffered in buffer 12, and pre-processed by the automation engine 20 prior to being stored in a database. FIG. 2 shows this process schematically, wherein trigger 22 reads event data in buffer 12 before it is inserted into event database table 40. Advantageously, the event data in buffer 12 can be processed even while the database is in a read/write lockdown condition, e.g., when the database is being updated and cannot perform read or write operations from or to external devices. This avoids delays in processing the event data and communicating corresponding messages to the clients via a notifier.


An EECA model is used for triggers: event, evaluate, condition, action. When an event occurs, execute evaluate, then test the condition, and if true execute action. Transition tables may be used to communicate results between phases of a trigger. Procedural SQL syntax in actions may be used. Following are several exemplary types of triggers which may be used.


An “up-down correlation” temporal trigger may be described as follows:


NAME up-down correlation


EVENT every 10 seconds


EVALUATE select Node from alerts.status where . . . bind as tt


CONDITION when % row_count>0


ACTION


for each row r in tt


update alerts.status set . . . where Node=r.Node


A “PageOnRouterDown” temporal trigger may be described as follows.


NAME PageOnRouterDown


EVENT every 10 seconds


EVALUATE select * from alerts.status where . . . bind as tt


CONDITION true


ACTION


if % trigger.row_count>0 AND (% trigger.positive_row_count mod 5)=0


system(/usr/bin/page_message ‘some routers are down’)


end if


if % trigger.row_count=0 AND (% trigger.zero_row_count mod 5)=0


system(/usr/bin/page_message ‘no routers are down’)


end if


An example database trigger which preprocesses new event data to avoid duplication of event data in the event database is described as follows.


NAME deduplication


EVENT before reinsert on alerts.status


EVALUATE <nothing>


CONDITION true


ACTION


set old.Tally=old.Tally+1,


set old.Summary=new.Summary


set old.LastOccurrence=new.LastOccurrence


An example event trigger may be described as follows.


NAME ConnectionWatch


EVENT on event connect


EVALUATE <nothing>


CONDITION true


ACTION


insert into alerts.status . . . % event.user, % event.time


Referring to FIG. 3, a process of using triggers to preprocess event data, such as in a deduplication trigger, is as follows. A user generates the trigger based on a desired event and inserts it into the database, step 50. When new event data is received, step 52, it is checked by the trigger while still stored in the event buffer and before being added to the database, step 54. If the event defined by the trigger has occurred, step 56, e.g., is represented by the event in the new event data, the trigger tests to see whether any user-defined condition(s) is satisfied, step 58. If so, the new event data is processed in accordance with the action specified in the trigger, step 60. For this purpose, the trigger generates a transition table, i.e., an anonymous table used by the action in the trigger, and uses one or more transition variables, i.e., variables generated by a database trigger and used by an action. If the action in the trigger involves deleting the new event data or otherwise avoiding or delaying insertion of the new event data into the event database, step 62, the new event data is not added. Otherwise, or when the trigger action is not invoked, the event data is added to the event database, step 64.


Referring again to FIG. 1, in accordance with another aspect of the present invention, the event database 28 is distributed into local event databases 40 storing event data generated by a given location on the network and one or more replica event databases 42 storing event data gathered at and replicated from one or more remote network locations. When delivering on client requests, a union 44 is generated by combining the local and remote databases 40, 42, using commonly known unioning techniques.


For example, referring to FIG. 4, a distributed implementation is shown with one object server in London 26b, one in Hong Kong 26c, and another in New York 26a. Each object server 26a, 26b, 26c is configured to have a single table 40a, 40b, 40c, respectively, that holds locally generated alarms, and a replica of the table maintained by the other object server. So, for example, the New York local database table 40a holds locally generated alarms or other events in a table called alerts.NY, and has replicas of remotely generated alarms or other events in alerts.LN 42a and alerts.HK 43a. In addition it has a union 44a called alerts.status which logically contains alerts.HK, alerts.LN, and alerts.NY. A modification made to a local table is applied directly by that table. A modification to a replica normally requires the modification to be written through to its owner, e.g., an update of every row in alerts.status in Hong Kong results in a message being sent to New York requesting that the rows of alerts.NY held there be updated.
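The table layout at one site in this example can be sketched as follows. The table names (alerts.NY, alerts.LN, alerts.HK) follow the example above; the functions themselves are a hypothetical illustration of how a union presents one local table plus two replicas as a single logical table.

```python
# Illustrative sketch of one site's tables and the alerts.status union.

def make_site(local_name, remote_names):
    """One local table for locally generated alarms, plus one replica
    table per remote site."""
    tables = {local_name: []}
    for name in remote_names:
        tables[name] = []                # replica of a remote site's table
    return tables

def union(tables):
    """alerts.status: the logical concatenation of local and replica
    tables, presented as a single table."""
    rows = []
    for name in sorted(tables):          # implementation-defined order
        rows.extend(tables[name])
    return rows
```

A client query against the union at New York then sees local alarms and both sets of replicated remote alarms together.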


However, it is possible for a local monitoring location to take ownership of a replica, so that subsequent modifications to it do not require “writing through” to its owner, that is, without receiving instructions from the remote monitoring location associated therewith. This allows “follow-the-sun” configurations to be built, in which each local monitoring station controls updating of local and replica tables during its primary time period of operation, and then passes control at the end of this period to another location in a different time period, which then assumes control for its given time period of business operations.


Thus, in accordance with the invention, a replica represents a copy of a table held on a remote object server and is incrementally updated to reflect the state of the table at the owning site. Attempts to modify data held in a replica result in a command being sent to the master copy of the table requesting the modification. A site holding a replica can become the owner if necessary. The attributes of a replica include its unique name in each database and its storage class (persistent or temporary). Attempts to modify data held in a union result in the command being applied to each of its components in an implementation-defined order. The attributes of a union include its unique name in each database and its storage class (persistent or temporary). Unions are created through specification of the names of the tables and replicas, and may be altered through the addition or removal of a table or replica.
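The write-through and ownership-transfer behavior of a replica might be modeled as follows; the class and attribute names are illustrative assumptions. A modification at a non-owning site is forwarded as a command to the master copy, while taking ownership (as in a follow-the-sun handover) makes subsequent writes apply locally.

```python
# Hypothetical sketch of replica write-through and ownership transfer.

class Replica:
    def __init__(self, name, owner_site, local_site):
        self.name = name
        self.owner = owner_site          # site holding the master copy
        self.local = local_site          # site holding this replica
        self.rows = []                   # modifications applied locally
        self.forwarded = []              # commands written through to owner

    def update(self, command):
        if self.owner == self.local:
            self.rows.append(command)        # we own it: apply directly
        else:
            self.forwarded.append(command)   # write through to the master

    def take_ownership(self):
        """Follow-the-sun handover: later writes are applied locally."""
        self.owner = self.local
```

For instance, an update of alerts.NY issued in Hong Kong is forwarded to New York until Hong Kong takes ownership, after which it is applied directly.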


The features described herein of the present invention support a powerful, scalable, and fast system for delivering network event data to clients using many different client views. Three exemplary views are shown in FIGS. 5-7. FIG. 5 illustrates a client display with an ordered view in accordance with one embodiment of the present invention. The display is generated at the client location in response to the event data communicated to it by the server. Both pure and metric views are shown; these are referred to as monitors in the screen displays, meaning client-generated monitors rather than monitoring devices such as the monitors or probes which collect data from specific networks, services, or devices. The user can reorder views by drag and drop. Links appear as views. Moreover, a background image/pattern can be added.



FIG. 6 illustrates a client display with a map/geographical view in accordance with one embodiment of the present invention. The display may also be generated at the client location in response to the event data communicated to it by the server, and provide event information arranged geographically. Users can reposition views by drag and drop, resize views by dragging edges, add a link by selecting monitors to link, or add a background map. New monitors created in ordered view appear in unplaced palette.



FIG. 7 illustrates a client display with a list view in accordance with one embodiment of the present invention. This view provides event data based on the monitor. In particular, monitor information is displayed as a detailed list. One can sort on each column by selecting the column header.


Generally, the event data that is communicated to the client location may be presented in a variety of forms to suit the clients' needs.


Accordingly, it can be seen that the present invention provides improved and more efficient techniques for reducing the amount of work that needs to be performed by a database in a computer network in order to distribute event summary data to a large number of administrator clients. Moreover, the invention avoids or reduces delays experienced by event data at a database, e.g., due to delays in accessing a database. Furthermore, event data obtained from both local and remote networks is efficiently coordinated using replica and union processes. Each monitoring location in the network includes both locally generated events, and a copy of remotely-generated events which are provided and maintained by one or more remote monitoring locations.


While the invention has been described and illustrated in connection with preferred embodiments, many variations and modifications, as will be evident to those skilled in this art, may be made without departing from the spirit and scope of the invention. The invention is thus not to be limited to the precise details of methodology or construction set forth above, as such variations and modifications are intended to be included within the scope of the invention.


APPENDIX
SQL Command Syntax Summary

Triggers


CREATE [OR REPLACE] TRIGGER <name> // temporal triggers

    • [DEBUGGING <bool>]
    • [ENABLED <bool>]
    • PRIORITY <int> // 1=min, 20=max
    • [COMMENT <text>]
    • [FROM <abs_time>] [UNTIL <abs_time>] EVERY <rel_time>
    • [EVALUATE <select> BIND AS <name>]
    • [WHEN <condition>]
    • EXECUTE IMMEDIATE|DEFERRED|DETACHED
    • DECLARE
      • <decls>
    • BEGIN
      • <action>
    • END


CREATE [OR REPLACE] TRIGGER <name> // database triggers

    • [DEBUGGING <bool>]
    • [ENABLED <bool>]
    • PRIORITY <int> // 1=min, 20=max
    • [COMMENT <text>]


BEFORE|AFTER


INSERT|UPDATE|DELETE|REINSERT


ON <table>

    • FOR EACH {ROW|STATEMENT}
    • [EVALUATE <select> BIND AS <name>] // for STATEMENT triggers
    • [WHEN <condition>]
    • EXECUTE IMMEDIATE|DEFERRED|DETACHED
    • DECLARE // pre-triggers must be immediate
      • <decls>
    • BEGIN
      • <action>
    • END


CREATE [OR REPLACE] TRIGGER <name> // event triggers

    • [DEBUGGING <bool>]
    • [ENABLED <bool>]
    • PRIORITY <int> // 1=min, 20=max
    • [COMMENT <text>]
    • ON EVENT <name>
    • [EVALUATE <select> BIND AS <name>]
    • [WHEN <condition>]


EXECUTE IMMEDIATE|DEFERRED|DETACHED

    • DECLARE
      • <decls>
    • BEGIN
      • <action>
    • END


ALTER TRIGGER <name>

    • SET DEBUG <boolval>
    • SET ENABLED <boolval>
    • SET PRIORITY <int>

DROP TRIGGER <name>


Trigger Attributes


These are read-only scalar values accessible in WHEN clauses and action blocks:

    • %trigger.row_count // number of rows matched in the evaluate clause
    • %trigger.last_condition // value of the condition on the last execution
    • %trigger.num_executions // number of times the trigger has been run
    • %trigger.num_fires // number of executions where the condition was true
    • %trigger.num_zero_row_count // number of consecutive fires with zero matches in the evaluate clause
    • %trigger.num_positive_row_count // number of consecutive fires with >0 matches in the evaluate clause
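These attributes can be combined in a trigger's WHEN clause. A hypothetical fragment, suppressing a temporal trigger's action once several consecutive evaluations have matched nothing:

```sql
WHEN %trigger.num_zero_row_count < 5  // stop acting after 5 consecutive empty evaluations
```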

Claims
  • 1. A method for providing an improved network monitoring system, the network monitoring system comprising an event database for storing event data representing events occurring on the network, the event data being gathered by a plurality of monitoring devices located at a plurality of different, remote locations on the network, the method comprising:
    allowing users to insert one or more triggers into the event database, the triggers automatically initiating a programmed response at the detection of an event including gathered event data prior to insertion of the gathered event data into the event database, wherein the event is one of a primitive event, a database event or a temporal event;
    distributing the event database to a plurality of remote network locations, wherein each remote network location stores a local table containing event data generated at the remote location and one or more replica tables containing event data generated at other remote locations, and wherein a union of the local and replica tables is generated to form a combined event database at the remote location; and
    using triggers and local and replica table unions during delivery of event data to users of the network monitoring system, wherein the local and replica tables update one another when the event data of one of said tables changes.
  • 2. The method of claim 1, comprising providing a notification component for registering similar client requests for event data and substantially contemporaneously delivering requested event data to all clients having similar registered requests.
  • 3. An event database for use in a network monitoring system, the event database storing event data representing events occurring on the network, the event data being gathered by a plurality of monitor devices located at a plurality of different, remote locations on the network, the event database comprising:
    an automation engine for processing one or more triggers contained in the event database, the triggers automatically initiating a programmed response at the detection of an event including gathered event data prior to insertion of the gathered event data into the event database, wherein the event is one of a primitive event, a database event or a temporal event;
    a local table stored at each remote network location containing event data generated at the remote location; and
    one or more replica tables stored at each remote network location containing event data generated at other remote locations, wherein a union of the local and replica tables is generated to form a combined event database at the remote location, and wherein the local and replica tables update one another when the event data of one of said tables changes.
  • 4. A method for handling event data from monitored sites in a computer network, comprising:
    receiving event data from the sites at a monitoring location;
    when received at the monitoring location, pre-processing the event data before the event data is inserted into an event database to determine if an event is met as set forth in a trigger;
    if the trigger event is met, initiating an action relating to the event data, the action being defined in the trigger, wherein the event is one of a primitive event, a database event or a temporal event;
    inserting the event data into the event database, thereby producing central data; and
    transmitting the central data to each of the monitored sites;
    wherein each of the monitored sites includes locally-generated event data and a replica of the central data;
    wherein a union of the locally-generated event data and the central data is formed at each of the monitored sites; and
    wherein the monitored sites update one another when the event data of one of said monitored sites changes.
  • 5. The method of claim 4, wherein pre-processing the event data comprises determining whether the event data comprises a duplication of other event data in the event database or received at the monitoring location.
  • 6. The method of claim 5, wherein initiating the action comprises denying storage of the event data in the event database if it comprises a duplication of other event data.
  • 7. The method of claim 4, wherein if the event data does not meet the event, it is temporarily stored outside the data store.
  • 8. The method of claim 4, wherein for event data received at the monitoring location, a query is executed, and an event is evaluated, which, if true, causes the execution of the action.
  • 9. The method of claim 8, wherein the action comprises at least one of a sequence of Structured Query Language (SQL) statements and an external script.
  • 10. The method of claim 4, wherein the trigger has a coupling mode that indicates when the action should be executed.
  • 11. The method of claim 4, wherein the trigger allows an administrator of the network to connect events, conditions and actions.
  • 12. The method of claim 4, wherein the event data comprises a primitive event.
  • 13. The method of claim 4, wherein the event data comprises a database event.
  • 14. The method of claim 4, wherein the event data comprises a temporal event.
  • 15. The method of claim 4, wherein the trigger comprises a database trigger.
  • 16. The method of claim 4, wherein the trigger comprises a temporal trigger.
  • 17. The method of claim 16, wherein the temporal trigger signals an event at a determined frequency from a specified start time until a specified end time.
  • 18. The method of claim 4, wherein initiating an action comprises communicating a message in accordance with the event data to at least one customer location that has subscribed to receive the event data, and storing the event data in a data store at the monitoring location.
  • 19. The method of claim 18, wherein the pre-processing occurs, at least in part, during a period when the data store is inaccessible.
  • 20. The method of claim 18, wherein the message communicated in accordance with the event data is included in the union of at least event data of a local network and event data of a remote network.
  • 21. The method of claim 20, wherein the union comprises a union of event data tables.
  • 22. The method of claim 4, wherein the monitoring locations update one another with their event data.
  • 23. The method of claim 4, wherein at least one monitoring location is enabled to take ownership of a replica of remotely-generated event data to make modifications thereto without instructions from the remote monitoring location associated therewith.
  • 24. A system for handling event data from monitored sites in a computer network, comprising:
    means for receiving event data from the sites at a monitoring location;
    means for pre-processing the event data, when received at the monitoring location, to determine if a condition is met for setting a trigger;
    means for communicating a message, if the trigger is set, in accordance with the event data to at least one customer location that has subscribed to receive the event data, and storing the event data in a data store at the monitoring location, the trigger being in response to a primitive event, a database event or a temporal event;
    means for inserting the event data into the event database, thereby producing central data; and
    means for transmitting the central data to each of the monitored sites;
    wherein each of the monitored sites includes locally-generated event data and a replica of the central data;
    wherein a union of the locally-generated event data and the central data is formed at each of the monitored sites; and
    wherein the monitored sites update one another when the event data of one of said monitored sites changes.
US Referenced Citations (144)
Number Name Date Kind
3855456 Summers et al. Dec 1974 A
3906454 Martin Sep 1975 A
4135662 Dlugos Jan 1979 A
4410950 Toyoda et al. Oct 1983 A
4438494 Budde et al. Mar 1984 A
4503534 Budde et al. Mar 1985 A
4503535 Budde et al. Mar 1985 A
4517468 Kemper et al. May 1985 A
4545013 Lyon et al. Oct 1985 A
4568909 Whynacht Feb 1986 A
4585975 Wimmer Apr 1986 A
4591983 Bennett et al. May 1986 A
4622545 Atkinson Nov 1986 A
4648044 Hardy et al. Mar 1987 A
4727545 Glackemeyer et al. Feb 1988 A
4817092 Denny Mar 1989 A
4823345 Daniel et al. Apr 1989 A
4866712 Chao Sep 1989 A
4881230 Clark et al. Nov 1989 A
4914657 Walter et al. Apr 1990 A
4932026 Dev et al. Jun 1990 A
4935876 Hanatsuka Jun 1990 A
5063523 Vrenjak Nov 1991 A
5107497 Lirov et al. Apr 1992 A
5109486 Seymour Apr 1992 A
5123017 Simpkins et al. Jun 1992 A
5125091 Staas, Jr. et al. Jun 1992 A
5133075 Risch Jul 1992 A
5159685 Kung Oct 1992 A
5179556 Turner Jan 1993 A
5204955 Kagei et al. Apr 1993 A
5214653 Elliott, Jr. et al. May 1993 A
5247517 Rose et al. Sep 1993 A
5261044 Dev et al. Nov 1993 A
5293629 Conley et al. Mar 1994 A
5295244 Dev et al. Mar 1994 A
5309448 Bouloutas et al. May 1994 A
5321837 Daniel et al. Jun 1994 A
5375070 Hershey et al. Dec 1994 A
5432934 Levin et al. Jul 1995 A
5436909 Dev et al. Jul 1995 A
5483637 Winokur et al. Jan 1996 A
5485455 Dobbins et al. Jan 1996 A
5491694 Oliver et al. Feb 1996 A
5495470 Tyburski et al. Feb 1996 A
5504921 Dev et al. Apr 1996 A
5521910 Matthews May 1996 A
5528516 Yemini et al. Jun 1996 A
5559955 Dev et al. Sep 1996 A
5590120 Vaishnavi et al. Dec 1996 A
5627819 Dev et al. May 1997 A
5646864 Whitney Jul 1997 A
5649103 Datta et al. Jul 1997 A
5655081 Bonnell et al. Aug 1997 A
5664220 Itoh et al. Sep 1997 A
5666481 Lewis Sep 1997 A
5673264 Hamaguchi Sep 1997 A
5675741 Aggarwal et al. Oct 1997 A
5687290 Lewis Nov 1997 A
5696486 Poliquin et al. Dec 1997 A
5706436 Lewis et al. Jan 1998 A
5722427 Wakil et al. Mar 1998 A
5727157 Orr et al. Mar 1998 A
5734642 Vaishnavi et al. Mar 1998 A
5748781 Datta et al. May 1998 A
5751933 Dev et al. May 1998 A
5751965 Mayo et al. May 1998 A
5754532 Dev et al. May 1998 A
5764955 Doolan Jun 1998 A
5768501 Lewis Jun 1998 A
5777549 Arrowsmith et al. Jul 1998 A
5790546 Dobbins et al. Aug 1998 A
5791694 Fahl et al. Aug 1998 A
5793362 Matthews et al. Aug 1998 A
5812750 Dev et al. Sep 1998 A
5822305 Vaishnavi et al. Oct 1998 A
5832503 Malik et al. Nov 1998 A
5872911 Berg Feb 1999 A
5872928 Lewis et al. Feb 1999 A
5872931 Chivaluri Feb 1999 A
5889953 Thebaut et al. Mar 1999 A
5903893 Kleewein et al. May 1999 A
5907696 Stilwell et al. May 1999 A
5940376 Yanacek et al. Aug 1999 A
5941996 Smith et al. Aug 1999 A
5970984 Wakil et al. Oct 1999 A
5980984 Modera et al. Nov 1999 A
5987442 Lewis et al. Nov 1999 A
6000045 Lewis Dec 1999 A
6003090 Puranik et al. Dec 1999 A
6006016 Faigon et al. Dec 1999 A
6014697 Lewis et al. Jan 2000 A
6026442 Lewis et al. Feb 2000 A
6041383 Jeffords et al. Mar 2000 A
6047126 Imai Apr 2000 A
6049828 Dev et al. Apr 2000 A
6057757 Arrowsmith et al. May 2000 A
6064304 Arrowsmith et al. May 2000 A
6064986 Edelman May 2000 A
6064996 Yamaguchi et al. May 2000 A
6084858 Matthews et al. Jul 2000 A
6115362 Bosa et al. Sep 2000 A
6131112 Lewis et al. Oct 2000 A
6138122 Smith et al. Oct 2000 A
6141720 Jeffords et al. Oct 2000 A
6170013 Murata Jan 2001 B1
6185613 Lawson et al. Feb 2001 B1
6199172 Dube et al. Mar 2001 B1
6205563 Lewis Mar 2001 B1
6209033 Datta et al. Mar 2001 B1
6216168 Dev et al. Apr 2001 B1
6233623 Jeffords et al. May 2001 B1
6243747 Lewis et al. Jun 2001 B1
6253211 Gillies et al. Jun 2001 B1
6255943 Lewis et al. Jul 2001 B1
6324530 Yamaguchi et al. Nov 2001 B1
6324590 Jeffords et al. Nov 2001 B1
6336138 Caswell et al. Jan 2002 B1
6341340 Tsukerman et al. Jan 2002 B1
6349306 Malik et al. Feb 2002 B1
6359976 Kalyanpur et al. Mar 2002 B1
6373383 Arrowsmith et al. Apr 2002 B1
6374293 Dev et al. Apr 2002 B1
6381639 Thebaut et al. Apr 2002 B1
6392667 McKinnon et al. May 2002 B1
6421719 Lewis et al. Jul 2002 B1
6430712 Lewis Aug 2002 B2
6437804 Ibe et al. Aug 2002 B1
6502079 Ball et al. Dec 2002 B1
6510478 Jeffords et al. Jan 2003 B1
6603396 Lewis et al. Aug 2003 B2
6651062 Ghannam et al. Nov 2003 B2
6799209 Hayton Sep 2004 B1
20010013107 Lewis Aug 2001 A1
20010042139 Jeffords et al. Nov 2001 A1
20010047409 Datta et al. Nov 2001 A1
20010047430 Dev et al. Nov 2001 A1
20010052085 Dube et al. Dec 2001 A1
20020032760 Matthews et al. Mar 2002 A1
20020050926 Lewis et al. May 2002 A1
20020075882 Donis et al. Jun 2002 A1
20020184528 Shevenell et al. Dec 2002 A1
20020188584 Ghannam et al. Dec 2002 A1
20030110396 Lewis et al. Jun 2003 A1
Foreign Referenced Citations (31)
Number Date Country
0 209 795 Jan 1987 EP
0 319 998 Jun 1989 EP
0 338 561 Oct 1989 EP
0 342 547 Nov 1989 EP
0 616 289 Sep 1994 EP
0 686 329 Dec 1995 EP
WO 8907377 Aug 1989 WO
WO 9300632 Jan 1993 WO
WO 9520297 Jul 1995 WO
WO 9609707 Mar 1996 WO
WO 9631035 Oct 1996 WO
WO 9716906 May 1997 WO
WO 9729570 Aug 1997 WO
WO 9737477 Oct 1997 WO
WO 9744937 Nov 1997 WO
WO 9842109 Sep 1998 WO
WO 9844682 Oct 1998 WO
WO 9852322 Nov 1998 WO
WO 9927682 Jun 1999 WO
PCT/US99/31135 Dec 1999 WO
WO 0013112 Mar 2000 WO
WO 0072183 Nov 2000 WO
WO 0147187 Jun 2001 WO
WO 0186380 Nov 2001 WO
WO 0186443 Nov 2001 WO
WO 0186444 Nov 2001 WO
WO 0186775 Nov 2001 WO
WO 0186844 Nov 2001 WO
WO 0206971 Jan 2002 WO
WO 0206972 Jan 2002 WO
WO 0206973 Jan 2002 WO