A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
The invention disclosed herein relates generally to network monitoring systems. More particularly, the present invention relates to methods and systems for efficiently distributing data relating to events occurring on a network to a large number of users requesting the data.
Maintaining the proper operation of services provided over a network is usually an important but difficult task. Service administrators are often called upon to react to a service failure by identifying the problem that caused the failure and then taking steps to correct the problem. The expense of service downtime, the limited supply of network engineers, and the competitive nature of today's marketplace have forced service providers to rely more and more heavily on software tools to keep their networks operating at peak efficiency and to deliver contracted service levels to an expanding customer base. Accordingly, it has become vital that these software tools be able to manage and monitor a network as efficiently as possible.
A number of tools are available to assist administrators in completing these tasks. One example is the NETCOOL® suite of applications available from Micromuse Inc. which allows network administrators to monitor activity on networks such as wired and wireless voice communication networks, intranets, wide area networks, or the Internet. The NETCOOL® suite includes probes and monitors which log and collect network event data, including network occurrences such as alerts, alarms, or other faults, and store the event data in a database on a server. The system then reports the event data to network administrators in graphical and text based formats in accordance with particular requests made by the administrators. Administrators are thus able to observe desired network events on a real-time basis and respond to them more quickly. The NETCOOL® software allows administrators to request event data summarized according to a desired metric or formula, and further allows administrators to select filters in order to custom design their own service views and service reports.
In a demanding environment, there are many tens or even hundreds of clients viewing essentially the same filtered or summarized event data. The work required to derive such data for a single user is thus replicated for all users. If there are N users, each viewing M items of metric or summary data, the work done by the database is of the order of N*M. This limits the number of clients who can be connected to a single database and the frequency with which such filtered or summarized data can be provided.
There is therefore a need for improved and more efficient techniques for reducing the amount of work that needs to be performed by the database in order to distribute event summary data to a large number of administrator clients.
It is an object of the present invention to provide improved and more efficient techniques for distributing network event data to a large number of clients.
The above and other objects are achieved by a method for preparing to efficiently distribute data to be extracted from a data store to a plurality of clients and a method for distributing such prepared data to the clients. One method for preparing the data involves storing as primary requests one or more client requests for data to be extracted from the data store. For an additional client request for data to be extracted from the data store, the additional request is compared to the stored primary requests to determine whether the additional request matches a stored primary request in accordance with a given criterion. If the additional request matches a stored primary request, the additional client request is stored as a secondary request associated with the matching primary request. If the additional request does not match a stored primary request, the additional request is stored as an additional primary request. The matching client requests may come from or relate to different clients who are to receive the requested data. As a result of this method, client requests which match one another can be processed at once and distributed to all clients registering the request.
In some embodiments, the client requests each contain a filter for extracting a subset of data from the data store. The additional request is then compared to the primary requests by comparing the filter in the additional request to the stored primary request filters to determine whether the additional request filter matches any stored primary request filter. Alternatively or additionally, some or all of the client requests may contain a request for summary data to be extracted from the data store and processed in accordance with a metric or formula. The additional request may then be alternatively or additionally compared to the stored primary requests by comparing the additional request metric to the stored primary request metrics to determine whether the additional request metric matches any stored primary request metric.
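By way of illustration only, the comparison and storage of primary and secondary requests might be sketched as follows; the class and field names are hypothetical and are not those of any actual implementation:

```python
# Hypothetical sketch of primary/secondary request registration.
# An additional request matches a stored primary when both its filter
# and its metric (if any) are identical; a matching request is stored
# as a secondary associated with that primary.

class Registry:
    def __init__(self):
        # each primary: {"filter": ..., "metric": ..., "clients": [...]}
        self.primaries = []

    def register(self, client_id, filt, metric=None):
        for p in self.primaries:
            if p["filter"] == filt and p["metric"] == metric:
                p["clients"].append(client_id)   # stored as secondary
                return "secondary"
        # no match: stored as an additional primary request
        self.primaries.append(
            {"filter": filt, "metric": metric, "clients": [client_id]})
        return "primary"

reg = Registry()
print(reg.register("admin1", "severity > 3"))            # primary
print(reg.register("admin2", "severity > 3"))            # secondary
print(reg.register("admin3", "severity > 3", "count"))   # primary
```

Because matching requests share one primary, the query for that primary need only be evaluated once per update cycle regardless of how many clients registered it.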
One method for distributing data extracted from a data store to a plurality of clients involves storing a set of client requests received from a plurality of clients, the set comprising a primary client request to provide data to a first client in association with one or more secondary client requests to provide data to one or more second clients different than the first client, the secondary client requests each matching the primary client request in accordance with a given criterion. A plurality of such sets may be stored to accommodate a plurality of different criteria. The method further involves extracting data from the data store in accordance with the primary client request in each set and distributing the extracted data to the first client making the primary request and to the second client or clients making the associated secondary requests.
In some embodiments, the primary and secondary client requests each contain a filter for extracting a subset of data from the data store, and the set is identified through the filter. The filter is further used to extract the data from the data store. In addition, at least one of the client requests in the set may contain a metric for summarizing data extracted from the data store, and the metric is used in summarizing the extracted data. The processed, summarized data is then distributed to any first or second client whose client request contains the metric.
In some embodiments, the data is extracted from the data store and distributed to clients repeatedly at a first time interval or frequency. This embodiment applies, for example, when the data store is regularly updated with new data that the clients would want or need to be aware of. The length of time required to extract and distribute data to all first and second clients is measured or computed. If the determined time length exceeds a threshold, the first time interval is increased (i.e., the update frequency is decreased), thus resulting in less frequent updating of clients.
Some of the above and other objects of the present invention are also achieved by a system for efficient distribution of network event data. The system includes a data store such as a database containing data relating to events occurring on the network and a library for storing client requests for data from the data store. The client requests identify a plurality of clients and are ordered as one or more sets of client requests, each set containing one or more client requests matching a given criterion. The system further contains a notification system for distributing data extracted from the data store in accordance with the client requests to the plurality of clients.
The methods and systems described herein are particularly useful in the context of a network management system which has a database storing regularly occurring events monitored or collected from the network. As explained above, a large network is monitored by many client administrators and produces an enormous amount of event data. To be effective, the client administrators need their data to be as up-to-date as possible, and thus need frequent updates on network events. In addition, the administrators also need to have this data filtered and summarized to suit their needs, and provide a number of persistent requests to that end. Given the large amount of data in the event database, these requirements present processing and bandwidth issues which are difficult to overcome. The methods and systems described above and further below for supporting very efficient distribution of the event data from the database go a long way towards providing that much needed solution.
The invention is illustrated in the figures of the accompanying drawings which are meant to be exemplary and not limiting, in which like references are intended to refer to like or corresponding parts, and in which:
In accordance with the invention, methods and systems are described herein, with reference to the Figures, for providing efficient delivery of data from a database to a number of clients. In particular, the description herein focuses on a network monitoring system in which data is captured relating to events such as faults or alarms occurring on a computer network and is distributed to a number of administrator clients responsible for monitoring the network and preventing or correcting such faults.
Referring then to
At a time for processing requests, step 18, the request in each request set is processed, step 20, by, for example, querying for the requested data from a database or cache and processing it in any fashion specified in the request. As described further below, the processing of requests may be performed in parallel with or as a concurrent process interleaved with the processing of new client requests. For example, one or more request sets may be processed and distributed to clients, the process may then return to receiving and processing new requests, and then other request sets may be distributed. The results of each processed request are distributed to all clients having requests associated with the processed request, step 22. If there are any more request sets to be processed, step 24, the processing and distribution is repeated. When all request sets have been processed, or in between the processing of each request set, new client requests may be received and processed.
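The processing loop of steps 18 through 24 can be sketched as follows; the structures and function names are illustrative assumptions, not part of the actual system:

```python
# Sketch of the request-processing sweep: each request set holds one
# query plus the IDs of every client that registered a matching
# request.  The query is evaluated once, and the single result fans
# out to all associated clients.
def sweep(request_sets, run_query, send):
    for rs in request_sets:
        result = run_query(rs["query"])       # process the set (step 20)
        for client in rs["clients"]:          # distribute to all (step 22)
            send(client, result)

sent = []
sweep(
    [{"query": "q1", "clients": ["a", "b"]},
     {"query": "q2", "clients": ["c"]}],
    run_query=lambda q: q.upper(),            # stands in for a database query
    send=lambda c, r: sent.append((c, r)),
)
print(sent)  # [('a', 'Q1'), ('b', 'Q1'), ('c', 'Q2')]
```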
Embodiments of this process may be used in a variety of data distribution systems. However, a version of this process is particularly useful in a network monitoring system where clients subscribe to updates regarding events occurring in a large network and need to view the updated data with great frequency. With reference to
The probes 2 are portions of code that collect events from network management data sources 6, APIs, databases, network devices 5, log files, and other utilities. Monitors 4 are software applications that simulate network users to determine response times and availability of services 7 on the network. Other components may be used to collect and report on events occurring in the network or related devices or services.
The network management system monitors and reports on activity on a computer, telecommunications, or other type of network. In this context, clients 8 are typically administrators who make requests for event data which they need to monitor on a regular basis. Clients may elect to see all event activity on the network. More typically for larger networks, clients will only want to see event data occurring on particular parts of the network for which they are responsible or which may affect their portion of the network. In addition, clients may only want to see summaries of their relevant part of the event data, such as event counts, sums, averages, minimums, maximums, or other distributions of event data. Clients input the various requests into an event list 34, with each request representing and being sometimes referred to herein as a particular view on the data.
Event data is stored in the event database 28 of one embodiment in a number of rows and columns, with each row representing an event and the columns storing fields of data relating to the event, e.g., location, type, time, severity, etc. As used herein, then, a view is generally a mechanism for selecting columns from the database and may also optionally include a filter. A filter is generally a mechanism for excluding rows of data in the database based on column values. Views may therefore be based on filters. Filters may also be based on other filters and other views. A metric view is generally a type of view which provides summary information on the number of rows in a view rather than the actual data, and usually requires some arithmetic processing on the number of rows.
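These notions of filter, view, and metric view may be illustrated on a toy event table; the column names and values below are examples only:

```python
# Illustration (not the actual schema): events as rows whose columns
# store fields such as location, type, and severity.
events = [
    {"location": "nyc", "type": "link_down", "severity": 5},
    {"location": "lon", "type": "link_down", "severity": 2},
    {"location": "nyc", "type": "cpu_high",  "severity": 4},
]

# A filter excludes rows based on column values.
nyc = [e for e in events if e["location"] == "nyc"]

# A view selects columns, here built on top of the filter.
view = [(e["type"], e["severity"]) for e in nyc]

# A metric view summarizes the rows in a view rather than
# returning the actual data.
metric = {"count": len(nyc),
          "max_severity": max(e["severity"] for e in nyc)}

print(view)    # [('link_down', 5), ('cpu_high', 4)]
print(metric)  # {'count': 2, 'max_severity': 5}
```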
These client requests or views are persistent and are delivered according to a publish/subscribe model. That is, because network events occur regularly, the data in the event database 28 changes frequently and clients must be informed promptly of the updates in accordance with their specified requests to be able to make proper use of the data. The object server 26 processes the standing requests at a set frequency, e.g., every five or ten seconds, and delivers the results to the clients in the form of a stream of event data which is new or updated since the requests were last processed. The default or initial frequency for processing standing requests may be preset to any desired time frequency in any desired time units, e.g., seconds or portions thereof, minutes, hours, etc., and in any desired amount.
In accordance with the invention, a notification program or notifier 30 is provided which manages the client requests for data from the object server 26 to efficiently distribute the responses to the client requests. The notification program 30 may be part of the object server 26 as shown or may be a separate, standalone component of the system. In accordance with processes described in greater detail below, the notification program 30 manages the various client requests in a view list or table 32 having a number of request sets. Each request set relates to a specific type of view or data filter and may include a number of metrics or formulas which summarize data in the object server 26 and which are requested to be processed by or for clients 8. A process for organizing or ordering views, filters and metrics is described below with reference to
Thus, when a client 8 elects a metric view in its event list 34, the notifier 30 registers interest in that metric view with the view table 32 in the object server 26. If another client elects to view the same metric view, notifier 30 also registers that other client's interest in the summary data in the view table 32.
When the notifier 30 receives a registration request from a client 8, it scans its list of existing registrations. If, as in this example, an identical registration already exists, the second registration is associated with the first. The first registration of particular summary data may be referred to as a “primary” registration or request, whereas subsequent registrations of identical summary data may be referred to as “secondary” registrations or requests. The notifier 30 periodically scans its list of primary registrations, and for each it calculates the summary data and sends the results to all clients that have registered interest in that data. As a result, this notification program 30 and view list library 32 optimize the evaluation of summary data. Specifically, assuming that each client requests views of the same M metrics, the work done by the object server 26 is of the order of M, rather than M*(number of clients).
As can be seen, the notifier 30 manages several ongoing processes, including the processes of registering new client views and keeping the sets of views in proper order and of processing event data updates to clients which request them. In one embodiment, the notifier 30 employs several thread pools to manage these ongoing, concurrent processes, including, for example, threads to process new event data coming into the event database 28 and to place the event data into output stream caches, to process new views received from clients and manage the data structures in the view tables 32, and to manage the flow of output streams with cached updated event data to clients. One advantage of using two or more thread pools is that the notifier 30 can output data without locking up the database. Where it does lock the database, it uses read locks which can be held by multiple threads. This improves the overall scalability and efficiency of the object server 26.
Each client's event list needs to have associated views updated at the same time, e.g., a view and the equivalent metric view must be updated at the same time. Otherwise, the event list user will see inconsistent data being displayed. Similarly, if two views are the same except for the order in which the columns are displayed, then both must be updated at the same time. The notifier 30 therefore keeps track of which views and metric views should be linked in this way, and sends out the appropriate updates for all of these at approximately the same time.
When a client subscribes to a view, a check is made to see if another subscription already exists to this view (or an identical view with a different name). If it does, then the client is returned the same stream and data is only output once on that stream. Both clients can then listen or subscribe to this stream and pick up the same data. As views and streams are created and destroyed by clients, the notifier 30 maintains the data structures needed to perform this de-duplication of output data in the view tables 32. If a view is destroyed, the associated streams are destroyed. If a client disconnects, its streams are removed unless they are still used by other clients. If no more clients are subscribing to a stream then it is deleted.
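The stream de-duplication and cleanup rules can be sketched as follows, with hypothetical names; one stream exists per distinct view, shared by all its subscribers, and is deleted when the last subscriber goes away:

```python
# Sketch of stream sharing and lifecycle management (illustrative only).
class Streams:
    def __init__(self):
        self.by_view = {}        # view key -> set of subscribed clients

    def subscribe(self, client, view_key):
        # a second subscription to the same view returns the same stream
        self.by_view.setdefault(view_key, set()).add(client)
        return view_key

    def disconnect(self, client):
        for key in list(self.by_view):
            self.by_view[key].discard(client)
            if not self.by_view[key]:      # no more subscribers
                del self.by_view[key]      # the stream is deleted

s = Streams()
s.subscribe("a", "sev>3")
s.subscribe("b", "sev>3")      # shares the existing stream
s.disconnect("a")
print("sev>3" in s.by_view)    # True: stream still used by "b"
s.disconnect("b")
print("sev>3" in s.by_view)    # False: last subscriber gone, deleted
```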
As will be understood, the goals of the process in
Referring then to
If a set having the filter is found, then, if the current view is not a metric view, i.e., is a pure view, step 78, the client ID is added to the pure view subset for this filter, step 80. If the current view is a metric view, then the notifier looks for a matching metric in the matching filter set Sf, step 82. If no matching metric is found, the current metric is used to establish a new metric set for this filter, step 84, and the client ID is added to the new metric set m, step 86. If the same metric is found for the same filter, the client ID for the current view is added to that set, step 88. As a result, every client request or view is associated with a set, and all views are ordered in sets.
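The resulting organization (one set per filter, holding a pure-view subset and one subset per distinct metric) might be pictured as follows; the structure and names are illustrative:

```python
# Illustrative view-table structure:
# filter -> {"pure": [client ids], "metrics": {metric: [client ids]}}
view_table = {}

def add_view(filt, metric, client_id):
    sf = view_table.setdefault(filt, {"pure": [], "metrics": {}})
    if metric is None:
        sf["pure"].append(client_id)           # pure view (steps 78-80)
    else:
        # new metric establishes a new subset; an existing one is joined
        sf["metrics"].setdefault(metric, []).append(client_id)  # steps 82-88

add_view("node='nyc'", None, "c1")
add_view("node='nyc'", "count", "c2")
add_view("node='nyc'", "count", "c3")
print(view_table["node='nyc'"]["metrics"]["count"])  # ['c2', 'c3']
```

Every client view thus lands in exactly one subset, keyed first by filter and then, for metric views, by metric.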
The notifier also keeps track of a number of different types of streams. So far, only the streams used by clients, such as through event lists, have been described. There are also occasions where a client requires all the events in order rather than just the net change, e.g., a gateway, positioned between clients and the object server, that performs historical recording of events. When requesting a stream, a client can therefore also request that all state changes be recorded and sent out rather than just the net changes.
Streams may also be created with a store and forward (SAF) option. This allows events to be stored when a client loses its connection with the object server. When the client reconnects, it is then sent all of the missing events. In this case, disconnection of the client does not cause the stream to be deleted; instead, the stream continues to output data, but the data is written to a file until the client reconnects. This type of stream is used both by gateways and between object servers in a cluster, as explained further below.
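The store and forward behavior can be sketched as follows; a simple in-memory list stands in for the on-disk SAF file, and the names are illustrative:

```python
# Sketch of a SAF stream: while the client is disconnected, output is
# buffered rather than dropped; on reconnect the missed events are
# replayed in order before live delivery resumes.
class SAFStream:
    def __init__(self):
        self.connected = True
        self.buffer = []          # stands in for the on-disk SAF file

    def output(self, event, deliver):
        if self.connected:
            deliver(event)
        else:
            self.buffer.append(event)   # stored instead of dropped

    def reconnect(self, deliver):
        self.connected = True
        for event in self.buffer:       # replay the missing events
            deliver(event)
        self.buffer.clear()

got = []
s = SAFStream()
s.connected = False                     # client loses its connection
s.output("e1", got.append)
s.output("e2", got.append)
s.reconnect(got.append)                 # buffered events are sent on
print(got)   # ['e1', 'e2']
```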
A client may also require data to be sent out as soon as it changes rather than have it cached and sent out at a later time. In this case, data is not cached; instead it is sent out immediately. This type of stream is used between object servers in a cluster, as explained further below.
When an event list user makes a change to a row within a displayed view, e.g., deletes a row, the user expects to see the change appear in the event list on screen and also in any associated views and metric views that are open. The notifier 30 therefore supports an option to flush a stream. The client 8 executes a SQL change to the database and waits for the SQL command complete acknowledgement. It then sends a flush stream command to the notifier 30 specifying the view within which the change was made and the view's stream. The notifier 30 forces processing of all pending messages in the message queue to ensure that the SQL change is in the stream's cache, and then sends out data on the stream and on all associated views' streams. The change will then appear in the user's views. Updating of all other caches continues as before, but the flushed caches will not be updated again until the update time (measured since the flush) has expired. Turning now to
The amount of time used in a single sweep through S is measured, step 116. This may be done by clocking the actual time taken to process and distribute the requests, or may be based on predictions. If it exceeds a metric view threshold tm, step 118, the time interval t between processing sets is incremented, e.g., by one second, step 120. This process of adjusting or throttling the time interval for notification helps limit processing requirements and prevent overloads.
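The throttling rule of steps 116 through 120 amounts to the following sketch; the parameter names are illustrative:

```python
# Sketch of notification throttling: if one sweep through the request
# sets takes longer than the metric view threshold, back off the
# interval between sweeps, e.g., by one second.
def adjust_interval(sweep_seconds, interval, threshold, step=1):
    """Return the (possibly increased) interval between sweeps."""
    if sweep_seconds > threshold:        # threshold exceeded (step 118)
        interval += step                 # increment interval t (step 120)
    return interval

print(adjust_interval(sweep_seconds=3.2, interval=5, threshold=2.0))  # 6
print(adjust_interval(sweep_seconds=1.1, interval=5, threshold=2.0))  # 5
```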
These timing issues are graphically illustrated in
In accordance with additional aspects of the invention, object servers and their notifiers may be arranged in clusters to provide for backup of the object server. Each cluster consists of a master object server and one or more slave object servers. The slaves must be prepared to substitute themselves for the master in the event the master becomes temporarily or permanently unavailable. Similarly, the master must stay in sync with the slaves even during its own down time. These requirements apply not only to the event databases but also to the notifiers and view tables. Methods for keeping master and slave object servers and notifiers in sync are now described.
An exemplary cluster is shown in
When the master 26A fails or is taken offline, a quorum voting scheme is used to elect the new master, e.g., one of the two slaves 26B, 26C takes over as master provided that the two slaves are able to communicate between themselves (i.e. they are part of the majority). A node that is part of a minority (i.e. it cannot communicate with other nodes) is not a candidate for master since it cannot guarantee that a majority does not exist.
Consider the three object servers 26A, 26B, and 26C, where 26A is the master. When 26A fails, 26B and 26C are informed of this. Both 26B and 26C now decide whether they will become master. A node makes this decision using its state and priority, e.g., if all slave nodes have active and current states, the node with the highest assigned priority becomes master. Assuming that 26B decides that it will become master, it publishes this request and becomes the new master provided a majority vote is made.
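A simplified sketch of this election rule, under the assumption that each node knows which cluster members it can reach, might read:

```python
# Sketch of quorum-based master election: a node may stand for master
# only if it can reach a majority of the cluster; among eligible active
# candidates, the highest-priority node wins.
def elect(candidates, cluster_size, reachable):
    # reachable: node id -> set of cluster members it can contact,
    # including itself
    majority = cluster_size // 2 + 1
    eligible = [c for c in candidates
                if c["state"] == "active"
                and len(reachable[c["id"]]) >= majority]
    if not eligible:
        return None          # a minority node cannot claim mastership
    return max(eligible, key=lambda c: c["priority"])["id"]

slaves = [{"id": "26B", "state": "active", "priority": 2},
          {"id": "26C", "state": "active", "priority": 1}]
# 26B and 26C can reach each other: 2 of the 3 cluster members
links = {"26B": {"26B", "26C"}, "26C": {"26B", "26C"}}
print(elect(slaves, 3, links))   # 26B
```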
The notifier 30A uses cluster tables to record all views, filters and all streams as they are created—in the order in which they are created. The client cluster code also records the client connection data in a cluster table in the same way. These cluster tables are replicated across the cluster as with any other cluster table. As client connections, views and filters are created and dropped on the master, the changes to the appropriate cluster tables are monitored by each slave. These are used to create or drop the same client connections, views and filters on the slaves. In this way the slaves stay in sync with the master.
When a stream subscription request comes in from a client it is read by all of the object servers in the cluster 56. The slaves finish creating any pending clients, views or filters and then check that they would then be able to create the stream. The stream is then created on the slave but not activated for output, and an acknowledgement is sent to the master. Once acknowledgements have been received by the master from all slaves in the current cluster and it has created the stream, it sends the stream (that the client must listen to) to the client. The output streams are also sent to the slave notifiers, which receive the processed event data as inputs to the slave output streams, which are not themselves distributed to clients as long as the master is in operation. If fail over occurs, e.g., the master crashes or otherwise becomes inaccessible or unable to process data normally, then one slave is elected as the new master. It then activates all of the streams and takes over updating them. Pending messages from probes, monitors, etc. are not processed until the notifier has activated all streams and resumed notification.
When a gateway uses a SAF stream it requires full historical data. If the gateway goes down, SAF buffers up the data being transmitted at the object server until the gateway comes back again. The data is buffered to disk so that if the master also goes down then, when it comes back up again, the data can still be retransmitted. However, in a cluster this would mean a slave would take over transmitting and therefore buffering the historical data to the gateway. When the original master comes back up again sometime later and transmits its buffered data, this would result in mis-ordered data being sent to the gateway.
To solve this problem, a count is added to each row of gateway data. The master sends out this sequence count to the gateway and to the slaves, which also listen on the same subscribed stream. This allows the slaves to record the sequence count, as it is incremented. If fail over occurs and a slave takes over notification it continues the count from the last output value. When a gateway comes back up again and gets sent the SAF data from the current master and the old master (when it comes back up again), it can then use the sequence count to order the data correctly.
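A sketch of how a gateway might use the sequence count to merge and order SAF data received from the old and the new master follows; the function and variable names are illustrative:

```python
# Sketch of sequence-count merging: rows carrying the same count are
# duplicates and collapse to one entry, and sorting on the count
# restores the correct order across the two masters' transmissions.
def merge_saf(*streams):
    rows = {}
    for stream in streams:
        for seq, data in stream:   # (sequence count, row data) pairs
            rows[seq] = data       # duplicate counts collapse
    return [rows[seq] for seq in sorted(rows)]

old_master = [(1, "a"), (2, "b")]
new_master = [(2, "b"), (3, "c")]  # the slave continued from count 2
print(merge_saf(old_master, new_master))   # ['a', 'b', 'c']
```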
While the invention has been described and illustrated in connection with preferred embodiments, many variations and modifications, as will be evident to those skilled in this art, may be made without departing from the spirit and scope of the invention. The invention is thus not to be limited to the precise details of methodology or construction set forth above, as such variations and modifications are intended to be included within its scope.
Publication: US 20030014462 A1, Jan. 2003, US.