Claims
- 1. A rate limiting method for a pipelined system having a plurality of stages, said method comprising the steps of:
choosing a heuristic for discarding data at each of said plurality of stages, wherein said heuristic is appropriate for said each stage; applying a rate limiting process at said each stage, wherein said rate limiting process incorporates said appropriate heuristic.
- 2. The method of claim 1, further comprising any of the steps of:
discarding data based on connection boundaries; discarding connection details, but maintaining counts of connections; and abstracting connection information over a longer time interval.
- 3. The method of claim 1, wherein said pipelined system is a passive network monitoring pipelined system.
- 4. An apparatus for rate limiting a pipelined system having a plurality of stages, said apparatus comprising:
means for choosing a heuristic for discarding data at each of said plurality of stages, wherein said heuristic is appropriate for said each stage; means for applying a rate limiting process at said each stage, wherein said rate limiting process incorporates said appropriate heuristic.
- 5. The apparatus of claim 4, further comprising any of:
means for discarding data based on connection boundaries; means for discarding connection details, but maintaining counts of connections; and means for abstracting connection information over a longer time interval.
- 6. The apparatus of claim 4, wherein said pipelined system is a passive network monitoring pipelined system.
- 7. An apparatus for rate limiting to control amounts of data in a passive network monitoring pipelined system, said apparatus comprising:
a packet filter module having a first rate limiting mechanism for discarding particular incoming packets and passing on non-discarded packets; a monitor receiving said non-discarded packets from said packet filter module, said monitor processing said non-discarded packets and outputting corresponding events, said monitor having a second rate limiting mechanism for discarding particular incoming events and passing on non-discarded events; a logger module having a third rate limiting mechanism for discarding unwanted data and passing on said non-discarded events, said logger module within a policy monitor module receiving said outputted corresponding events from said monitor; and a reporting module, having a fourth rate limiting mechanism wherein said non-discarded events are received from said logger module and combined with like events in a “roll-up” over a time interval or based on the hierarchy of objects to meet storage requirements.
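The four-stage pipeline of claim 7 can be sketched as a chain of stages, each of which passes or discards items. This is a minimal illustrative model, assuming simple per-interval count limits; the stage names follow the claim, but the limits and the `Stage`/`run_pipeline` API are hypothetical, not the patented implementation.

```python
class Stage:
    """A pipeline stage that discards items once its per-interval limit is hit."""
    def __init__(self, name, limit):
        self.name = name
        self.limit = limit        # max items passed per interval (assumed policy)
        self.passed = 0
        self.discarded = 0

    def offer(self, item):
        if self.passed < self.limit:
            self.passed += 1
            return item           # passed downstream
        self.discarded += 1
        return None               # rate limited: discarded

def run_pipeline(stages, items):
    """Push each item through every stage in turn; a discard stops it early."""
    survivors = []
    for item in items:
        for stage in stages:
            item = stage.offer(item)
            if item is None:
                break
        else:
            survivors.append(item)
    return survivors

stages = [Stage("packet_filter", 8), Stage("monitor", 6),
          Stage("logger", 4), Stage("reporter", 4)]
out = run_pipeline(stages, list(range(10)))  # -> [0, 1, 2, 3]
```

Each stage applies its own limit independently, so data volume shrinks monotonically down the pipeline, which is the point of placing a rate limiting mechanism at every stage.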
- 8. The apparatus of claim 7, wherein said second rate limiting mechanism discards particular packets on input when input packet rate exceeds a specified limit.
- 9. The apparatus of claim 7, wherein said second rate limiting mechanism chooses not to send events to a security manager system when the rate at which such events arrive exceeds a specified limit.
- 10. The apparatus of claim 7, further comprising means for rate limiting by rolling up input data to an Enterprise Reporting system.
- 11. The apparatus of claim 10, wherein said means for rate limiting further comprises means for combining said rolled-up data over an interval of time or with rolled-up data from objects higher in the object hierarchy.
- 12. The apparatus of claim 7, further comprising means for said third rate limiting mechanism to maintain several levels of degraded service such that service degrades slowly, and to degrade service, data is discarded based on a heuristic classification of the input traffic by rule, disposition and logging severity.
- 13. The apparatus of claim 7, further comprising a logger algorithm for performing a filtering process on data arriving into said logger module of said policy monitor module, said algorithm escalating aggressiveness of said filtering process based on a number of violations a Policy Engine prefers to log, and based on a configuration of said filtering process, wherein said filtering process takes place on a periodic basis over intervals of time, packets, or events, with said filtering process lagging by N of said intervals.
- 14. The apparatus of claim 13, wherein said configuration is in part defined by, but not limited to, the following parameters:
FILTER_ENABLE, set to 1 enables rate-based filtering, and if set to 0 no rate limiting takes place; FILTER_START_HIGH_VOLUME_VIOL=a first predetermined number, wherein for each interval of time, packets, or events, once such percentage of violations occurs, start filtering the highest volume rule/disposition combinations; FILTER_START_LOW_PRIORITY=a second predetermined number, wherein for each said interval, once such percentage of violations is reached, start filtering off all but CRITICAL violations; FILTER_START_ALL=a third predetermined number, wherein for each said interval, once such percentage of violations is reached, logging of further violations is disabled; and FILTER_ER_LAG=a fourth predetermined number, indicating how many of said intervals are logged in a non-limited way even though the filtering threshold is met.
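The escalating thresholds of claim 14 can be sketched as a policy lookup per interval. The parameter names follow the claim; the sample percentage values and the returned policy labels are assumptions for illustration.

```python
# Sample threshold values; the claim leaves these as predetermined numbers.
FILTER_ENABLE = 1
FILTER_START_HIGH_VOLUME_VIOL = 50   # % of interval occupied by violations
FILTER_START_LOW_PRIORITY = 75
FILTER_START_ALL = 90

def filter_action(violation_pct):
    """Return the logging policy for the coming interval (labels are illustrative)."""
    if not FILTER_ENABLE:
        return "log_all"
    if violation_pct >= FILTER_START_ALL:
        return "log_none"                    # disable further violation logging
    if violation_pct >= FILTER_START_LOW_PRIORITY:
        return "log_critical_only"           # filter off all but CRITICAL
    if violation_pct >= FILTER_START_HIGH_VOLUME_VIOL:
        return "drop_high_volume_rules"      # filter top rule/disposition combos
    return "log_all"
```

The thresholds are checked from most to least aggressive, so service degrades in steps rather than all at once, matching the "degrades slowly" behavior of claim 12.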
- 15. The apparatus of claim 7, further comprising means for tracking discarded events, said means for tracking comprising counters to indicate the following during a collection process of a current interval of time, packets, or events:
number of NetworkEvents that are logged; number of NetworkEvents that are discarded; and total number of NetworkEvents that occurred.
- 16. The apparatus of claim 15, wherein information from said collection process is passed on to an Enterprise Manager periodically.
- 17. The apparatus of claim 8, wherein said second rate limiting mechanism comprises an algorithm, wherein input to said algorithm, as a number of packets per second that can be processed by said monitor, is controlled by a command, wherein said command runs said monitor on a local network device, generates a stream of events to a local host port, and restricts average processing rate to a predetermined number of packets/second.
- 18. The apparatus of claim 17, wherein said algorithm works by defining a predetermined measurement interval, such that an argument value is normalized to the number of packets allowed to be received in a single measurement interval, and when said number of packets is received, no more packets are received until said measurement interval is over.
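The normalization described in claim 18 can be sketched as a fixed-window limiter: a packets-per-second rate becomes a packet budget per measurement interval, and once the budget is spent no further packets are accepted until the window rolls over. The interval length, clock handling, and `IntervalLimiter` API are illustrative assumptions.

```python
class IntervalLimiter:
    def __init__(self, pkts_per_sec, interval_sec=0.1):
        # Normalize the per-second rate to a budget for one measurement interval.
        self.budget = int(pkts_per_sec * interval_sec)
        self.interval = interval_sec
        self.window_start = 0.0
        self.count = 0

    def accept(self, now):
        """Return True if a packet arriving at time `now` may be processed."""
        if now - self.window_start >= self.interval:
            self.window_start = now   # measurement interval is over: reset
            self.count = 0
        if self.count < self.budget:
            self.count += 1
            return True
        return False                  # budget spent: drop until next interval
```

For example, at 20 packets/second with a 0.1 s interval the budget is 2 packets per window; a third packet in the same window is refused.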
- 19. The apparatus of claim 9, further comprising an algorithm,
wherein said algorithm has the following arguments:
Iv=interval length, in seconds; avgN=number of intervals to time average; S=maximum events per Iv (input specification); M=measured events per last Iv; Ma=time averaged value of M; and NB=a number of bytes to write at once; wherein said algorithm comprises the steps of:
measuring M; rate limiting a succeeding interval of Iv seconds based on M and S; to smooth out the performance of said algorithm, time averaging M over a last avgN intervals, as follows: Ma=(Ma*(avgN−1)+M)/avgN; from Ma and S, deriving a number of events that should have been discarded by rate limiting over said last interval, as well as deriving a fraction of said events that should have been discarded, as follows: D=Ma−S (when Ma>S, 0 otherwise); and frac=D/Ma; for applying said number to a next interval, adding frac to a sum every time there is a new event; when said sum crosses an integer boundary, not passing on said event.
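The steps above can be sketched directly, keeping the claim's variable names (S, avgN, M, Ma, frac). Treating "crosses an integer boundary" as the running sum of frac crossing an integer, so that one event is dropped per unit of accumulated fraction, is an interpretive assumption.

```python
class SmoothedLimiter:
    def __init__(self, S, avgN):
        self.S = S          # maximum events per interval (input specification)
        self.avgN = avgN    # number of intervals to time average
        self.Ma = 0.0       # time-averaged value of M
        self.frac = 0.0     # per-event discard fraction for the next interval
        self.acc = 0.0      # running sum of frac

    def end_interval(self, M):
        """Fold the measured count M into Ma and derive the discard fraction."""
        self.Ma = (self.Ma * (self.avgN - 1) + M) / self.avgN
        D = self.Ma - self.S if self.Ma > self.S else 0.0
        self.frac = D / self.Ma if self.Ma > 0 else 0.0

    def pass_event(self):
        """Return False (discard) when the accumulated sum crosses an integer."""
        before = int(self.acc)
        self.acc += self.frac
        return int(self.acc) == before
```

With S=10 and a measured rate of 20 events per interval, frac works out to 0.5, so every second event in the following interval is discarded, halving the rate as intended.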
- 20. The apparatus of claim 9, wherein said rate limiting mechanism accepts as a first input parameter a buffer size for encoding DME data to said policy monitor module, and accepts as a second input parameter a specification of a fraction of said buffer to reserve for rate limiting.
- 21. The apparatus of claim 20, further comprising an algorithm, said algorithm comprising the steps of:
when writing DME data to a socket for transmission to said policy monitor module, said monitor maintaining a large output buffer as a queue data structure; conceptually dividing said buffer into a working section and a reserved section; modifying a TCP socket to said policy monitor module to use non-blocking I/O mode, so that said monitor process can determine when said socket is full; placing generated DME data in said output buffer; every NB bytes, said monitor attempting to write said buffered data to said socket, stopping only when said buffer is empty or said socket is full; if said buffer is empty, said monitor continuing processing; if said socket is full, leaving said DME data in said output buffer and continuing processing until another NB bytes of data is added to said output buffer; if said output buffer is full, placing said socket in blocking I/O mode, whereby said monitor waits until at least NB bytes of data are processed by said policy monitor module before continuing processing; when said monitor determines a new event appears, providing means for said monitor determining if said output buffer is currently using said reserved section; if said output buffer is using said working section, then processing said new event and sending all DME data from said event to said policy monitor module as it is produced; and if said output buffer's working section is full, and said reserved section is in use, processing said new connection, but discarding all DME data from said event.
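The working/reserved buffer discipline of claim 21 can be sketched with a byte counter standing in for the queue; the socket I/O is omitted. The buffer size, reserve fraction, and method names are illustrative assumptions.

```python
class OutputBuffer:
    def __init__(self, size, reserve_frac):
        self.size = size
        # Bytes below this limit are the working section; the rest is reserved.
        self.working_limit = int(size * (1 - reserve_frac))
        self.used = 0

    def in_reserved_section(self):
        return self.used >= self.working_limit

    def admit_event(self, nbytes):
        """Queue DME data for a new event unless only the reserve is left."""
        if self.in_reserved_section():
            return False            # process the connection, discard its DME data
        self.used = min(self.size, self.used + nbytes)
        return True

    def drain(self, nbytes):
        """The socket accepted nbytes; free that much buffer space."""
        self.used = max(0, self.used - nbytes)
```

Because new events are refused as soon as the reserve is touched, events already in flight can still finish encoding into the reserved section instead of being truncated mid-stream.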
- 22. The apparatus of claim 7, wherein said packet filter module further comprises:
a streams process, wherein said streams process comprises the following steps:
a stream using a predetermined hash for identifying said stream; optionally keeping a most recently used (MRU) queue for all events with a given hashID, wherein said hashID indexes into a hash_struct array; and providing command line parameters for streams as follows:
conTime=t1, average connection time for a stream connection; discTime=t2, disconnect timer, used in disabling streams; lowWater=LLLL, set low water mark for capture; and hiWater=HHH, set high water mark for capture; wherein lowWater=LLLL is a rate limiting point; a discard policy, comprising:
each event has a hash_state: HASH_UNUSED, HASH_RECEIVING or HASH_DISCARDING; initially said hash_state is HASH_UNUSED and switches to HASH_RECEIVING when a stream is first encountered; when below lowWater all streams are collected; when above lowWater any new stream of data is ignored, wherein a packet on a current data stream in (HASH_RECEIVING) times out after conTime and is placed into said HASH_DISCARDING state for a period of discTime; and when above the hiWater mark, all streams are discarded and each discarded stream stays in said HASH_DISCARDING state for said discTime.
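The water-mark discard policy above can be sketched as follows, assuming the count of HASH_RECEIVING streams is what is compared against lowWater and hiWater. The conTime/discTime timers are omitted for brevity, and the `StreamTable` API is hypothetical.

```python
HASH_UNUSED, HASH_RECEIVING, HASH_DISCARDING = range(3)

class StreamTable:
    def __init__(self, low_water, hi_water):
        self.low, self.hi = low_water, hi_water
        self.state = {}          # hashID -> hash_state

    def active(self):
        return sum(1 for s in self.state.values() if s == HASH_RECEIVING)

    def on_new_stream(self, hash_id):
        """Admit, ignore, or discard a newly seen stream based on load."""
        n = self.active()
        if n >= self.hi:                       # above hiWater: discard all
            self.state[hash_id] = HASH_DISCARDING
            return "discard"
        if n >= self.low:                      # above lowWater: ignore new streams
            return "ignore"
        self.state[hash_id] = HASH_RECEIVING   # below lowWater: collect
        return "collect"
```

The two marks give a graduated response: between lowWater and hiWater, existing streams keep being captured while only new ones are refused; above hiWater everything is shed.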
- 23. The apparatus of claim 22, wherein if an MRU queue is kept, and
wherein said hashStruct is associated with an element of said MRU queue, then said MRU element is moved to the top of said MRU queue when said hashStruct is referenced by data in an associated stream, or wherein if said hashStruct does not have an associated MRU element when data arrives on said stream, then a second MRU element is removed from the bottom of said MRU queue, said second MRU element disassociated with any other hashStruct, and associated with said hashStruct, and said second MRU element is moved to the top of said MRU queue.
- 24. The apparatus of claim 23, further providing an MRU circular queue, wherein during rate limiting only streams with an MRU object are processed, and when a stream is processed in non-rate limiting mode, a least recently used (LRU) object is assigned to said stream and set as the MRU for every packet.
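The MRU handling of claims 23 and 24 can be sketched with an ordered map standing in for the queue: a referenced stream moves to the top, and a new stream recycles the element at the bottom (the LRU), as claim 23 describes. The `MRUQueue` API itself is an assumption.

```python
from collections import OrderedDict

class MRUQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.q = OrderedDict()   # most recently used key is at the end

    def touch(self, hash_id):
        """Move an existing stream to the top, or recycle the LRU slot for it."""
        if hash_id in self.q:
            self.q.move_to_end(hash_id)      # referenced: move to top of queue
        else:
            if len(self.q) >= self.capacity:
                self.q.popitem(last=False)   # remove bottom element (LRU),
                                             # disassociating its old stream
            self.q[hash_id] = True           # associate with this stream, at top

    def has_element(self, hash_id):
        return hash_id in self.q
```

During rate limiting, only streams for which `has_element` is true would be processed, so the queue capacity caps how many streams receive service under load.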
- 25. An apparatus for a packet filter module incorporating VLAN technology support, said apparatus comprising:
one or more network interface cards; means for using one or more VLAN tags having ranges; and a configuration mechanism comprising means for:
associating one or more logical device names coupled to one or more logical devices with said one or more network interface cards; and associating one or more logical device names with said one or more VLAN tag ranges; wherein a client program receives filtered packet data from one of said one or more logical devices, said one or more logical devices containing packet data from said associated interface cards having said associated VLAN tags.
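The configuration mechanism of claim 25 can be sketched as a mapping from logical device names to NIC sets and inclusive VLAN tag ranges; a client then reads from a logical device. The dict layout and the device and interface names are illustrative assumptions.

```python
config = {
    # logical device name -> (associated NICs, inclusive VLAN tag range)
    "ld0": (["eth0", "eth1"], (100, 199)),
    "ld1": (["eth2"], (200, 299)),
}

def route_packet(nic, vlan_tag):
    """Return the logical device that should receive this packet, if any."""
    for ld, (nics, (lo, hi)) in config.items():
        if nic in nics and lo <= vlan_tag <= hi:
            return ld
    return None   # no logical device is configured for this NIC/tag pair
```

A client program bound to `ld0` would thus see only packets from eth0/eth1 carrying tags 100-199, without needing to know the physical interface layout.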
- 26. The apparatus of claim 25, wherein if a client program coupled to said packet filter module does not support VLAN tags, said packet filter module strips off VLAN tags.
- 27. The apparatus of claim 25, wherein all traffic from said one or more network interface cards, said all traffic received with or without VLAN tag information, is transmitted to a single logical device of said one or more logical devices, wherein one client program of said logical device is able to see all packets of said all traffic from said one or more network interface cards.
- 28. A method for packet filter module physical replication, said method comprising the steps of:
logically splitting a hash table between multiple packet filter modules in a given configuration; configuring said packet filter modules to share all traffic going to all said multiple packet filter modules; and configuring each packet filter module with a distinct non-overlapping partition of a corresponding hash space, wherein all packet filter modules exactly cover said hash space, such that each packet filter module processes input packet data only when said input packet data corresponds to a stream with hashID in a partition of said hash space, said partition assigned to said packet filter module.
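The partitioning of claim 28 can be sketched with contiguous, non-overlapping hashID ranges that exactly cover the hash space: every module sees all traffic, but each processes only streams whose hashID falls in its own partition. The hash-space size, flow tuple, and contiguous-range scheme are illustrative assumptions.

```python
HASH_SPACE = 1024  # size of the shared hashID space (illustrative)

def hash_id(flow_tuple):
    """Map a flow (src, dst, sport, dport) into the shared hashID space."""
    return hash(flow_tuple) % HASH_SPACE

def owner(hid, n_modules):
    """Assign hashIDs to modules in equal contiguous, non-overlapping partitions."""
    width = HASH_SPACE // n_modules
    return min(hid // width, n_modules - 1)   # last module absorbs any remainder

def should_process(module_idx, flow_tuple, n_modules):
    """True only on the one module whose partition contains this stream."""
    return owner(hash_id(flow_tuple), n_modules) == module_idx
```

Since each hashID has exactly one owner, every packet is processed by exactly one module and none is processed twice, which is what lets the scheme double as load balancing (claim 31).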
- 29. The method of claim 28, further comprising partitioning a distinct packet stream associated with different protocols by configuration to a particular packet filter module.
- 30. The method of claim 29, wherein said distinct packet stream is divided using a physical layer splitter comprising any of:
an Ethernet hub; and an electro-optical Tap, such that said each packet filter module sees an identical input stream of data.
- 31. The method of claim 28, wherein said method is used for load balancing of a passive network monitoring system.
- 32. The method of claim 28, wherein:
said distinct packet stream is fed into a first packet filter module; each packet filter module has two network interface cards, referred to as NICs, one for reading and one for writing; said packet filter modules are physically connected in a daisy chain fashion, wherein: on each packet filter module except the first, the reading NIC is connected to the previous packet filter module's writing NIC; and on each packet filter module the writing NIC is connected to the next packet filter module's reading NIC, or left unconnected on the last packet filter module; and each packet filter module processes packets received on the reading NIC that are associated with streams in the partition of hashID assigned to said packet filter module, while all other packets are sent down to the subsequent writing NIC.
- 33. The method of claim 32, wherein said method is used for load balancing of a passive network monitoring system.
- 34. The method of claim 32, wherein each packet filter module has more than two network interface cards, referred to as NICs, one or more for reading and one or more for writing, wherein:
input stream data from said reading cards is aggregated using VLAN technology of claim 25; and input stream data outside of the partition of hashID assigned to said packet filter module is written to one of the output NICs, based on any of, or any combination of:
said VLAN technology; and communications protocol of each packet.
- 35. A method for rate limiting to control amounts of data in a passive network monitoring pipelined system, said method comprising:
providing a packet filter module having a first rate limiting mechanism for discarding particular incoming packets and passing on non-discarded packets; providing a monitor receiving said non-discarded packets from said packet filter module, said monitor processing said non-discarded packets and outputting corresponding events, said monitor having a second rate limiting mechanism for discarding particular incoming events and passing on non-discarded events; providing a logger module having a third rate limiting mechanism for discarding unwanted data and passing on said non-discarded events, said logger module within a policy monitor module receiving said outputted corresponding events from said monitor; and providing a rate limiting module, having a fourth rate limiting mechanism wherein said non-discarded events are received from said logger module and abstracted over a predetermined time interval to meet storage requirements.
- 36. The method of claim 35, wherein said second rate limiting mechanism discards particular packets on input when input packet rate exceeds a specified limit.
- 37. The method of claim 35, wherein said second rate limiting mechanism chooses not to send events to a security manager system when the rate at which such events arrive exceeds a specified limit.
- 38. The method of claim 35, further comprising rate limiting by rolling up input data to an Enterprise Reporting system.
- 39. The method of claim 38, wherein said rate limiting further comprises means for combining said rolled-up data over an interval of time or with rolled-up data from objects higher in the object hierarchy.
- 40. The method of claim 35, further comprising said third rate limiting mechanism maintaining several levels of degraded service such that service degrades slowly, and to degrade service, discarding based on a heuristic classification of the input traffic by rule, disposition, and logging severity.
- 41. The method of claim 35, further comprising providing a logger algorithm for performing a filtering process on data arriving into said logger module of said policy monitor module, said algorithm escalating aggressiveness of said filtering process based on a number of violations a Policy Engine prefers to log, and based on a configuration of said filtering process, wherein said filtering process takes place on a periodic basis over intervals of time, packets, or events, with said filtering process lagging by N of said intervals.
- 42. The method of claim 41, wherein said configuration is in part defined by, but not limited to, the following parameters:
FILTER_ENABLE, set to 1 enables rate-based filtering, and if set to 0 no rate limiting takes place; FILTER_START_HIGH_VOLUME_VIOL=a first predetermined number, wherein for each interval of time, packets, or events, once such percentage of violations occurs, start filtering the highest volume rule/disposition combinations; FILTER_START_LOW_PRIORITY=a second predetermined number, wherein for each said interval, once such percentage of violations is reached, start filtering off all but CRITICAL violations; FILTER_START_ALL=a third predetermined number, wherein for each said interval, once such percentage of violations is reached, logging of further violations is disabled; and FILTER_ER_LAG=a fourth predetermined number, indicating how many of said intervals are logged in a non-limited way even though the filtering threshold is met.
- 43. The method of claim 35, further comprising tracking discarded events, said tracking comprising counters to indicate the following during a collection process of a current interval of time, packets, or events:
number of NetworkEvents that are logged; number of NetworkEvents that are discarded; and total number of NetworkEvents that occurred.
- 44. The method of claim 43, wherein information from said collection process is passed on to an Enterprise Manager periodically.
- 45. The method of claim 36, wherein said second rate limiting mechanism comprises an algorithm, wherein input to said algorithm, as a number of packets per second that can be processed by said monitor, is controlled by a command, wherein said command runs said monitor on a local network device, generates a stream of events to a local host port, and restricts average processing rate to a predetermined number of packets/second.
- 46. The method of claim 45, wherein said algorithm works by defining a predetermined measurement interval, such that an argument value is normalized to the number of packets allowed to be received in a single measurement interval, and when said number of packets is received, no more packets are received until said measurement interval is over.
- 47. The method of claim 37, further comprising providing an algorithm,
wherein said algorithm has the following arguments:
Iv=interval length, in seconds; avgN=number of intervals to time average; S=maximum events per Iv (input specification); M=measured events per last Iv; Ma=time averaged value of M; and NB=a number of bytes to write at once; wherein said algorithm comprises the steps of:
measuring M; rate limiting a succeeding interval of Iv seconds based on M and S; to smooth out the performance of said algorithm, time averaging M over a last avgN intervals, as follows: Ma=(Ma*(avgN−1)+M)/avgN; from Ma and S, deriving a number of events that should have been discarded by rate limiting over said last interval, as well as deriving a fraction of said events that should have been discarded, as follows: D=Ma−S (when Ma>S, 0 otherwise); and frac=D/Ma; for applying said number to a next interval, adding frac to a sum every time there is a new event; when said sum crosses an integer boundary, not passing on said event.
- 48. The method of claim 37, wherein said rate limiting mechanism accepts as a first input parameter a buffer size for encoding DME data to said policy monitor module, and accepts as a second input parameter a specification of a fraction of said buffer to reserve for rate limiting.
- 49. The method of claim 48, further comprising providing an algorithm, said algorithm comprising the steps of:
when writing DME data to a socket for transmission to said policy monitor module, said monitor maintaining a large output buffer as a queue data structure; conceptually dividing said buffer into a working section and a reserved section; modifying a TCP socket to said policy monitor module to use non-blocking I/O mode, so that said monitor process can determine when said socket is full; placing generated DME data in said output buffer; every NB bytes, said monitor attempting to write said buffered data to said socket, stopping only when said buffer is empty or said socket is full; if said buffer is empty, said monitor continuing processing; if said socket is full, leaving said DME data in said output buffer and continuing processing until another NB bytes of data is added to said output buffer; if said output buffer is full, placing said socket in blocking I/O mode, whereby said monitor waits until at least NB bytes of data are processed by said policy monitor module before continuing processing; when said monitor determines a new event appears, providing means for said monitor determining if said output buffer is currently using said reserved section; if said output buffer is using said working section, then processing said new event and sending all DME data from said event to said policy monitor module as it is produced; and if said output buffer's working section is full, and said reserved section is in use, processing said new connection, but discarding all DME data from said event.
- 50. The method of claim 35, wherein said packet filter module further comprises:
a streams process, wherein said streams process comprises the following steps:
a stream using a predetermined hash for identifying said stream; optionally keeping a most recently used (MRU) queue for all events with a given hashID, wherein said hashID indexes into a hash_struct array; and providing command line parameters for streams as follows:
conTime=t1, average connection time for a stream connection; discTime=t2, disconnect timer, used in disabling streams; lowWater=LLLL, set low water mark for capture; and hiWater=HHH, set high water mark for capture; wherein lowWater=LLLL is a rate limiting point; a discard policy, comprising:
each event has a hash_state: HASH_UNUSED, HASH_RECEIVING or HASH_DISCARDING; initially said hash_state is HASH_UNUSED and switches to HASH_RECEIVING when a stream is first encountered; when below lowWater all streams are collected; when above lowWater any new stream of data is ignored, wherein a packet on a current data stream in (HASH_RECEIVING) times out after conTime and is placed into said HASH_DISCARDING state for a period of discTime; and when above the hiWater mark, all streams are discarded and each discarded stream stays in said HASH_DISCARDING state for said discTime.
- 51. The method of claim 50, wherein if an MRU queue is kept, and
wherein said hashStruct is associated with an element of said MRU queue, then said MRU element is moved to the top of said MRU queue when said hashStruct is referenced by data in an associated stream, or wherein if said hashStruct does not have an associated MRU element when data arrives on said stream, then a second MRU element is removed from the bottom of said MRU queue, said second MRU element disassociated with any other hashStruct, and associated with said hashStruct, and said second MRU element is moved to the top of said MRU queue.
- 51. The method of claim 51, further comprising providing an MRU circular queue, wherein during rate limiting only streams with an MRU object are processed, and when a stream is processed in non-rate limiting mode, a least recently used (LRU) object is assigned to said stream and set as the MRU for every packet.
- 52. A method for a packet filter module incorporating VLAN technology support, said method comprising the steps of:
providing one or more network interface cards; using one or more VLAN tags having ranges; and providing a configuration mechanism comprising the steps of:
associating one or more logical device names coupled to one or more logical devices with said one or more network interface cards; and associating one or more logical device names with said one or more VLAN tag ranges; wherein a client program receives filtered packet data from one of said one or more logical devices, said one or more logical devices containing packet data from said associated interface cards having said associated VLAN tags.
- 53. The method of claim 52, wherein if a client program coupled to said packet filter module does not support VLAN tags, said packet filter module strips off VLAN tags.
- 54. The method of claim 52, wherein all traffic from said one or more network interface cards, said all traffic received with or without VLAN tag information, is transmitted to a single logical device of said one or more logical devices, wherein one client program of said logical device is able to see all packets of said all traffic from said one or more network interface cards.
- 55. An apparatus for packet filter module physical replication, said apparatus comprising:
means for logically splitting a hash table between multiple packet filter modules in a given configuration; means for said packet filter modules to share all traffic going to all said multiple packet filter modules; and means for configuring each packet filter module with a distinct non-overlapping partition of a corresponding hash space, wherein all packet filter modules exactly cover said hash space, such that each packet filter module processes input packet data only when said input packet data corresponds to a stream with hashID in a partition of said hash space, said partition assigned to said packet filter module.
- 56. The apparatus of claim 55, further comprising means for partitioning a distinct packet stream associated with different protocols by configuration to a particular packet filter module.
- 57. The apparatus of claim 56, wherein said distinct packet stream is divided using a physical layer splitter comprising any of:
an Ethernet hub; and an electro-optical Tap, such that said each packet filter module sees an identical input stream of data.
- 58. The apparatus of claim 55, wherein said apparatus is used for load balancing of a passive network monitoring system.
- 59. The apparatus of claim 55, wherein:
said distinct packet stream is fed into a first packet filter module; each packet filter module has two network interface cards, referred to as NICs, one for reading and one for writing; said packet filter modules are physically connected in a daisy chain fashion, wherein: on each packet filter module except the first, the reading NIC is connected to the previous packet filter module's writing NIC; and on each packet filter module the writing NIC is connected to the next packet filter module's reading NIC, or left unconnected on the last packet filter module; and each packet filter module processes packets received on the reading NIC that are associated with streams in the partition of hashID assigned to said packet filter module, while all other packets are sent down to the subsequent writing NIC.
- 60. The apparatus of claim 59, wherein said apparatus is used for load balancing of a passive network monitoring system.
- 61. The apparatus of claim 59, wherein each packet filter module has more than two network interface cards, referred to as NICs, one or more for reading and one or more for writing, wherein:
input stream data from said reading cards is aggregated using VLAN technology of claim 52; and input stream data outside of the partition of hashID assigned to said packet filter module is written to one of the output NICs, based on any of, or any combination of:
said VLAN technology; and communications protocol of each packet.
- 62. The apparatus of claim 11, further comprising an algorithm to combine rollups to fit a predetermined amount of storage wherein:
MT=Target available space; MC=current available space; and MaxRollup=maximum interval over which events are currently rolled up, said algorithm operating as follows:
1. If MC<=MT then the algorithm terminates; 2. MaxRollup=N*MaxRollup (where N may be 2 or a greater integer); 3. Combine N adjacent rollups in data into rollups with the new MaxRollup interval; 4. Optionally combine rollups based on object hierarchy, using one or more of the following:
Combine host groups with containing host groups; Combine host groups with containing subnets; and Combine host groups based on reporting element; wherein said combination is performed by incrementing the count of the containing group and removing the contained group, so that the total count of events remains the same; and 5. Continue with step 1.
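The loop of claim 62 can be sketched as follows, modeling each rollup as an (interval, count) pair and available space as the number of rollup slots; both modeling choices are assumptions, and the object-hierarchy combining of step 4 is omitted. Note that merging preserves the total event count, as the claim requires.

```python
def combine_rollups(rollups, MT, N=2):
    """Merge N adjacent rollups at a time until they fit in MT slots.

    rollups: list of (interval, count) pairs; MT: target available space,
    measured here as a number of rollup slots (an assumed space model).
    """
    max_rollup = max((iv for iv, _ in rollups), default=1)
    while len(rollups) > MT:                 # step 1: terminate when MC <= MT
        max_rollup *= N                      # step 2: MaxRollup = N * MaxRollup
        merged = []
        for i in range(0, len(rollups), N):  # step 3: combine N adjacent rollups
            chunk = rollups[i:i + N]
            merged.append((max_rollup, sum(c for _, c in chunk)))
        rollups = merged                     # step 5: continue with step 1
    return rollups

out = combine_rollups([(1, 5), (1, 3), (1, 2), (1, 4)], MT=2)
# -> [(2, 8), (2, 6)]: half the slots, same 14 events in total
```

Each pass halves (for N=2) the number of stored rollups while doubling the interval they cover, trading time resolution for storage until the target is met.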
- 63. The method of claim 39, further comprising providing an algorithm to combine rollups to fit a predetermined amount of storage wherein:
MT=Target available space; MC=current available space; and MaxRollup=maximum interval over which events are currently rolled up, said algorithm operating as follows:
1. If MC<=MT then the algorithm terminates; 2. MaxRollup=N*MaxRollup (where N may be 2 or a greater integer); 3. Combine N adjacent rollups in data into rollups with the new MaxRollup interval; 4. Optionally combine rollups based on object hierarchy, using one or more of the following:
Combine host groups with containing host groups; Combine host groups with containing subnets; and Combine host groups based on reporting element; wherein said combination is performed by incrementing the count of the containing group and removing the contained group, so that the total count of events remains the same; and 5. Continue with step 1.
Priority Claims (1)
| Number | Date | Country | Kind |
| PCT/US01/19063 | Jun 2001 | WO | |
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application Ser. No. 60/385,252, filed May 31, 2002; and is a Continuation-in-part of U.S. patent application Ser. No. 10/311,109, filed Dec. 13, 2002, which is a national filing of International Patent Application No. PCT/US01/19063 filed Jun. 14, 2001, which claims priority to U.S. Provisional Patent Application Ser. No. 60/212,126 filed Jun. 16, 2000.
Provisional Applications (1)
| Number | Date | Country |
| 60385252 | May 2002 | US |