In a busy networking environment, such as a large corporation or an Internet service provider (ISP), it is often useful to be able to monitor some or all of the traffic passing through the network, such as the traffic that passes through a router between the network and the Internet. Numerous applications for such monitoring exist, such as intrusion detection systems (IDS), antivirus and antispam monitoring, or bandwidth monitoring. A significant barrier to such monitoring is the sheer quantity of data to be monitored. Even in a relatively small corporate environment, network traffic through the central router may represent dozens of simultaneous transactions, for multiple users across multiple protocols. As the networking environment increases in size, so too does the magnitude of data to be monitored, quickly surpassing the point where a single monitoring system can handle the workload.
Load-balancing is an approach intended to help alleviate this problem. Multiple monitoring systems are utilized, and the data to be monitored is spread across them. However, load-balancing introduces its own problems, such as how to distribute the data across multiple servers quickly and efficiently. While several software-based approaches exist, they cannot scale to handle a large networking environment; as additional data passes through a central router, a software load-balancing system becomes a new bottleneck. Software takes too long to process data and forward it to the appropriate monitoring server, which results in loss of packets.
Embodiments described herein discuss an approach to implementing load-balancing across multiple monitoring servers. One such embodiment describes a network monitoring device. The network monitoring device includes an ingress port, for receiving mirrored network packets, and a number of egress ports. The egress ports are associated with a number of monitoring servers, and used to forward the mirrored network packets to the monitoring servers. A packet classifier, coupled to the ingress port, examines the mirrored network packets, and determines which of the monitoring servers should receive the packets.
Another embodiment describes a method of load-balancing across multiple monitoring servers in a monitored network. The method calls for generating forwarding information from anticipated network traffic and a number of available monitoring servers. The method also entails receiving an incoming network packet into a network monitoring device, and examining a header associated with a network packet. A destination monitoring server is selected from the available monitoring servers, using the header and the forwarding information.
The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention:
Reference will now be made in detail to several embodiments. While the subject matter will be described in conjunction with the alternative embodiments, it will be understood that they are not intended to limit the claimed subject matter to these embodiments. On the contrary, the claimed subject matter is intended to cover alternatives, modifications, and equivalents, which may be included within the spirit and scope of the claimed subject matter as defined by the appended claims.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the claimed subject matter. However, it will be recognized by one skilled in the art that embodiments may be practiced without these specific details or with equivalents thereof. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects and features of the subject matter.
Portions of the detailed description that follows are presented and discussed in terms of a method. Although steps and sequencing thereof are disclosed in a figure herein (e.g.,
Some portions of the detailed description are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits that can be performed on computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer-executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout, discussions utilizing terms such as “accessing,” “writing,” “including,” “storing,” “transmitting,” “traversing,” “associating,” “identifying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Computing devices typically include at least some form of computer readable media. Computer readable media can be any available media that can be accessed by a computing device. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computing device. Communication media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
Some embodiments may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
Load Balancing in a Monitored Network
In the embodiments described below, an approach to load-balancing for monitoring systems is described. In several embodiments, a network device, such as a high-speed router or switch, can be configured to perform load-balancing by distributing traffic across multiple monitoring servers, and across multiple groups of such servers, at line speeds. Groups can be configured, e.g., based on networking protocol, and monitoring servers within these groups can be assigned a range of anticipated network traffic. As traffic is received, header information from each packet, e.g., a hash of source and destination identifiers, is used to quickly retrieve the appropriate monitoring server, and route the packet to the destination. The load-balancing network device may forward the traffic to the monitoring servers using pre-programmed hardware, or using a processor under software control.
These embodiments have multiple applications. For example, passive monitoring systems, such as intrusion detection systems (IDS), can be implemented, which allow network traffic to be examined by a pool of IDS servers, e.g., grouped by networking protocol. Similar implementations allow for monitoring of potential virus, spyware, or spam activity within the network. Further, several embodiments can be extended for more active monitoring rules, e.g., by allowing a monitoring server a degree of control over the (monitored) central router.
Load-Balancing in a Networking Environment
With reference now to
Network monitoring device 110, in turn, is coupled to a plurality of monitoring servers, e.g., monitoring servers 121, 123, 125, 131, 133, 141, and 143. These monitoring servers are shown as being arranged in groups according to networking protocol, e.g., HTTP traffic is associated with group 120, while FTP and telnet traffic is associated with group 130, and other (unspecified) protocols are associated with group 140. Monitoring load balancer 110 is tasked with load-balancing network traffic across these various groups of monitoring servers, and across individual monitoring servers within the groups. In some embodiments, traffic is uni-directional along this path, from network device 101 to the monitoring servers. In these embodiments, the monitoring servers do not reply to network device 101. Rather, each monitoring server may analyze the received packets to determine the desired monitoring output, e.g., providing intrusion detection information for a system administrator.
With reference now to
With reference now to step 210, network traffic flows to a network device. In different embodiments, such traffic may pass between a network and the Internet, or between locations within the same network. Also, such traffic may include any of a wide range of networking protocols, e.g., HTTP, telnet, or FTP. For example, client 198 transmits an HTTP request to server 103 by way of Internet 199 and network device 101.
With reference now to step 220, the network traffic is copied, and forwarded to a network monitoring device. In different embodiments, different approaches are utilized for diverting copies of network traffic. In one embodiment for example, with reference to
With reference now to step 230, the network monitoring device receives the copied traffic, selects between available monitoring servers, and forwards the copied network traffic to the selected monitoring server via an egress port of the network monitoring device associated with the selected monitoring server. In different embodiments, such selection is accomplished in different ways. For example, in one embodiment, data is extracted from the traffic, and used to select a server or egress port. For instance, one or more fields of a networking packet header are extracted from a received packet, and subjected to a function, e.g., a hash function. The resulting value, e.g., hash value, is used to determine which egress port should output the received packet, thereby forwarding the received packet to the selected monitoring server coupled to that egress port.
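By way of illustration, the following Python sketch models the selection described in step 230: header fields are extracted from a received packet, hashed, and the hash value is mapped to an egress port. The field choice, the CRC-based hash, and the port names are assumptions for illustration, not the hardware implementation of the described embodiments.

```python
# Hypothetical model of step 230: hash a mirrored packet's header
# fields and map the result to an egress port (one per monitoring
# server). zlib.crc32 stands in for whatever hash function is used.
import zlib

EGRESS_PORTS = ["eth4/10", "eth4/11", "eth4/12"]  # assumed port names

def select_egress_port(src_ip: str, src_port: int,
                       dst_ip: str, dst_port: int) -> str:
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    hash_value = zlib.crc32(key)          # deterministic hash of header fields
    return EGRESS_PORTS[hash_value % len(EGRESS_PORTS)]

print(select_egress_port("10.0.0.5", 49152, "192.168.1.20", 80))
```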
Exemplary Networking Device
With reference now to
As shown, network device 300 includes memory 310, processor 320, storage 330, switching fabric 340, and several communications ports, e.g., ports 351, 353, 361, and 363. Processor 320 executes instructions for controlling network device 300, and for managing traffic passing through network device 300. An operating system 325 is shown as executing on processor 320; in some embodiments, operating system 325 supplies the programmatic interface to network device 300.
Network device 300 is also shown as including storage 330. In different embodiments, different types of storage may be utilized, as well as differing amounts of storage. For example, in some embodiments, storage 330 may consist of flash memory, magnetic storage media, or any other appropriate storage type, or combinations thereof. In some embodiments, storage 330 is used to store operating system 325, which is loaded into processor 320 when network device 300 is initialized.
Network device 300 also includes switching fabric 340. In the depicted embodiment, switching fabric 340 is the hardware, software, or combination thereof that passes traffic between an ingress port and one or more egress ports of network device 300. Switching fabric 340, as shown, may include packet processors, e.g., application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs), and/or controlling programming used to analyze network traffic, apply appropriate networking rules, and forward traffic between ports of network device 300. In several of the embodiments described herein, it is understood that configuring or instructing a port to perform an action involves configuring or instructing that portion of the switching fabric that controls the indicated port to perform that action.
Network device 300 can be utilized as network device 101 in network 200. When so used, network device 300 can be configured to “mirror” network traffic received on one or more ports, and forward copies of the traffic to a specified location. For example, network device 300, used as network device 101, could be configured such that traffic received on ports 351 and 353 is copied, and forwarded to port 361, which connects to network monitoring device 110.
Network device 300 could also be utilized as a network monitoring device, e.g., monitoring load balancer 110 in network 200. When so used, network device 300 is configured to receive copies of network traffic, select between a number of available monitoring servers, and forward the traffic to the selected server. One such embodiment is explored in greater detail, below. It is understood that network device 101 and monitoring load balancer 110 may be any device capable of performing at least the operations ascribed to them herein.
With reference now to
As shown, network device 400 includes networking interface module 401, memory 410, processor 420, storage 430, and interconnect fabric 440. Processor 420 executes instructions for controlling network device 400, and for managing traffic passing through network device 400. An operating system 425 is shown as executing on processor 420; in some embodiments, operating system 425 supplies the programmatic interface to network device 400.
Network device 400 is also shown as including storage 430. In different embodiments, different types of storage may be utilized, as well as differing amounts of storage. For example, in some embodiments, storage 430 may consist of flash memory, magnetic storage media, or any other appropriate storage type, or combinations thereof. In some embodiments, storage 430 is used to store operating system 425, which is loaded into processor 420 when network device 400 is initialized.
Networking interface module 401, in the depicted embodiment, is made up of a number of physical ports connected to port group switching logic subsystems, interconnected by an interconnect fabric. Networking interface module 401 is shown as incorporating a number of physical ports, e.g., ports 451, 452, 461, 462, 471, 472, 481, and 482. These physical ports, in this embodiment, provide connectivity to end stations or other network devices. In different embodiments, such physical ports may be auto-sensing, half-duplex, or full-duplex. These physical ports may offer connectivity for Ethernet, Fast Ethernet, gigabit Ethernet, 10 gigabit Ethernet, or any other compatible network protocol.
In the depicted embodiment, physical ports are coupled to interconnect fabric 440 by means of port group switching logic, e.g., port group switching logic 450, 460, 470, or 480. In the depicted embodiment, port group switching logic 450 includes media access controller (MAC) 453, packet classifier 454, content addressable memory (CAM) 455, parameter random access memory (PRAM) 456, multicast assist module 457, and transmit pipeline 458. In different embodiments, different arrangements or combinations of these or similar elements may be utilized. In particular, in some embodiments, some or all of these elements may be shared across multiple port group switching logic subsystems.
Media access controller 453, in the depicted embodiment, provides continuous data flow between the physical ports, e.g., physical ports 451 and 452, and packet classifier 454. MAC 453 is responsible for encoding and decoding, cyclic redundancy check (CRC) verification for incoming packets, CRC calculation for outgoing packets, and auto-negotiation logic.
Packet classifier 454, in the depicted embodiment, is an integrated circuit, responsible for parsing and examining incoming packets in order to determine how the packet should be forwarded. In some embodiments, packet classifier 454 may be implemented as an application-specific integrated circuit (ASIC); in other embodiments, packet classifier 454 may be implemented as a field-programmable gate array (FPGA). In one embodiment, packet classifier 454 is a processor, executing software instructions.
Packet classifier 454 interfaces with CAM 455 and PRAM 456. CAM 455 and PRAM 456 are programmed, e.g., by OS 425, with information which, directly or indirectly, indicates how the packet should be forwarded through network device 400. In particular, packet classifier 454 attaches a forwarding header to an incoming packet, which indicates to network device 400 how to route the packet within the device. Moreover, in an embodiment where network device 400 serves as network device 101, packet classifier 454 can be configured to “mirror” some or all incoming packets, e.g., by modifying the forwarding header, such that a duplicate copy of network traffic is diverted to an indicated port. Alternatively, some or all of the mirroring functionality may be performed in the port group switching logic associated with the egress port(s).
Network device 400 is depicted as also including content addressable memory (CAM) 455. Content addressable memory, such as CAM 455, is a special type of computer memory often used in high-speed searching applications. A CAM is designed in such a way that a data word or data point can be provided to the CAM, and it will return a list of storage addresses. In some embodiments, other similar solutions may be utilized. In the depicted embodiment, CAM 455 is used by packet classifier 454 to provide lookups based on the content of the packet, e.g., a hash generated from source IP, source port, destination IP, and destination port. CAM 455 will return an index location within PRAM 456, where information on how to forward the packet can be located.
Network device 400 is depicted as also including parameter random access memory (PRAM) 456. PRAM 456, in the depicted embodiment, takes an index returned by CAM 455, and returns information used to forward a packet through network device 400. For example, in one embodiment, a lookup in PRAM 456 may return a forwarding identifier indicating, directly or indirectly, the destination port(s), MAC address of the next hop, virtual local area network (VLAN) data, and/or other forwarding header information required for the completion of the packet forwarding process within network device 400. This information can be used by packet classifier 454 to build the forwarding header. In some embodiments, after a packet has been forwarded to the egress port group switching logic, the forwarding header is stripped off.
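The two-stage CAM/PRAM lookup described above can be modeled in software as a pair of maps: the CAM takes packet-derived content to an index, and the PRAM takes that index to forwarding information. The following minimal Python sketch uses a dict and a list with invented keys and field names; real hardware performs the associative match in parallel at line rate.

```python
# Software model of the CAM -> index -> PRAM lookup. Keys and field
# names are hypothetical; a hardware CAM matches content in parallel.
CAM = {
    (0b00, 0b00, 80): 0,   # (src bits, dst bits, dst port) -> PRAM index
    (0b00, 0b01, 80): 1,
}

PRAM = [
    {"egress_port": "eth4/10", "next_hop_mac": "00:11:22:33:44:55", "vlan": 10},
    {"egress_port": "eth4/11", "next_hop_mac": "00:11:22:33:44:66", "vlan": 10},
]

def lookup(key):
    index = CAM.get(key)       # CAM stage: content -> index
    if index is None:
        return None            # no match; packet falls through to default rules
    return PRAM[index]         # PRAM stage: index -> forwarding information
```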
In some embodiments, multicast assist module 457 is included as part of a transmit path. In one such embodiment, multicast assist module 457 provides for replicating packets, e.g., in order to transmit a multicast packet from the same port multiple times with different VLAN IDs. The multicast assist module may include its own VLAN multicast counters, as well as its own VLAN multicast map; these data structures may be programmable by operating system 425. The module is able to automatically manage the transmission of the multicast packet multiple times, each time with the new VLAN ID as specified by the VLAN multicast map.
In the depicted embodiment, interconnect fabric 440 is utilized for passing packets from an incoming port to an egress port. Interconnect fabric 440 interfaces with port group switching logic, e.g., at packet classifier 454 and multicast assist module 457. In one such embodiment, interconnect fabric 440 transfers packets to and from (high-speed) shared memory, e.g., memory 410, or a dedicated high-speed memory included in interconnect fabric 440, such as packet buffer 445. In this embodiment, when interconnect fabric 440 receives a packet from the packet classifier, it transfers the packet to a free location in the shared memory, and retrieves it as appropriate. In some embodiments, it may report information to a buffer manager (not shown), e.g., buffer number, forwarding identifier, or packet priority; the buffer manager makes use of this information, e.g., to determine the location of the packet within the shared memory, manage buffer space, determine the destination(s) of the packet, or identify which queue(s) it should be placed in.
In some embodiments, multiple networking interface modules may be included in the same networking device. In one such embodiment, the interconnect fabric for each networking interface module is coupled to a module crosspoint switching fabric (not shown), which allows packets to pass between networking interface modules. In such embodiments, each interface module can route traffic between ports within that interface module; a packet is only passed to the module crosspoint switching fabric if it is to be routed to a different interface module.
Networking Packets
With reference now to
Networking packet 500, in the depicted embodiment, is a typical networking packet. It is made up of a header section 510, and a payload section 520. Header 510, in the depicted embodiment, contains information identifying the source, destination, and protocol of packet 500. Header 510 may include information such as source IP 511, source port 513, destination IP 515, destination port 517, and protocol 519; in other embodiments, other information may be included. Payload 520, in the depicted embodiment, contains the data being passed between the source and the destination, e.g., in an HTTP packet, payload 520 may contain some portion of a web page.
When a networking packet, such as packet 500, passes through a networking device, such as network device 400, it is initially processed by a packet processor or packet classifier, e.g., packet classifier 454. This initial processing, which may include examination of the packet headers, e.g., header 510, allows the packet processor to determine how to pass the packet through the network device. In the depicted embodiment, this internal forwarding information is incorporated into a forwarding header 560, which is prepended to packet 500, resulting in packet 550. The network device uses this forwarding header to govern when and how a packet is passed through the internal hardware of the network device. The forwarding header can be stripped off at the outgoing port, allowing the original packet to continue to its next hop.
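As a rough illustration of the forwarding-header mechanics just described, the sketch below prepends a small internal header to a packet on ingress and strips it at the egress port. The 4-byte layout (forwarding identifier, egress port, flags) is invented for illustration; actual forwarding-header formats are device-specific.

```python
# Hypothetical internal forwarding header: 2-byte forwarding id,
# 1-byte egress port, 1-byte flags, prepended as in packet 550.
import struct

def add_forwarding_header(packet: bytes, fwd_id: int, egress_port: int) -> bytes:
    header = struct.pack("!HBB", fwd_id, egress_port, 0)  # network byte order
    return header + packet                                # header rides in front

def strip_forwarding_header(framed: bytes) -> bytes:
    return framed[4:]   # removed at the outgoing port; original packet continues
```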
For example, with reference to
Implementing Load Balancing Across Multiple Monitoring Servers
With reference now to
The embodiment described herein details a number of steps for configuring a network monitoring device, such as network device 400, to provide load-balancing across multiple monitoring servers in a given network. It is understood that in different embodiments, various steps may be altered or omitted, depending upon the architecture or functionality of the network device being configured. For example, while flowchart 600 describes manipulations of content addressable memory (CAM), another embodiment may interact with another element, e.g., another type of memory, for a network device which does not include content addressable memory.
Moreover, while several embodiments described in conjunction with flowchart 600 discuss load-balancing across multiple intrusion detection system (IDS) servers, it is understood that embodiments are well suited to applications involving different types of monitoring, e.g., virus or spam detection, or bandwidth monitoring.
With reference now to step 610, one or more monitoring groups are configured. In some embodiments, monitoring groups can be utilized to divide network traffic. For example, one monitoring group may be assigned to monitor HTTP traffic, while a second monitoring group handles FTP traffic. These monitoring groups allow for greater granularity in allocating and/or adding monitoring servers. For example, if additional processing power is needed to handle HTTP traffic, a new server can be added to the HTTP monitoring group. In different embodiments, different approaches are utilized for configuring monitoring groups. One such embodiment is described below, with reference to
With reference now to step 620, forwarding data is generated for the configured monitoring groups and servers, and stored in the network monitoring device. In different embodiments, this step may be implemented in different ways.
In some embodiments, the anticipated range of data is equally divided among the monitoring servers in a given monitoring group. For example, if a monitoring group associated with HTTP traffic contains three monitoring servers, this traffic may be balanced equally across all three servers, e.g., one third of the anticipated networking traffic should be sent to each monitoring server.
In one implementation, information contained in a packet header for a given protocol type is used to generate a hash value. For example, the source IP, source port, destination IP, and destination port are included in a packet header, and can be used to generate a value from a hash function very quickly. All of these possible hash values are known, and can be associated with other information, e.g., internal routing or forwarding information for directing a particular networking packet to a specified monitoring server. In order to obtain an approximately equal division of networking traffic across three monitoring servers, for example, the range of possible hash values can be subdivided so that one third of the possible range will forward traffic to a first monitoring server, one third to a second monitoring server, and one third to the third monitoring server.
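A minimal sketch of that subdivision follows, assuming a 16-value hash space and a three-server monitoring group; the names and sizes are illustrative only.

```python
# Divide a 16-value hash space into contiguous ranges, one range per
# monitoring server, approximating the one-third split described above.
HASH_SPACE = 16                      # e.g., a 4-bit hash
SERVERS = ["ids1", "ids2", "ids3"]   # hypothetical server names

def build_forwarding_table():
    table = {}
    per_server = HASH_SPACE / len(SERVERS)
    for value in range(HASH_SPACE):
        table[value] = SERVERS[min(int(value / per_server), len(SERVERS) - 1)]
    return table

# Yields roughly one third of the hash values (and thus of the
# anticipated traffic) per server: here 6, 5, and 5 values.
print(build_forwarding_table())
```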
In some embodiments, hash value calculation and/or association with monitoring servers is performed by the network monitoring device. For example, the operating system for the network monitoring device may calculate the possible range of hash values corresponding to a particular monitoring group, e.g., a group associated with HTTP traffic.
With reference now to step 630, the forwarding data is stored. In order to ensure that this load-balancing operation is performed quickly, some embodiments utilize a combination of content addressable memory (CAM) and parameter random access memory (PRAM). The hash values are loaded into the CAM, and the internal forwarding information is loaded into PRAM. When a packet is received, the corresponding hash value is calculated from information contained in the packet header. That hash value is passed to the CAM, which returns an index value into PRAM. Internal forwarding information is retrieved from that location in PRAM, e.g., a forwarding identifier, MAC address of the next hop, VLAN ID, as well as any internal forwarding information needed to create the internal forwarding header to pass the packet between the ingress port and the destination port. Practitioners will appreciate, however, that the utility, format, and nature of the internal forwarding information may vary, across different embodiments and/or types of network devices.
As discussed previously, in some embodiments hash value calculation can be performed by the monitoring load balancer, e.g., the operating system for the monitoring load balancer may calculate the possible range of hash values corresponding to a particular monitoring group. The operating system can then load these values into the device's CAM, associated with index values for the PRAM. The operating system can also calculate the range of values that should be associated with each monitoring server in a given monitoring group, and populate the PRAM with the appropriate internal forwarding information.
For example, with reference to
With reference now to step 640, the monitoring load balancer may perform a “health check” on a monitoring server, to determine the monitoring server's status. In different embodiments, different degrees of status monitoring may be available. For example, in some embodiments, the monitoring load balancer may notice if a server goes “link dead,” such that the connection between the monitoring server and the monitoring load balancer is lost. The monitoring load balancer may also be able to perform additional diagnostics on the monitoring server, e.g., by performing a remote layer 3 or layer 4 “health check.”
Further, in some embodiments, the monitoring load balancer may recognize when an additional monitoring server has been added. In some embodiments, such recognition may require additional configuration of the networking device. In other embodiments, the monitoring load balancer may be configured in such a way that it will recognize a particular type of monitoring server upon connection, and assign an appropriate designation and/or associate it with a monitoring group.
For example, with reference to
With reference now to step 650, forwarding information is recalculated, as necessary. If a monitoring server is added or removed from a monitoring group, the distribution of networking packets should be changed accordingly. As such, the monitoring load balancer can be configured so that if a monitoring server fails a health check, or a new monitoring server is added, the hash values and/or associated internal forwarding information are recalculated, and the new values utilized for handling networking traffic.
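A hedged sketch of this recalculation, reusing the HASH_SPACE constant from the earlier sketch and a placeholder health-check predicate, might look like the following; it assigns hash values by simple modulo for brevity rather than by contiguous ranges.

```python
# Recompute the hash-to-server mapping when a server fails a health
# check or a new one appears; the result would then be reprogrammed
# into CAM/PRAM. The healthy() predicate is a stand-in.
def rebalance(servers, healthy):
    alive = [s for s in servers if healthy(s)]
    if not alive:
        raise RuntimeError("no monitoring servers available")
    return {value: alive[value % len(alive)] for value in range(HASH_SPACE)}

# Example: ids2 fails its health check; its share of the hash space
# is redistributed across the remaining servers.
table = rebalance(["ids1", "ids2", "ids3"], healthy=lambda s: s != "ids2")
```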
For example, with reference to
An aspect of one embodiment of monitoring load balancer device 110, e.g., network device 400, is that the mirrored traffic received from the network device 101 may be passed between the inputs and outputs of the load balancer device 110 at line rate (e.g., 1 Gig or 10 Gig) using hardware forwarding. As an example using exemplary network device 400, memory in the network device 400, e.g., CAM 455 and PRAM 456, may be pre-loaded with information that allows the network device 400 to output most, if not all, received traffic, in a load balanced manner, to the monitoring server without having to access the microprocessor 420 of the network device 400. Another aspect of such an embodiment is that, if there is a change in the monitoring servers, e.g., a monitoring server goes off-line or comes online, processor 420 of the network device 400, or operating system 425 executing on processor 420, can re-program the CAM 455 and PRAM 456 in real time to re-load-balance the traffic to the monitoring servers.
Configuring a Monitoring Group
With reference now to
In some embodiments, monitoring groups may be utilized in conjunction with the method described in flowchart 600. In one such embodiment, the method described herein with reference to flowchart 700 is used to configure monitoring groups, e.g., step 610 of flowchart 600. In other embodiments, monitoring need not be organized into (protocol) specific groups. In one such embodiment, all traffic is shared proportionately across all available monitoring servers, as opposed to first distributing traffic to specific groups of servers based on protocol, and then distributing protocol traffic across the group's monitoring servers.
With reference to step 701, a monitoring server is added to a network. In some embodiments, when a new monitoring server is added to a network, the monitoring load balancer is made aware of this monitoring server. In one such embodiment, the monitoring load balancer may be able to autodetect the monitoring server. In another embodiment, the monitoring load balancer is configured to be aware of the monitoring server, e.g., by modifying a configuration file used by the operating system of the monitoring load balancer.
The first configuration line depicted in Table 1 indicates to the monitoring load balancer that a new IDS server, named ids1, can be found at IP address 192.168.3.1. The second configuration line depicted in Table 1 removes server ids1. Alternatively, in some embodiments, a monitoring server may be connected to the monitoring load balancer, and identified by its corresponding physical port, rather than by IP address. An example configuration line utilizing this behavior is presented below, with reference to step 705.
For example, with reference to
With reference now to step 703, a monitoring server group is created. In some embodiments, network traffic to be monitored is segregated across groups of monitoring servers, e.g., by networking protocol. In several such embodiments, defining distinct monitoring groups aids in appropriate traffic handling.
The first configuration line depicted in Table 2 defines a monitoring group, specifically a group for IDS servers, within the monitoring load balancer, and identifies this as group number 1. The second configuration line depicted above deletes monitoring group 1.
Continuing the preceding example, with reference to
With reference now to step 705, a monitoring server is associated with a monitoring group. In some embodiments, a single server may be associated with multiple groups. Further, in some embodiments, a monitoring group may, and likely will, include multiple monitoring servers. The nature of the monitoring being performed, as well as the anticipated networking load upon the monitoring group, will impact the desired number of monitoring servers included in a monitoring group. For example, an IDS group intended to monitor FTP traffic across a network may require fewer servers than a group intended to monitor HTTP traffic. In some embodiments, monitoring groups are not utilized; in one such embodiment, the monitoring load balancer may be configured to spread the load equally across multiple individual servers. In one default configuration, a single monitoring group is used for all available monitoring servers.
The first configuration line depicted in Table 3 adds two monitoring servers, ids1 and ids2, to IDS monitoring group 1. The second configuration line removes these monitoring servers from the monitoring group.
Alternatively, as previously mentioned, monitoring servers can be identified by their corresponding physical ports. The first configuration line depicted in Table 4, for example, adds two monitoring servers, located at ethernet ports 4/10 and 4/12, to IDS monitoring group 1. The second configuration line removes the monitoring server at port 4/12 from IDS monitoring group 1.
In another embodiment, explicit monitoring groups may not be utilized. Instead, when a monitoring server is identified for the monitoring load balancer, a parameter is set, e.g., identifying a networking protocol for that monitoring server. In such an embodiment, multiple monitoring servers may share the same parameters.
Continuing the preceding example, with reference to
With reference now to step 707, a protocol is associated with a monitoring group. As noted previously, in some embodiments it is desirable that different monitoring groups handle different portions of networking traffic. In several such embodiments, traffic may be segregated by networking protocol. For example, one monitoring group may be assigned to monitor HTTP traffic, while a second monitoring group monitors FTP traffic, and a third monitoring group is configured to monitor all types of networking traffic. Moreover, multiple groups may be assigned to monitor the same traffic, e.g., to allow for multiple types of traffic monitoring, such as IDS and bandwidth monitoring. In other embodiments, other divisions may be utilized, e.g., by source and/or destination.
The first configuration line depicted in Table 5 associates IDS monitoring group 1 with all HTTP network traffic. The second configuration line associates IDS monitoring group 2 with all FTP network traffic. The third configuration line associates IDS monitoring group 3 with all network traffic. In some embodiments, greater weight is given to specifically enumerated protocols, over the “default” parameter. As such, HTTP traffic and FTP traffic will be passed to IDS monitoring groups 1 and 2, respectively, rather than to IDS monitoring group 3. In other embodiments, other approaches to conflict resolution may be utilized.
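The precedence rule described above, in which specific protocol bindings win over the “default” binding, can be sketched as a simple lookup with a fallback; the group numbers mirror the description of Table 5 and are otherwise hypothetical.

```python
# Group selection with protocol precedence: HTTP and FTP go to their
# dedicated groups; everything else falls through to the default group.
GROUP_BY_PROTOCOL = {"http": 1, "ftp": 2}   # mirrors Table 5's bindings
DEFAULT_GROUP = 3

def select_group(protocol: str) -> int:
    return GROUP_BY_PROTOCOL.get(protocol.lower(), DEFAULT_GROUP)

assert select_group("HTTP") == 1
assert select_group("ftp") == 2
assert select_group("telnet") == 3   # no specific binding; default group
```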
Continuing the preceding example, with reference to
With reference now to step 709, a monitoring group is associated with an ingress port. In some embodiments, a single monitoring group may be associated with multiple ingress ports; similarly, multiple monitoring groups may be associated with a single ingress port. In the former case, a single monitoring load balancer could receive input from multiple monitored networking devices, and distribute the monitor traffic across the same group of monitoring servers. In the latter case, different monitoring groups may need to monitor traffic from the same ingress port, e.g., one monitoring group is associated with HTTP traffic coming in on the specified port, while a second monitoring group watches FTP traffic on that same port.
The first configuration line depicted in Table 6 binds IDS monitoring group 1 to ethernet ingress interface 4/1. The second configuration line binds IDS monitoring group 2 to the same interface.
For example, with reference to
Hash Functions and Forwarding Information
In different embodiments, different approaches may be utilized for determining which traffic should be forwarded to which monitoring server. In some embodiments, a hash function is utilized, with some information from an incoming network packet used to select the appropriate monitoring server. In other embodiments, other approaches may be utilized.
With reference now to
As shown, CAM 800 contains a number of entries, each conforming to a specified format, e.g., source IP address, destination IP address, source port, destination port, and protocol. Each CAM entry also has an associated index, which serves as a pointer to a location in PRAM 850. In the depicted embodiment, each such location in PRAM 850 provides forwarding information, e.g., to forward a packet from an ingress port to an egress port corresponding to a particular monitoring server. For example, entry 801 contains index 807, which corresponds to PRAM location 857. Forwarding information corresponding to monitoring server 1 is stored at PRAM location 857.
One approach to hashing incoming traffic is to selectively utilize some of the information contained in the packet header. For example, in order to reduce the number of CAM entries required to implement load balancing, it may be desirable to limit the hash value to four bits, for a total of 16 possible values. One way to obtain these four bits is to use only the final two bits from the source IP address and the destination IP address. Over a large range of IP addresses, such as may be expected in a busy network environment, network traffic should be reasonably distributed across these 16 possible values.
In some embodiments, e.g., where monitoring load balancer 110 is a network device such as a switch or router, CAM entries may have a specified format. For example, with reference to
For example, entry 801 contains a source IP field 32 bits in length, with only the final two bits specified. Similarly, entry 801 contains a destination IP field of 32 bits, with only the final two bits specified. The combination of these four bits allows for 16 possible hash values, e.g., 0000 to 1111. Further, this approach allows for defined monitoring groups, e.g., by specifying a destination port. For example, HTTP traffic is normally directed to port 80; by specifying a port in entry 801, only HTTP traffic will match that entry, and therefore be directed to the specified index in PRAM 850.
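A minimal sketch of this 4-bit hash, assuming IPv4 addresses, concatenates the two low-order bits of the source and destination addresses:

```python
# Build a 4-bit hash (16 possible values, 0000-1111) from the final
# two bits of the source and destination IPv4 addresses.
import ipaddress

def four_bit_hash(src_ip: str, dst_ip: str) -> int:
    src_bits = int(ipaddress.ip_address(src_ip)) & 0b11   # last two bits
    dst_bits = int(ipaddress.ip_address(dst_ip)) & 0b11
    return (src_bits << 2) | dst_bits                     # value in 0..15

print(four_bit_hash("10.1.2.3", "192.168.0.80"))   # prints 12 for this pair
```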
In the embodiment shown in
In other embodiments, other approaches may be utilized for capturing and storing hash values. For example, in one embodiment, a more complicated hash function is utilized, e.g., using multiple packet header values to calculate a hash value. Some such embodiments may utilize a dedicated module, e.g., an ASIC or FPGA, to perform such a hash calculation at high speed. Other embodiments may utilize additional CAM assets to store greater numbers of hash values. Different embodiments will make different determinations, as to the trade-off between CAM assets and hash value calculation.
Traffic Flow
In networking, the network traffic passed between a particular source and destination can be described as a “flow.” In some embodiments, it is desirable that packets corresponding to a particular flow should be sent to the same monitoring server. For example, with reference to
In some embodiments, this flow preservation is obtained by careful generation of forwarding information. The hash values corresponding to traffic flowing in one direction are unlikely to be the same as those corresponding to traffic flowing in the other direction. As such, when forwarding information is generated, the forwarding information corresponding to those two hash values should indicate that traffic should be sent to the same monitoring server.
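Continuing the 4-bit hash sketch above, flow preservation can be modeled by writing both the forward-direction and reverse-direction hash values into the table with the same destination server; the function and server names are hypothetical.

```python
# Map both directions of a flow to the same monitoring server, using
# four_bit_hash() from the previous sketch.
def assign_flow(table, src_ip, dst_ip, server):
    table[four_bit_hash(src_ip, dst_ip)] = server   # client -> server direction
    table[four_bit_hash(dst_ip, src_ip)] = server   # reply direction, same server

table = {}
assign_flow(table, "10.1.2.3", "192.168.0.80", "ids1")
# Both hash values (12 and 3 here) now forward to ids1.
```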
For example, with reference to
Configuring a Load Balancing Network Device
With reference now to
With reference now to step 910, monitoring server configuration information is received. In some embodiments, this monitoring server configuration information is provided by a user, e.g., by editing a configuration file for the load-balancing network device. In other embodiments, monitoring server information may be obtained in other ways, e.g., through autodetection of available monitoring servers.
With reference now to step 920, forwarding data is calculated, with reference to the monitoring server configuration information. In some embodiments, these calculations are performed by an operating system residing on the network device, e.g., operating system 425. As discussed previously, different embodiments may utilize different approaches for calculating forwarding information. For example, hash values for anticipated network traffic may be calculated, and used to allocate network traffic across the available monitoring servers.
With reference now to step 930, the forwarding data is used to configure the network device to load balance across available monitoring servers. In some embodiments, the operating system for the network device uses the forwarding information to configure the network device to allow for hardware load-balancing across monitoring servers. For example, the operating system may populate content addressable memory (CAM) with the calculated hash values, and corresponding index values pointing into parameter random access memory (PRAM). At the indicated locations in PRAM, the operating system may store appropriate forwarding information, e.g., a forwarding identifier, egress port, or similar information, to allow an incoming packet to be routed to a particular monitoring server.
Embodiments of the present invention are thus described. While the present invention has been described in particular embodiments, it should be appreciated that the present invention should not be construed as limited by such embodiments, but rather construed according to the following claims.
This application is a continuation application of U.S. patent application Ser. No. 11/937,285, entitled “Monitoring Server Load Balancing,” filed Nov. 8, 2007, now U.S. Pat. No. 8,248,928, issued Aug. 21, 2012, to Wang et al., assigned to the assignee of the present application, and hereby incorporated by reference in its entirety. This application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Application Ser. No. 60/998,410, filed Oct. 9, 2007, to Wang et al., entitled “MONITORING SERVER LOAD BALANCING,” which is incorporated herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5031094 | Toegel et al. | Jul 1991 | A |
5359593 | Derby et al. | Oct 1994 | A |
5948061 | Merriman et al. | Sep 1999 | A |
5951634 | Sitbon et al. | Sep 1999 | A |
6006269 | Phaal | Dec 1999 | A |
6006333 | Nielsen | Dec 1999 | A |
6092178 | Jindal et al. | Jul 2000 | A |
6112239 | Kenner et al. | Aug 2000 | A |
6115752 | Chauhan | Sep 2000 | A |
6128279 | O'Neil et al. | Oct 2000 | A |
6128642 | Doraswamy et al. | Oct 2000 | A |
6148410 | Baskey et al. | Nov 2000 | A |
6167445 | Gai et al. | Dec 2000 | A |
6167446 | Lister et al. | Dec 2000 | A |
6182139 | Brendel | Jan 2001 | B1 |
6195691 | Brown | Feb 2001 | B1 |
6205477 | Johnson et al. | Mar 2001 | B1 |
6233604 | Van Horne et al. | May 2001 | B1 |
6260070 | Shah | Jul 2001 | B1 |
6286039 | Van Horne et al. | Sep 2001 | B1 |
6286047 | Ramanathan et al. | Sep 2001 | B1 |
6304913 | Rune | Oct 2001 | B1 |
6324580 | Jindal et al. | Nov 2001 | B1 |
6327622 | Jindal et al. | Dec 2001 | B1 |
6336137 | Lee et al. | Jan 2002 | B1 |
6381627 | Kwan et al. | Apr 2002 | B1 |
6389462 | Cohen et al. | May 2002 | B1 |
6427170 | Sitaraman et al. | Jul 2002 | B1 |
6434118 | Kirschenbaum | Aug 2002 | B1 |
6438652 | Jordan et al. | Aug 2002 | B1 |
6446121 | Shah et al. | Sep 2002 | B1 |
6449657 | Stanbach, Jr. et al. | Sep 2002 | B2 |
6470389 | Chung et al. | Oct 2002 | B1 |
6473802 | Masters | Oct 2002 | B2 |
6480508 | Mwikalo et al. | Nov 2002 | B1 |
6490624 | Sampson et al. | Dec 2002 | B1 |
6549944 | Weinberg et al. | Apr 2003 | B1 |
6567377 | Vepa et al. | May 2003 | B1 |
6578066 | Logan et al. | Jun 2003 | B1 |
6606643 | Emens et al. | Aug 2003 | B1 |
6665702 | Zisapel et al. | Dec 2003 | B1 |
6671275 | Wong et al. | Dec 2003 | B1 |
6681232 | Sitanizadeh et al. | Jan 2004 | B1 |
6681323 | Fontsnesi et al. | Jan 2004 | B1 |
6691165 | Bruck et al. | Feb 2004 | B1 |
6697368 | Chang et al. | Feb 2004 | B2 |
6735218 | Chang et al. | May 2004 | B2 |
6745241 | French et al. | Jun 2004 | B1 |
6751616 | Chan | Jun 2004 | B1 |
6772211 | Lu et al. | Aug 2004 | B2 |
6779017 | Lamberton et al. | Aug 2004 | B1 |
6789125 | Aviani et al. | Sep 2004 | B1 |
6826198 | Turina et al. | Nov 2004 | B2 |
6831891 | Mansharamani et al. | Dec 2004 | B2 |
6839700 | Doyle et al. | Jan 2005 | B2 |
6850984 | Kalkunte et al. | Feb 2005 | B1 |
6874152 | Vermeire et al. | Mar 2005 | B2 |
6879995 | Chinta et al. | Apr 2005 | B1 |
6898633 | Lyndersay et al. | May 2005 | B1 |
6901072 | Wong | May 2005 | B1 |
6901081 | Ludwig | May 2005 | B1 |
6928485 | Krishnamurthy et al. | Aug 2005 | B1 |
6944678 | Lu et al. | Sep 2005 | B2 |
6963914 | Breitbart et al. | Nov 2005 | B1 |
6963917 | Callis et al. | Nov 2005 | B1 |
6985956 | Luke et al. | Jan 2006 | B2 |
6987763 | Rochberger et al. | Jan 2006 | B2 |
6996615 | McGuire | Feb 2006 | B1 |
6996616 | Leighton et al. | Feb 2006 | B1 |
7000007 | Valenti | Feb 2006 | B1 |
7009968 | Ambe et al. | Mar 2006 | B2 |
7020698 | Andrews et al. | Mar 2006 | B2 |
7020714 | Kalyanaraman et al. | Mar 2006 | B2 |
7028083 | Levine et al. | Apr 2006 | B2 |
7031304 | Arberg et al. | Apr 2006 | B1 |
7032010 | Swildens et al. | Apr 2006 | B1 |
7036039 | Holland | Apr 2006 | B2 |
7058717 | Chao et al. | Jun 2006 | B2 |
7062642 | Langride et al. | Jun 2006 | B1 |
7086061 | Joshi et al. | Aug 2006 | B1 |
7089293 | Grosner et al. | Aug 2006 | B2 |
7117530 | Lin | Oct 2006 | B1 |
7126910 | Sridhar | Oct 2006 | B1 |
7127713 | Davis et al. | Oct 2006 | B2 |
7136932 | Schneider | Nov 2006 | B1 |
7139242 | Bays | Nov 2006 | B2 |
7177933 | Foth | Feb 2007 | B2 |
7185052 | Day | Feb 2007 | B2 |
7187687 | Davis et al. | Mar 2007 | B1 |
7188189 | Karol et al. | Mar 2007 | B2 |
7197547 | Miller et al. | Mar 2007 | B1 |
7206806 | Pineau | Apr 2007 | B2 |
7215637 | Ferguson et al. | May 2007 | B1 |
7225272 | Kelley et al. | May 2007 | B2 |
7240015 | Karmouch et al. | Jul 2007 | B1 |
7240100 | Wein et al. | Jul 2007 | B1 |
7254626 | Kommula et al. | Aug 2007 | B1 |
7257642 | Bridger et al. | Aug 2007 | B1 |
7260645 | Bays | Aug 2007 | B2 |
7266117 | Davis | Sep 2007 | B1 |
7266120 | Cheng et al. | Sep 2007 | B2 |
7277954 | Stewart et al. | Oct 2007 | B1 |
7292573 | LaVigne et al. | Nov 2007 | B2 |
7296088 | Padmanabhan et al. | Nov 2007 | B1 |
7321926 | Zhang et al. | Jan 2008 | B1 |
7424018 | Gallatin et al. | Sep 2008 | B2 |
7436832 | Gallatin et al. | Oct 2008 | B2 |
7440467 | Gallatin et al. | Oct 2008 | B2 |
7441045 | Skene et al. | Oct 2008 | B2 |
7450527 | Ashwood Smith | Nov 2008 | B2 |
7454500 | Hsu et al. | Nov 2008 | B1 |
7483374 | Nilakantan et al. | Jan 2009 | B2 |
7506065 | LaVigne et al. | Mar 2009 | B2 |
7555562 | See et al. | Jun 2009 | B2 |
7558195 | Kuo et al. | Jul 2009 | B1 |
7574508 | Kommula | Aug 2009 | B1 |
7581009 | Hsu et al. | Aug 2009 | B1 |
7584301 | Joshi | Sep 2009 | B1 |
7587487 | Gunturu | Sep 2009 | B1 |
7606203 | Shabtay et al. | Oct 2009 | B1 |
7647427 | Devarapalli | Jan 2010 | B1 |
7657629 | Kommula | Feb 2010 | B1 |
7690040 | Frattura et al. | Mar 2010 | B2 |
7706363 | Daniel et al. | Apr 2010 | B1 |
7716370 | Devarapalli | May 2010 | B1 |
7720066 | Weyman et al. | May 2010 | B2 |
7720076 | Dobbins et al. | May 2010 | B2 |
7747737 | Apte et al. | Jun 2010 | B1 |
7756965 | Joshi | Jul 2010 | B2 |
7774833 | Szeto et al. | Aug 2010 | B1 |
7787454 | Won et al. | Aug 2010 | B1 |
7792047 | Gallatin et al. | Sep 2010 | B2 |
7835358 | Gallatin et al. | Nov 2010 | B2 |
7840678 | Joshi | Nov 2010 | B2 |
7848326 | Leong et al. | Dec 2010 | B1 |
7889748 | Leong et al. | Feb 2011 | B1 |
7899899 | Joshi | Mar 2011 | B2 |
7940766 | Olakangil et al. | May 2011 | B2 |
7953089 | Ramakrishnan et al. | May 2011 | B1 |
8208494 | Leong | Jun 2012 | B2 |
8238344 | Chen et al. | Aug 2012 | B1 |
8239960 | Frattura et al. | Aug 2012 | B2 |
8248928 | Wang et al. | Aug 2012 | B1 |
8270845 | Cheung et al. | Sep 2012 | B2 |
8315256 | Leong et al. | Nov 2012 | B2 |
8386846 | Cheung | Feb 2013 | B2 |
8391286 | Gallatin et al. | Mar 2013 | B2 |
8504721 | Hsu et al. | Aug 2013 | B2 |
8514718 | Zijst | Aug 2013 | B2 |
8537697 | Leong et al. | Sep 2013 | B2 |
8570862 | Leong et al. | Oct 2013 | B1 |
8615008 | Natarajan et al. | Dec 2013 | B2 |
8654651 | Leong et al. | Feb 2014 | B2 |
8824466 | Won et al. | Sep 2014 | B2 |
8830819 | Leong et al. | Sep 2014 | B2 |
8873557 | Nguyen | Oct 2014 | B2 |
8891527 | Wang | Nov 2014 | B2 |
8897138 | Yu et al. | Nov 2014 | B2 |
8953458 | Leong et al. | Feb 2015 | B2 |
20010049741 | Skene et al. | Dec 2001 | A1 |
20010052016 | Skene et al. | Dec 2001 | A1 |
20020018796 | Wironen | Feb 2002 | A1 |
20020023089 | Woo | Feb 2002 | A1 |
20020026551 | Kamimaki et al. | Feb 2002 | A1 |
20020038360 | Andrews et al. | Mar 2002 | A1 |
20020055939 | Nardone et al. | May 2002 | A1 |
20020059170 | Vange | May 2002 | A1 |
20020059464 | Hata et al. | May 2002 | A1 |
20020062372 | Hong et al. | May 2002 | A1 |
20020078233 | Biliris et al. | Jun 2002 | A1 |
20020091840 | Pulier et al. | Jul 2002 | A1 |
20020112036 | Bohannan et al. | Aug 2002 | A1 |
20020120743 | Shabtay et al. | Aug 2002 | A1 |
20020124096 | Loguinov et al. | Sep 2002 | A1 |
20020133601 | Kennamer et al. | Sep 2002 | A1 |
20020150048 | Ha et al. | Oct 2002 | A1 |
20020154600 | Ido et al. | Oct 2002 | A1 |
20020188862 | Trethewey et al. | Dec 2002 | A1 |
20020194324 | Guha | Dec 2002 | A1 |
20020194335 | Maynard | Dec 2002 | A1 |
20030031185 | Kikuchi et al. | Feb 2003 | A1 |
20030035430 | Islam et al. | Feb 2003 | A1 |
20030065711 | Acharya et al. | Apr 2003 | A1 |
20030065763 | Swildens et al. | Apr 2003 | A1 |
20030105797 | Dolev et al. | Jun 2003 | A1 |
20030115283 | Barbir et al. | Jun 2003 | A1 |
20030135509 | Davis et al. | Jul 2003 | A1 |
20030202511 | Sreejith et al. | Oct 2003 | A1 |
20030210686 | Terrell et al. | Nov 2003 | A1 |
20030210694 | Jayaraman et al. | Nov 2003 | A1 |
20030229697 | Borella | Dec 2003 | A1 |
20040019680 | Chao et al. | Jan 2004 | A1 |
20040024872 | Kelley et al. | Feb 2004 | A1 |
20040064577 | Dahlin et al. | Apr 2004 | A1 |
20040194102 | Neerdaels | Sep 2004 | A1 |
20040249939 | Amini et al. | Dec 2004 | A1 |
20040249971 | Klinker | Dec 2004 | A1 |
20050021883 | Shishizuka et al. | Jan 2005 | A1 |
20050033858 | Swildens et al. | Feb 2005 | A1 |
20050060418 | Sorokopud | Mar 2005 | A1 |
20050060427 | Phillips et al. | Mar 2005 | A1 |
20050086295 | Cunningham et al. | Apr 2005 | A1 |
20050149531 | Srivastava | Jul 2005 | A1 |
20050169180 | Ludwig | Aug 2005 | A1 |
20050190695 | Phaal | Sep 2005 | A1 |
20050207417 | Ogawa et al. | Sep 2005 | A1 |
20050278565 | Frattura et al. | Dec 2005 | A1 |
20050286416 | Shimonishi et al. | Dec 2005 | A1 |
20060036743 | Deng et al. | Feb 2006 | A1 |
20060039374 | Belz et al. | Feb 2006 | A1 |
20060045082 | Fertell et al. | Mar 2006 | A1 |
20060143300 | See et al. | Jun 2006 | A1 |
20070195761 | Tatar et al. | Aug 2007 | A1 |
20070233891 | Luby et al. | Oct 2007 | A1 |
20080002591 | Ueno | Jan 2008 | A1 |
20080031141 | Lean et al. | Feb 2008 | A1 |
20080159141 | Soukup et al. | Jul 2008 | A1 |
20080195731 | Harmel et al. | Aug 2008 | A1 |
20080225710 | Raja et al. | Sep 2008 | A1 |
20080304423 | Chuang et al. | Dec 2008 | A1 |
20090135835 | Gallatin et al. | May 2009 | A1 |
20090262745 | Leong et al. | Oct 2009 | A1 |
20100011126 | Hsu et al. | Jan 2010 | A1 |
20100135323 | Leong | Jun 2010 | A1 |
20100209047 | Cheung et al. | Aug 2010 | A1 |
20100293296 | Hsu et al. | Nov 2010 | A1 |
20100325178 | Won et al. | Dec 2010 | A1 |
20110044349 | Gallatin et al. | Feb 2011 | A1 |
20110058566 | Leong et al. | Mar 2011 | A1 |
20110211443 | Leong et al. | Sep 2011 | A1 |
20110216771 | Gallatin et al. | Sep 2011 | A1 |
20120023340 | Cheung | Jan 2012 | A1 |
20120157088 | Gerber et al. | Jun 2012 | A1 |
20120243533 | Leong | Sep 2012 | A1 |
20120257635 | Gallatin et al. | Oct 2012 | A1 |
20130010613 | Cafarelli et al. | Jan 2013 | A1 |
20130034107 | Leong et al. | Feb 2013 | A1 |
20130156029 | Gallatin et al. | Jun 2013 | A1 |
20130173784 | Wang et al. | Jul 2013 | A1 |
20130201984 | Wang | Aug 2013 | A1 |
20130259037 | Natarajan et al. | Oct 2013 | A1 |
20130272135 | Leong | Oct 2013 | A1 |
20140016500 | Leong et al. | Jan 2014 | A1 |
20140022916 | Natarajan et al. | Jan 2014 | A1 |
20140029451 | Nguyen | Jan 2014 | A1 |
20140040478 | Hsu et al. | Feb 2014 | A1 |
20140204747 | Yu et al. | Jul 2014 | A1 |
20140321278 | Cafarelli et al. | Oct 2014 | A1 |
20150033169 | Lection et al. | Jan 2015 | A1 |
20150180802 | Chen et al. | Jun 2015 | A1 |
20150215841 | Hsu et al. | Jul 2015 | A1 |
Number | Date | Country |
---|---|---|
2654340 | Oct 2013 | EP |
20070438 | Feb 2008 | IE |
2010135474 | Nov 2010 | WO |
Entry |
---|
IBM User Guide, Version 2.1 AIX, Solaris and Windows NT, Third Edition (Mar. 1999) 102 Pages. |
U.S. Appl. No. 60/169,502, filed Dec. 7, 2009 by Yeejang James Lin. |
U.S. Appl. No. 60/182,812, filed Feb. 16, 2000, by Skene et al. |
U.S. Appl. No. 09/459,815, filed Dec. 13, 1999, by Skene et al. |
Delgadillo, “Cisco Distributed Director,” White Paper, 1999, at URL: http://www-europe.cisco.warp/public/751/distdir/dd—wp.htm, (19 pages) with Table of Contents for TeleCon (16 pages). |
Cisco LocalDirector Version 1.6.3 Release Notes, Oct. 1997, Cisco Systems, Inc. Doc No. 78-3880-05. |
Foundry Networks Announces Application Aware Layer 7 Switching on ServerIron Platform, (Mar. 1999). |
Foundry ServerIron Installation and Configuration Guide (May 2000), Table of Contents—Chapter 5, http://web.archive.org/web/20000815085849/http://www.foundrynetworks.com/techdocs/SI/index.html. |
Foundry ServerIron Installation and Configuration Guide (May 2000), Chapter 6-10, http://web.archive.org/web/20000815085849/http://www.foundrynetworks.com/techdocs/SI/index.html. |
Foundry ServerIron Installation and Configuration Guide (May 2000), Chapter 11—Appendix C, http://web.archive.org/web/20000815085849/http://www.foundrynetworks.com/techdocs/SI/index.html. |
U.S. Appl. No. 14/320,138, filed Jun. 30, 2014 by Chen et al., (Unpublished). |
U.S. Appl. No. 61/919,244, filed Dec. 20, 2013 by Chen et al. |
U.S. Appl. No. 61/932,650, filed Jan. 28, 2014 by Munshi et al. |
U.S. Appl. No. 61/994,693, filed May 16, 2014 by Munshi et al. |
U.S. Appl. No. 62/088,434, filed Dec. 5, 2014 by Hsu et al. |
U.S. Appl. No. 62/137,073, filed Mar. 23, 2015 by Chen et al. |
U.S. Appl. No. 62/137,084, filed Mar. 23, 2015 by Chen et al. |
U.S. Appl. No. 62/137,096, filed Mar. 23, 2015 by Laxman et al. |
U.S. Appl. No. 62/137,106, filed Mar. 23, 2015 by Laxman et al. |
U.S. Appl. No. 60/998,410, filed Oct. 9, 2007 by Wang et al. |
PCT Patent Application No. PCT/US2015/012915 filed on Jan. 26, 2015 by Hsu et al. |
U.S. Appl. No. 14/848,586, filed Sep. 9, 2015 by Chen et al., (Unpublished). |
U.S. Appl. No. 14/848,645, filed Sep. 9, 2015 by Chen et al., (Unpublished). |
U.S. Appl. No. 14/848,677, filed Sep. 9, 2015 by Chen et al., (Unpublished). |
Brocade and IBM Real-Time Network Analysis Solution; 2011 Brocade Communications Systems, Inc.; 2 pages. |
Brocade IP Network Leadership Technology; Enabling Non-Stop Networking for Stackable Switches with Hitless Failover; 2010; 3 pages. |
Gigamon Adaptive Packet Filtering; Feature Brief; 3098-03 Apr. 2015; 3 pages. |
Gigamon: Active Visibility for Multi-Tiered Security Solutions Overview; 3127-02; Oct. 2014; 5 pages. |
Gigamon: Application Note Stateful GTP Correlation; 4025-02; Dec. 2013; 9 pages. |
Gigamon: Enabling Network Monitoring at 40Gbps and 100Gbps with Flow Mapping Technology White Paper; 2012; 4 pages. |
Gigamon: Enterprise System Reference Architecture for the Visibility Fabric White Paper; 5005-03; Oct. 2014; 13 pages. |
Gigamon: Gigamon Intelligent Flow Mapping White Paper; 3039-02; Aug. 2013; 7 pages. |
Gigamon: GigaVUE-HB1 Data Sheet; 4011-07; Oct. 2014; 4 pages. |
Gigamon: Maintaining 3G and 4G LTE Quality of Service White Paper; 2012; 4 pages. |
Gigamon: Monitoring, Managing, and Securing SDN Deployments White Paper; 3106-01; May 2014; 7 pages. |
Gigamon: Netflow Generation Feature Brief; 3099-04; Oct. 2014; 2 pages. |
Gigamon: Service Provider System Reference Architecture for the Visibility Fabric White Paper; 5004-01; Mar. 2014; 11 pages. |
Gigamon: The Visibility Fabric Architecture—A New Approach to Traffic Visibility White Paper; 2012-2013; 8 pages. |
Gigamon: Unified Visibility Fabric—A New Approach to Visibility White Paper; 3072-04; Jan. 2015; 6 pages. |
Gigamon: Unified Visibility Fabric Solution Brief; 3018-03; Jan. 2015; 4 pages. |
Gigamon: Unified Visibility Fabric; https://www.gigamon.com/unfied-visibility-fabric; Apr. 7, 2015; 5 pages. |
Gigamon: Visibility Fabric Architecture Solution Brief; 2012-2013; 2 pages. |
Gigamon: Visibility Fabric; More than Tap and Aggregation.bmp; 2014; 1 page. |
Gigamon: Vistapointe Technology Solution Brief; Visualize-Optimize-Monetize-3100-02; Feb. 2014; 2 pages. |
International Search Report & Written Opinion for PCT Application PCT/US2015/012915 mailed Apr. 10, 2015, 15 pages. |
Ixia Anue GTP Session Controller; Solution Brief; 915-6606-01 Rev. A, Sep. 2013; 2 pages. |
Ixia: Creating a Visibility Architecture—a New Perspective on Network Visibility White Paper; 915-6581-01 Rev. A, Feb. 2014; 14 pages. |
Netscout: nGenius Subscriber Intelligence; Data Sheet; SPDS—001-12; 2012; 6 pages. |
Netscout; Comprehensive Core-to-Access IP Session Analysis for GPRS and UMTS Networks; Technical Brief; Jul. 16, 2010; 6 pages. |
ntop: Monitoring Mobile Networks (2G, 3G and LTE) using nProbe; http://www.ntop.org/nprobe/monitoring-mobile-networks-2g-3g-and-lte-using-nprobe; Apr. 2, 2015; 4 pages. |
White Paper, Foundry Networks, “Server Load Balancing in Today's Web-Enabled Enterprises” Apr. 2002 10 Pages. |
Non-Final Office Action for U.S. Appl. No. 11/827,524 mailed on Dec. 10, 2009, 15 pages. |
Non-Final Office Action for U.S. Appl. No. 11/827,524 mailed on Jun. 2, 2010, 14 pages. |
Non-Final Office Action for U.S. Appl. No. 11/827,524 mailed on Nov. 26, 2010, 16 pages. |
Final Office Action for U.S. Appl. No. 11/827,524 mailed on May 6, 2011, 19 pages. |
Advisory Action for U.S. Appl. No. 11/827,524 mailed on Jul. 14, 2011, 5 pages. |
Non-Final Office Action for U.S. Appl. No. 11/827,524 mailed on Oct. 18, 2012, 24 pages. |
Notice of Allowance for U.S. Appl. No. 11/827,524 mailed on Jun. 25, 2013, 11 pages. |
Non-Final Office Action for U.S. Appl. No. 14/030,782 mailed on Oct. 6, 2014, 14 pages. |
Final Office Action for U.S. Appl. No. 14/030,782 mailed on Jul. 29, 2015, 26 pages. |
Non-Final Office Action for U.S. Appl. No. 11/937,285 mailed on Jul. 6, 2009, 28 pages. |
Final Office Action for U.S. Appl. No. 11/937,285 mailed on Mar. 3, 2010, 28 pages. |
Non-Final Office Action for U.S. Appl. No. 11/937,285 mailed on Aug. 17, 2010, 28 pages. |
Final Office Action for U.S. Appl. No. 11/937,285 mailed on Jan. 20, 2011, 41 pages. |
Final Office Action for U.S. Appl. No. 11/937,285 mailed on May 20, 2011, 37 pages. |
Non-Final Office Action for U.S. Appl. No. 11/937,285 mailed on Nov. 28, 2011, 40 pages. |
Notice of Allowance for U.S. Appl. No. 11/937,285 mailed on Jun. 5, 2012, 10 pages. |
Notice of Allowance for U.S. Appl. No. 14/030,782, mailed on Nov. 16, 2015, 20 pages. |
Number | Date | Country | |
---|---|---|---|
20130173784 A1 | Jul 2013 | US |
Number | Date | Country | |
---|---|---|---|
60998410 | Oct 2007 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11937285 | Nov 2007 | US |
Child | 13584534 | US |