The present disclosure relates generally to data networks and network devices, and in particular, to systems and methods for managing network switches.
Data networks often comprise numerous network devices connected together to move data between various sources and destinations. Managing the operation of such networks can be a challenge. Maintaining uptime in a network is often imperative, and when various network devices become inoperable, traffic must be rerouted quickly to prevent loss of service.
One common network device is a network switch. A network switch may interconnect many sources and destinations of network traffic. A typical switch may include many switch processors, each having at least one input port and one output port to send and receive data over a wired connection. However, when a switch processor becomes inoperable, there may be a corresponding interruption in the flow of data through the switch.
The present disclosure presents an innovative technique for managing switches and other resources in a network device to reduce downtime, for example.
With respect to the discussion to follow and in particular to the drawings, it is stressed that the particulars shown represent examples for purposes of illustrative discussion, and are presented in the cause of providing a description of principles and conceptual aspects of the present disclosure. In this regard, no attempt is made to show implementation details beyond what is needed for a fundamental understanding of the present disclosure. The discussion to follow, in conjunction with the drawings, makes apparent to those of skill in the art how embodiments in accordance with the present disclosure may be practiced. Similar or same reference numbers may be used to identify or otherwise refer to similar or same elements in the various drawings and supporting descriptions.
Described herein are techniques for managing components of a network device. In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of some embodiments. Some embodiments as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein.
Features and advantages of the present disclosure include techniques for managing components of a network device. A network device may comprise multiple of the same components, such as switches. It may be desirable for software to interface with one software component, which in turn programs multiple components of the network device. For example, multiple switch processors may be programmed by a switch processor programming agent. The switch processor programming agent may be a single point of interface for software used to program the switch processors.
Embodiments of the present disclosure may include a switch processor programming agent that establishes a logical interface to multiple switch processors in a network device. Feature agents for performing operations on a plurality of switch processors receive configuration data. The switch processor programming agent translates the configuration data from a first format to a second format and programs multiple switch processors in the second format. Switch processors may be switch ASICs for routing network traffic. In one embodiment, a switch processor and a redundant switch processor are maintained in the same state by the switch processor programming agent for a seamless transition to the redundant switch processor when the other switch processor becomes inoperable.
Processor 101 may generate feature agents 120. Feature agents 120 may comprise code for managing particular features of switch processors 102-103. Feature agents 120 may program related sets of features into switch processors 102-103. For example, a feature agent may comprise logic for controlling mirroring in a switch processor (e.g., copying traffic from one port to another port). Another feature agent may comprise logic for configuring a layer 2 media access controller (L2 MAC) in a switch processor (e.g., L2 Ethernet tables) or layer 3 internet protocol (IP) tables. Yet another feature agent may comprise logic for configuring virtual local area network (VLAN) translations. A variety of feature agents may be used to perform other tasks on switch processors 102-103.
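The feature agents described above might be organized as small per-feature software components, each holding its own configuration data. The following Python sketch is illustrative only; the class and attribute names (FeatureAgent, config_table) and the example port numbers are assumptions for discussion, not part of the disclosure.

```python
# Hypothetical sketch of a feature agent; all names here are illustrative
# assumptions rather than an implementation of the disclosure.

class FeatureAgent:
    """Manages one feature of a switch processor, such as mirroring,
    L2 MAC tables, or VLAN translation."""

    def __init__(self, feature_name):
        self.feature_name = feature_name
        self.config_table = {}  # configuration data in a "first format"

    def set_entry(self, key, value):
        self.config_table[key] = value


# One agent per feature; a mirroring agent might copy traffic between ports.
mirror_agent = FeatureAgent("mirroring")
mirror_agent.set_entry("src_port", 1)  # copy traffic from port 1...
mirror_agent.set_entry("dst_port", 7)  # ...to port 7
```

A separate agent instance would exist for each feature (mirroring, L2 MAC, VLAN translation, and so on), keeping feature logic isolated from switch-specific details.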
Typically, control and configuration software may be required to interface directly with each switch processor, which may require such software to include switch processor specific logic and information. Advantageously, certain embodiments of the present disclosure include one or more switch processor programming agents 121, which may comprise a logical representation of the switch processor (e.g., a virtual representation of the physical hardware in the switch processors). Switch processor programming agents 121 may act as an interface for one or more feature agents to program multiple switch processors 102-103, for example, such that the feature agents may be unaware of the physical hardware in the switch processors. Switch processor programming agents 121 may store switch state information (e.g., configuration data) for a plurality of switch processors 102-103 to program multiple switch processors into the same state, for example.
Switch processor programming agents 121 may receive configuration data from feature agents 120 in a first format. The configuration data may comprise a table of data in one of the feature agents in a first format, such as an intermediate level programming language format (e.g., C++) or the like. Switch processor programming agents 121 translate the configuration data into a second format, which may be used to program the switch processors. The second format may be a hardware specific format for directly programming circuits (e.g., particular registers) in the switch processors, for example. For instance, a table of data in a feature agent used to specify a particular operation of the switch processor may be converted to a table of data in another format. The translated data may be used to program switch processors 102-103 to carry out the particular operation. Switch processor programming agents 121 may automatically program a plurality of switch processors with the configuration data in the second format. For example, in one embodiment, the configuration data may be converted to a direct memory access (DMA) format, and a switch processor programming agent 121 executing on processor 101 may perform a DMA write to switch processors 102 and 103 over data bus 150.
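The translation step above can be sketched as a mapping from a feature-level table to hardware-level write operations. This is a minimal illustration; the register addresses, field names, and the (address, value) encoding of the "second format" are invented for the example and do not reflect any particular switch ASIC.

```python
# Configuration data in a "first format": a feature-level table,
# here for a hypothetical mirroring feature.
mirror_table = {"src_port": 1, "dst_port": 7, "enabled": True}

# Hypothetical register map of the switch processor hardware.
REGISTER_MAP = {"src_port": 0x1000, "dst_port": 0x1004, "enabled": 0x1008}


def translate_to_dma(table):
    """Translate the first format into a 'second format': a list of
    (register address, value) pairs suitable for a DMA write."""
    return [(REGISTER_MAP[field], int(value)) for field, value in table.items()]


dma_ops = translate_to_dma(mirror_table)
# Each pair can then be written directly to switch processor registers.
```

The same translated list could be written to every switch processor, which is what lets one programming agent keep multiple processors in an identical state.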
In this illustrative example, switches 302 and 303 comprise control plane processors 310 and 312 and packet processors 311 and 313, respectively. Packet processors 311/313 route network traffic (e.g., across a “data plane”) and are examples of switch processors. Control plane processors 310/312 are used to configure, control, and otherwise manage the operation of the packet processors 311/313 (e.g., from a “control plane”) and provide an interface for network administration, for example.
Processor 301 may execute a plurality of feature agent software components 320a-n. In this example, each feature agent 320a-n controls a particular aspect of the packet processors 311/313 and has a corresponding switch processor programming agent 321a-n to program the hardware in packet processors 311/313. For instance, feature agent 320a may receive Table 1 (330a) comprising configuration logic for a particular feature of packet processors 311/313 (e.g., mirroring). Switch processor programming agent A 321a may be an interface for feature agent 320a to packet processors 311/313 and may convert the logic (code) in Table 1 (330a) to a direct memory access (DMA) format 331a. Processor 301 may signal control plane processors 310 and 312 that a DMA transaction is to occur, and processor 301 may write the DMA code into both packet processors 311 and 313 (e.g., simultaneously) so that both packet processors are in the same state. Similarly, other feature agents 320b-n may receive configuration data (e.g., Tables 2-N, 330b-n), and corresponding switch processor programming agents 321b-n translate the configuration logic to the DMA format to configure packet processors 311 and 313 in the same state. In this example, feature agents working with corresponding customized switch processor programming agents may reduce latency in programming the switch processors, given the potentially large volumes of configuration data that may be programmed across all the different feature agents.
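The fan-out described above, where one programming agent writes the same DMA-format payload to both packet processors so that they end in identical states, can be sketched as follows. The SwitchProcessor and ProgrammingAgent classes are stand-ins invented for illustration; real hardware would be programmed over a bus rather than a Python dictionary.

```python
# Hedged sketch: one programming agent programs all attached processors
# with the same translated payload; all names are hypothetical.

class SwitchProcessor:
    def __init__(self):
        self.registers = {}  # models hardware register state

    def dma_write(self, ops):
        for address, value in ops:
            self.registers[address] = value


class ProgrammingAgent:
    """Single point of interface: programs every attached processor,
    keeping them in the same state."""

    def __init__(self, processors):
        self.processors = processors

    def program(self, dma_ops):
        for proc in self.processors:  # e.g., packet processors 311 and 313
            proc.dma_write(dma_ops)


pp_a, pp_b = SwitchProcessor(), SwitchProcessor()
agent = ProgrammingAgent([pp_a, pp_b])
agent.program([(0x1000, 1), (0x1004, 7)])
# Both packet processors now hold identical register state.
```

Because the feature agent only talks to the programming agent, it never needs to know how many processors exist or how they are addressed.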
As shown, network device 600 includes a management module 602, an internal fabric module 604, and a number of I/O modules 606(1)-606(P). Management module 602 includes one or more management CPUs 608 for managing/controlling the operation of the device. Each management CPU 608 can be a general purpose processor, such as an Intel/AMD x86 or ARM-based processor, for example, that operates under the control of software stored in an associated memory (not shown). Management module 602 may receive configuration data 603 (e.g., software executed by CPU 608 or sent to packet processors) as described above.
Internal fabric module 604 and I/O modules 606(1)-606(P) collectively represent the data, or forwarding, plane of network device 600. Internal fabric module 604 is configured to interconnect the various other modules of network device 600. Each I/O module 606(1)-606(P) includes one or more input/output ports 610(1)-610(Q) that are used by network device 600 to send and receive network packets. Each I/O module 606(1)-606(P) can also include a packet processor 612(1)-612(P). Each packet processor 612(1)-612(P) is a hardware processing component (e.g., an ASIC) that can make wire speed decisions on how to handle incoming or outgoing network packets. In certain embodiments, translated configuration data 630(1)-(P) may be received within packet processors 612(1)-612(P) to configure the packet processors. In this example, each I/O module 606(1)-606(P) further includes a routing table 613(1)-613(P), which may include a content addressable memory (CAM, such as a TCAM).
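A routing table such as 613(1)-613(P) typically resolves a destination address to a next hop by longest-prefix match, which a TCAM performs in hardware in a single lookup. The following software sketch illustrates the matching semantics only; the example prefixes and port numbers are invented, and a TCAM would return the highest-priority matching entry directly rather than iterating.

```python
import ipaddress

# Invented example table: prefix -> next-hop port.
routes = {
    "10.0.0.0/8": 1,
    "10.1.0.0/16": 2,
    "0.0.0.0/0": 0,  # default route
}


def lookup(dst):
    """Return the next hop for the longest matching prefix (a TCAM
    yields the same answer in one hardware cycle)."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(p) for p in routes if addr in ipaddress.ip_network(p)),
        key=lambda n: n.prefixlen,
    )
    return routes[str(best)]
```

For example, a packet to 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16, and the longer /16 prefix wins.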
It should be appreciated that network device 600 is illustrative and many other configurations having more or fewer components than network device 600 are possible.
Bus subsystem 704 can provide a mechanism that enables the various components and subsystems of computer system 700 to communicate with each other as intended. Although bus subsystem 704 is shown schematically as a single bus, alternative embodiments of the bus subsystem can utilize multiple busses.
Network interface subsystem 716 can serve as an interface for communicating data between computer system 700 and other computer systems or networks. Embodiments of network interface subsystem 716 can include, e.g., an Ethernet card, a Wi-Fi and/or cellular adapter, and the like.
User interface input devices 712 can include a keyboard, pointing devices (e.g., mouse, trackball, touchpad, etc.), a touch-screen incorporated into a display, audio input devices (e.g., voice recognition systems, microphones, etc.) and other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and mechanisms for inputting information into computer system 700.
User interface output devices 714 can include a display subsystem, a printer, or non-visual displays such as audio output devices, etc. The display subsystem can be, e.g., a flat-panel device such as a liquid crystal display (LCD) or organic light-emitting diode (OLED) display. In general, use of the term “output device” is intended to include all possible types of devices and mechanisms for outputting information from computer system 700.
Data subsystem 706 includes memory subsystem 708 and file/disk storage subsystem 710, which represent non-transitory computer-readable storage media that can store program code and/or data, which, when executed by processor 702, can cause processor 702 to perform operations in accordance with embodiments of the present disclosure.
Memory subsystem 708 includes a number of memories including main random access memory (RAM) 718 for storage of computer executable instructions and data during program execution and read-only memory (ROM) 720 in which fixed instructions are stored. File storage subsystem 710 can provide persistent (i.e., non-volatile) storage for program and data files, and can include a magnetic or solid-state hard disk drive, an optical drive along with associated removable media (e.g., CD-ROM, DVD, Blu-Ray, etc.), a removable flash memory-based drive or card, and/or other types of storage media known in the art.
It should be appreciated that computer system 700 is illustrative and many other configurations having more or fewer components than system 700 are possible.
Each of the following non-limiting examples may stand on its own or may be combined in various permutations or combinations with one or more of the other examples.
In one embodiment, the present disclosure includes a method for managing switches in a network device comprising: generating, on a processor, one or more feature agents, the one or more feature agents performing a plurality of operations on a plurality of switch processors; generating, on the processor, one or more switch processor programming agents; receiving, by the one or more switch processor programming agents, a plurality of configuration data in a first format received from a corresponding one or more feature agents; translating, by the one or more switch processor programming agents, the plurality of configuration data into a second format; and automatically programming, by the one or more switch processor programming agents, the plurality of switch processors with the configuration data in the second format.
In another embodiment, the present disclosure includes a non-transitory computer-readable storage medium having stored thereon computer executable instructions for performing a method of configuring a network device, wherein the instructions, when executed by said network device, cause said network device to be operable for: generating, on a processor, one or more feature agents, the one or more feature agents performing a plurality of operations on a plurality of switch processors; generating, on the processor, one or more switch processor programming agents; receiving, by the one or more switch processor programming agents, a plurality of configuration data in a first format received from a corresponding one or more feature agents; translating, by the one or more switch processor programming agents, the plurality of configuration data into a second format; and automatically programming, by the one or more switch processor programming agents, the plurality of switch processors with the configuration data in the second format.
In another embodiment, the present disclosure includes a network device comprising: a processor; a first switch processor configured to route network traffic; and a second switch processor configured to route network traffic, wherein the processor executes one or more feature agents and one or more switch processor programming agents, wherein the one or more feature agents interface with the first switch processor and the second switch processor through the one or more switch processor programming agents to program the first and second switch processors.
In one embodiment, the one or more switch processor programming agents maintain the plurality of switch processors in a same state.
In one embodiment, the plurality of switch processors comprise a first switch processor that routes network traffic and a second redundant switch processor that does not route network traffic, wherein the second redundant switch processor maintains a same state as the first switch processor, and wherein the second redundant switch processor routes network traffic when the first switch processor becomes inoperable.
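The redundant-processor embodiment above can be illustrated with a small failover sketch. The classes, the operable flag, and the register values are all hypothetical; the point being shown is that because the standby processor is kept in the same state as the active one, taking over requires no reprogramming.

```python
# Illustrative failover sketch; all names and values are assumptions.

class SwitchProcessor:
    def __init__(self, name):
        self.name = name
        self.state = {}       # models programmed hardware state
        self.operable = True


class RedundantPair:
    def __init__(self, primary, standby):
        self.primary, self.standby = primary, standby

    def program(self, ops):
        # Both processors are always programmed into the same state.
        for proc in (self.primary, self.standby):
            proc.state.update(ops)

    def router(self):
        # Traffic flows through the primary unless it is inoperable.
        return self.primary if self.primary.operable else self.standby


a, b = SwitchProcessor("A"), SwitchProcessor("B")
pair = RedundantPair(a, b)
pair.program({0x1000: 1})
a.operable = False  # simulate the primary becoming inoperable
# The standby takes over with identical state: a seamless transition.
```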
In one embodiment, in response to a change in the configuration data in the first format, the switch processor programming agent automatically translates the change in the configuration data from the first format to the second format and automatically programs the plurality of switch processors with the change in the configuration data in the second format.
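The automatic re-translation on change described in this embodiment can be sketched as a change notification that triggers translation and programming. The callback wiring and the reg_ prefix used to stand in for the second format are illustrative assumptions.

```python
# Sketch of automatic translate-and-program on configuration change;
# all names here are hypothetical.

class FeatureTable:
    """Configuration data in the first format; notifies on any change."""

    def __init__(self, on_change):
        self._data = {}
        self._on_change = on_change

    def set(self, key, value):
        self._data[key] = value
        # A change in the first format automatically triggers
        # translation and programming of the delta only.
        self._on_change({key: value})


programmed = []  # records what would reach the switch processors


def translate_and_program(delta):
    # Stand-in for: translate the delta to the second format and
    # program every switch processor with it.
    programmed.append({f"reg_{k}": v for k, v in delta.items()})


table = FeatureTable(on_change=translate_and_program)
table.set("dst_port", 9)
```

Propagating only the changed entries, rather than retranslating the whole table, is one way such an embodiment could keep programming latency low.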
In one embodiment, the switch processors are switch application specific integrated circuits (ASICs).
In one embodiment, each switch processor is a same processor maintained in a same state.
In one embodiment, the second format is a direct memory access (DMA) format and the one or more switch processor programming agents automatically program the plurality of switch processors using direct memory access (DMA) transactions over a bus coupled between the processor and the plurality of switch processors.
In one embodiment, a first switch processor programming agent accesses a first table of data for a first feature agent, translates the first table of data into a second table of data in a direct memory access (DMA) format, and performs a first direct memory access (DMA) write between the processor and a first switch processor and performs a second direct memory access (DMA) write between the processor and a second switch processor.
In one embodiment, a plurality of feature agents send configuration data in the first format to a single switch processor programming agent.
In one embodiment, a plurality of feature agents send configuration data in the first format to a corresponding plurality of switch processor programming agents.
In one embodiment, the first switch processor routes network traffic and the second switch processor does not route network traffic, wherein the second switch processor maintains a same state as the first switch processor, and wherein the second switch processor routes network traffic when the first switch processor becomes inoperable.
In one embodiment, the one or more switch processor programming agents receive a plurality of configuration data in a first format from a corresponding one or more feature agents and translate the plurality of configuration data into a second format, and wherein the one or more switch processor programming agents automatically program the first and second switch processors with the configuration data in the second format.
The above description illustrates various embodiments along with examples of how aspects of some embodiments may be implemented. Accordingly, the above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of some embodiments as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents may be employed without departing from the scope hereof as defined by the claims.