1. Field of the Invention
The invention relates to a method and apparatus for high performance switching in local area communications networks such as token ring, ATM, ethernet, fast ethernet, and gigabit ethernet environments, generally known as LANs. In particular, the invention relates to a high bandwidth architecture for an optical switch or packet processor and methods that provide efficient processing of cell and packetized data by the optical switch or packet processor.
2. Description of the Related Art
As computer performance has increased in recent years, the demands on computer networks have increased as well. Faster computer processors and higher memory capabilities require networks with high bandwidths to enable high speed transfer of significant amounts of data. A more complete discussion of prior art networking systems can be found, for example, in SWITCHED AND FAST ETHERNET, by Breyer and Riley (Ziff-Davis, 1996), and numerous IEEE publications relating to IEEE 802 standards. Based upon the Open Systems Interconnection (OSI) 7-layer reference model, network capabilities have grown through the development of repeaters, bridges, routers, and, more recently, “switches”, which operate with various types of communication media. Thickwire, thinwire, twisted pair, and optical fiber are examples of media which have been used for computer networks.
Switches, as they relate to computer networking, are hardware-based devices which control the flow of data packets or cells based upon destination address information which is available in each packet. A properly designed and implemented switch should be capable of receiving a packet and switching the packet to an appropriate output port at what is referred to as wirespeed or linespeed, which is the maximum speed capability of the particular network. Basic ethernet wirespeed is up to 10 megabits per second, and fast ethernet is up to 100 megabits per second. The newest ethernet is referred to as gigabit (Gbit) ethernet, and is capable of transmitting data over a network at a rate of up to 1,000 megabits per second.
With increasing speed in computer processors and higher memory capabilities, the need for high speed switches capable of 10 Gbit and 40 Gbit processing is becoming apparent. The hardware and software systems designed to meet the performance criteria for the next generation of switches have a common set of problems. These include handling data at 10 Gbit rates, adequate multicast replication and forwarding, and issues with Quality of Service (QoS) and Service Level Agreement (SLA) requirements. The latter are important in determining raw queue behavior, latency, and congestion, and in providing traffic policing, bandwidth management, and SLA support.
In the prior art, the ability to process at the 10 Gbit rate and above is limited by the software used in switching and packet processing. To overcome such limitations, dedicated hardware can be used to perform the processing, leaving the software to handle the higher-level functions of the switch. Such dedicated hardware can be implemented in network components and can provide the desired functionality at the desired speeds. The difficulty with dedicated hardware solutions is that they are, by design, directed to specific processing environments, and many different network components would be necessary to meet the needs of differing network setups.
Because of this, there is a need in the prior art for a network switch that is fully scalable and fully configurable to differing network environments. There is also a need for a switch that can perform dedicated packet processing in hardware, relying minimally on higher-level software, while still being adaptable to varying network architectures. There is also a need in the prior art for a method of switching packets on a network switch that is highly customizable and still able to switch packets at high speeds.
Accordingly, it is a principal object of the present invention to provide a switch that can operate at higher switching rates and is adaptable to different network configurations. The present invention provides for dedicated modules that can be swapped in or out depending on the needs of the switching environment, and an architecture that passes information to these dedicated modules and allows the modules to request more information and/or pass the information to another module. Based on the dedicated modules configured in the architecture, the switch can handle routing specific to the network requirements into which the switch is placed. Thus, the switch is fully scalable and can achieve higher switching rates because of the dedicated hardware modules.
The present invention is directed to a network switch, the network switch comprising at least one data port interface for receiving data, at least one link interface configured to transmit the data between the network switch and other network switches, and a data processor connected to the at least one data port interface and the at least one link interface. The data processor has a segmented ring with a plurality of dedicated modules designed to process the data, a programmable ring dispatcher for dispatching at least a portion of the data along the segmented ring to at least one of the plurality of dedicated modules, and a command processor for processing commands received from the dedicated modules. The programmable ring dispatcher determines a first dedicated module of the plurality of dedicated modules to receive the portion of the data, and the first dedicated module determines a next destination for the portion of the data, selected from among the plurality of dedicated modules and the command processor.
In a particular embodiment, the data received by the network switch is in the form of packets having headers, and the portion of the data sent to the ring is a parsed field derived from the headers. Additionally, the programmable ring dispatcher has a set of rules for determining which of the plurality of dedicated modules receives the parsed field based on values contained in the parsed field, and dispatches the parsed field to the determined dedicated module. The plurality of dedicated modules can be varied in number and type depending on the networking environment of the network switch.
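Although the disclosure is directed to a hardware apparatus, the relationship among these elements can be sketched in software form. The following minimal sketch uses invented class and method names (DataProcessor, DedicatedModule, and so on) that are assumptions for exposition only and are not taken from the embodiment.

```python
# Illustrative structural sketch of the apparatus summarized above.
# All class, method and attribute names are assumptions, not the disclosed design.

class DedicatedModule:
    """Application- or protocol-specific processor attached to the segmented ring."""
    def process(self, parsed_field):
        # Module-specific work; the module also chooses the next destination
        # (another module on the ring or the command processor).
        return parsed_field


class CommandProcessor:
    """Processes commands received from the dedicated modules."""
    def handle(self, command):
        pass  # e.g. queue assignment, header replacement, replication


class DataProcessor:
    """Holds the segmented ring of modules, the dispatcher and the command processor."""
    def __init__(self, modules):
        self.ring_modules = modules              # plurality of dedicated modules
        self.command_processor = CommandProcessor()

    def dispatch(self, parsed_field):
        # The programmable ring dispatcher selects the first module for this field;
        # a rule-based selection is sketched later in the description.
        first_module = self.ring_modules[0]
        return first_module.process(parsed_field)
```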
The present invention is also directed to a method of processing data by a network switch. The steps of the method include receiving data by the network switch and parsing the data to obtain a portion of the data. At least a portion of the data is dispatched along a segmented ring having a plurality of dedicated modules designed to process the data. A first dedicated module of the plurality of dedicated modules receives the data portion based on the content of the data portion. The data is processed by the first dedicated module, and then the first dedicated module dispatches at least the portion of the data along the segmented ring to additional dedicated modules or to a command processor of the network switch. The data is forwarded from the network switch based on the processing performed by the dedicated modules.
In a specific embodiment, the method of processing data by a network switch includes a step of parsing packets having headers to obtain a parsed field derived from the headers. Additionally, the dispatching of at least a portion of the data can be performed according to a set of rules that determine which of the plurality of dedicated modules receives the parsed field based on values contained in the parsed field. Also, dispatching to a specified plurality of dedicated modules is possible, where the number and type of the dedicated modules are varied based on the networking environment of the network switch.
The above and other objects, features and advantages of the invention will become apparent from the following description of the preferred embodiment taken in conjunction with the accompanying drawings.
The present invention is directed to a network switch and a method for processing data by a network switch. The present invention provides a high bandwidth architecture to ensure that data is not dropped and that data packets are managed effectively. The architecture of the present invention creates a separate channel for the datapath and for the forwarding/management information in each packet or cell. A ring structure is used to manage the Header/Delivery/Priority/Management information for the packets or cells while the datagram is handled by a packet processor. The architecture allows for a modular building block approach based on a programmable dispatcher and application specific and/or protocol specific processors designed for the ring.
In the context of the present application, a packet is defined as the entire transmitted bit sequence as viewed on a network medium, from the first bit of the preamble sequence to the last bit of the Frame Check Sequence (FCS) field. A frame is a portion of a packet that includes the destination address, source address, length or type and FCS fields, but excludes the preamble sequence. A packet also contains headers which contain control information regarding encapsulated data included in the packets for network transmission. A cell is a fixed-length unit used in Asynchronous Transfer Mode (ATM) networks to support multiple classes of service. Additionally, most networking applications are discussed with respect to the Open Systems Interconnection (OSI) 7-layer model. Layers 1 and 2 (L1 and L2) refer to the physical layer and the data link layer, where the physical layer is concerned with the transmission of raw bits over a communication channel and the data link layer is concerned with dividing data into frames and acknowledging receipt of frames. Layer 3 (L3) is the network layer and is concerned with the routing of information and packet congestion control. Layer 4 (L4) is the transport layer and is concerned with creating and managing connections between senders and recipients.
A schematic illustrating portions of the network switch of the present invention is shown in
An important element of the present invention is the implementation of a segmented ring 15. The ring architecture allows for designated modules, such as the processors denoted as 20, 30 and 40, to be added depending on the functionality desired and the nature of the network. Illustrated in
A preferred embodiment of the present invention is illustrated in
Segments of the optical packet processor will now be discussed. The PPP/deframer block 120 deframes the packets and performs an ingress frame error check, including evaluating the FCS and the L3 checksum. The block then parses the L2, L3 and L4 headers to provide parsed fields that typify the packet. The parsed fields are then passed on to the programmable dispatcher 130.
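As a rough illustration of what the deframing and parsing stage could produce, the sketch below assumes an untagged Ethernet/IPv4/TCP frame held in a single bytes object and uses CRC-32 as a stand-in for the FCS check; the field names and layout are assumptions rather than the actual parsed-field format of block 120.

```python
import struct
import zlib

def deframe_and_parse(frame: bytes) -> dict:
    """Check the FCS and extract illustrative L2/L3/L4 parsed fields."""
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload) != int.from_bytes(fcs, "little"):
        raise ValueError("ingress FCS error")           # ingress frame error check

    parsed = {
        "dst_mac": payload[0:6],                         # L2 header fields
        "src_mac": payload[6:12],
        "ethertype": struct.unpack("!H", payload[12:14])[0],
    }
    if parsed["ethertype"] == 0x0800:                    # IPv4
        ip = payload[14:34]                              # assumes a 20-byte IP header
        parsed["proto"] = ip[9]                          # L3 header fields
        parsed["src_ip"], parsed["dst_ip"] = ip[12:16], ip[16:20]
        if parsed["proto"] == 6:                         # TCP
            parsed["src_port"], parsed["dst_port"] = struct.unpack("!HH", payload[34:38])
    return parsed
```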
The programmable dispatcher dispatches the parsed fields related to the frame headers to one of the designated modules 190, 200 and 210, where the designated modules are of the type discussed above. The dispatcher uses sequential rules to determine the designated module to which a parsed field is sent. These rules provide that the dispatcher looks first at the frame type and then at the L2, L3 and L4 fields to make the determination. The dispatcher can also look at any arbitrary value in the header fields to determine the destination.
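One way to picture this sequential rule evaluation is as an ordered, first-match-wins table keyed on the parsed fields, with the frame type examined before the L2, L3 and L4 values; the specific rules and module names below are hypothetical.

```python
# Hypothetical ordered rule table for the dispatcher: first match wins.
# Frame type is examined first, then L2, L3 and L4 values from the parsed fields.
DISPATCH_RULES = [
    (lambda f: f.get("ethertype") == 0x8847, "mpls_module"),           # frame type: MPLS
    (lambda f: f.get("dst_mac", b"\x00")[0] & 1, "multicast_module"),  # L2: multicast bit
    (lambda f: f.get("proto") == 6 and f.get("dst_port") == 80,
     "http_qos_module"),                                               # L3/L4: TCP port 80
    (lambda f: True, "routing_module"),                                # default destination
]

def select_first_module(parsed_field: dict) -> str:
    """Walk the rules in order and return the name of the first matching module."""
    for predicate, module_name in DISPATCH_RULES:
        if predicate(parsed_field):
            return module_name
    return "command_processor"  # unreachable with the catch-all rule, kept for clarity
```

Because the table is ordered, reprogramming the dispatcher for a different network amounts to reordering or swapping rules rather than changing the surrounding logic.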
It is noted that a designated module can pass the parsed fields on to another designated module after it has finished its individual processing. A designated module can also pass the parsed fields through the ring to the command processor 140 when its processing or evaluation is finished. The subsequent forwarding of parsed data is dependent on the designated module. For example, for certain architectures, the MPLS processor might always forward to the routing processor or some other module. In addition, a designated module can also send a request through the ring to have the entire packet, or other portions thereof, forwarded to the designated module if the parsed fields are not sufficient for the processing by the module.
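This hand-off behaviour might be modelled as each designated module returning a decision: pass the parsed fields to a named next module, hand them to the command processor, or ask the ring for the full packet. The enum and the MPLS example below are illustrative assumptions.

```python
from enum import Enum, auto

class NextHop(Enum):
    MODULE = auto()             # pass the parsed fields to another designated module
    COMMAND_PROCESSOR = auto()  # processing finished; send results to the command processor
    NEED_FULL_PACKET = auto()   # parsed fields insufficient; request the entire packet

class MplsModule:
    """Hypothetical MPLS module that always forwards to a routing module."""
    def process(self, parsed_field: dict, full_packet: bytes = None):
        if "mpls_label" not in parsed_field and full_packet is None:
            # Ask the ring to deliver the whole packet (or a larger portion of it).
            return NextHop.NEED_FULL_PACKET, None
        # ... label lookup/swap on parsed_field or full_packet would happen here ...
        return NextHop.MODULE, "routing_module"
```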
The command processor 140 processes commands received from the designated modules. These commands can relate to queue and flow class assignments, conditional behavior upon destination congestion, and fragmentation. The command processor can also replace frame headers that have been modified by the designated modules, can replicate frames as needed, e.g. for IP multicast, and can set the frame type selection on egress.
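A compressed software sketch of these responsibilities might look as follows; the command vocabulary (assign_queue, replace_header, set_egress_type, replicate) is invented for illustration and is not the actual command set.

```python
def process_commands(frame: dict, commands: list) -> list:
    """Apply module-issued commands to a frame and return the egress frame(s).

    The command vocabulary used here is invented for illustration only.
    """
    egress_ports = [frame.get("egress_port")]
    for cmd, arg in commands:
        if cmd == "assign_queue":        # queue / flow class assignment
            frame["class_id"] = arg
        elif cmd == "replace_header":    # header modified by a designated module
            frame["header"] = arg
        elif cmd == "set_egress_type":   # frame type selection on egress
            frame["egress_type"] = arg
        elif cmd == "replicate":         # e.g. IP multicast: one copy per listed port
            egress_ports = arg
    # Replication is applied last so that every copy carries the final settings.
    return [dict(frame, egress_port=p) for p in egress_ports]
```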
The queue and buffer management block 150 holds ingress packet frames in buffers and associates the buffers with class IDs. The block also supports fragmentation and frame multicast to the switch fabric interface. The frame replacement block 160 provides an L2 frame type on egress and performs the FCS and checksum calculations again as necessary.
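The buffer/class-ID association and fragmentation toward the fabric could be pictured as below; the 64-byte fragment size and the queue structure are assumptions, not values from the disclosure.

```python
from collections import defaultdict, deque

FABRIC_CELL_SIZE = 64   # assumed fragment size toward the switch fabric

class BufferManager:
    """Holds ingress frames and associates each buffer with a class ID (illustrative)."""
    def __init__(self):
        self.queues = defaultdict(deque)   # class_id -> queued frames

    def enqueue(self, frame: bytes, class_id: int) -> None:
        self.queues[class_id].append(frame)

    def fragments(self, class_id: int):
        """Yield fixed-size fragments of the next frame for the fabric interface."""
        frame = self.queues[class_id].popleft()
        for offset in range(0, len(frame), FABRIC_CELL_SIZE):
            yield frame[offset:offset + FABRIC_CELL_SIZE]
```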
The scheduler/traffic shaper block 170 provides different functions depending on whether the ingress or egress datapath is followed. The scheduler provides, for each transmission, performance that reflects its transmission quality and service availability, and shapes the flow based on the frame egress queue. The traffic shaper manages the transmit buffer and queue for data flowing from the switch fabric, and provides for early detection of congestion and dropping of packets.
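The early congestion detection could be approximated by a random-early-detection style check on the transmit queue depth; RED itself and the thresholds below are assumptions used only to illustrate the idea of dropping before the queue is completely full.

```python
import random

MIN_THRESHOLD = 64    # assumed queue depth at which early dropping begins
MAX_THRESHOLD = 256   # assumed depth above which every packet is dropped

def admit_to_egress_queue(queue_depth: int) -> bool:
    """RED-like early congestion check on the transmit queue (illustrative)."""
    if queue_depth < MIN_THRESHOLD:
        return True
    if queue_depth >= MAX_THRESHOLD:
        return False
    # Drop probability rises linearly between the two thresholds.
    drop_probability = (queue_depth - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
    return random.random() >= drop_probability
```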
The switch fabric interface block 180 acts as an interface between the optical packet processor 100 and the switch fabric or crossbar. The interface supports 65 interleaved ingress and egress flows and has bandwidth/frame counters for service level agreement (SLA) support.
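The bandwidth/frame counters may be thought of as per-flow accumulators that SLA logic consults; apart from the 65 interleaved flows mentioned above, the layout below is assumed.

```python
NUM_FLOWS = 65   # interleaved ingress/egress flows supported by the fabric interface

class FlowCounters:
    """Per-flow byte and frame counters used for SLA accounting (illustrative)."""
    def __init__(self):
        self.byte_counts = [0] * NUM_FLOWS
        self.frame_counts = [0] * NUM_FLOWS

    def account(self, flow_id: int, frame_length: int) -> None:
        self.byte_counts[flow_id] += frame_length
        self.frame_counts[flow_id] += 1

    def exceeds_budget(self, flow_id: int, byte_budget: int) -> bool:
        return self.byte_counts[flow_id] > byte_budget
```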
The method of one embodiment of the present invention will now be discussed. The process begins at step 310 when data is received by the network switch. The received data is parsed to obtain a portion of the data that provides characteristics of the received data. In the case of packet processing, the data is in the form of packets, and the headers in the packets are used to provide the parsed data. The data portion is dispatched by the programmable dispatcher to a first designated module of a plurality of designated modules connected by a segmented ring.
The data and/or the data portion is processed by the first designated module. The processes performed by the first designated module are specific to that module. The first designated module then dispatches the data portion either to another designated module for additional processing or to the command processor. After the data has been processed by the designated modules, the data is forwarded based on the processes performed and the information obtained by those processes.
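Putting the steps together, a highly simplified software model of the method might read as follows; every parameter is a hypothetical stand-in for the parsing, dispatching, module-processing and command-processing stages described above.

```python
def switch_packet(raw_frame: bytes, parse, dispatch, ring_modules, command_processor):
    """Illustrative end-to-end flow: parse, dispatch onto the ring, then forward.

    `parse`, `dispatch`, `ring_modules` and `command_processor` are hypothetical
    callables/lookup tables standing in for the stages described above.
    """
    parsed = parse(raw_frame)            # obtain the data portion (parsed fields)
    module_name = dispatch(parsed)       # programmable dispatcher picks the first module

    while module_name is not None:       # walk the segmented ring module by module
        module_name, command = ring_modules[module_name](parsed)
        if command is not None:
            command_processor(command)   # results are handed to the command processor

    # Forwarding of the frame then proceeds according to the issued commands.
    return parsed
```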
Although the preferred embodiment discussed above is directed to an optical switch, the architecture and methods of the present invention are also applicable to other types of switches that are not optical switches. Other types of switches and packet processors can benefit from the high bandwidth ring architecture disclosed herein.
Although embodiments of the present invention have been described in detail, it will be understood that the present invention is not limited to the above-described embodiments, and various modifications in design may be made without departing from the spirit and scope of the invention defined in claims.
This application claims priority to U.S. Provisional Patent Application Ser. No. 60/185,271, filed on Feb. 28, 2000. The contents of that application are hereby incorporated by reference.