1. Field of the Invention
The invention relates generally to network switches.
2. Related Art
A network switch is a device that provides a switching function (i.e., it determines a physical path) in a data communications network. Switching involves transferring information, such as digital data packets or frames, among entities of the network. Typically, a switch is a computer having a plurality of circuit cards coupled to a backplane. In the switching art, the circuit cards are typically called “blades.” The blades are interconnected by a “switch fabric.” Each blade includes a number of physical ports that couple the switch to the other network entities over various types of media, such as Ethernet, FDDI (Fiber Distributed Data Interface), or token ring connections. A network entity includes any device that transmits and/or receives data packets over such media.
The switching function provided by the switch typically includes receiving data at a source port from a network entity and transferring the data to a destination port. The source and destination ports may be located on the same or different blades. In the case of “local” switching, the source and destination ports are on the same blade. Otherwise, the source and destination ports are on different blades and switching requires that the data be transferred through the switch fabric from the source blade to the destination blade. In some cases, the data may be provided to a plurality of destination ports of the switch. This is known as a multicast data transfer.
Switches operate by examining the header information that accompanies data in the data frame. The header information is organized in accordance with the International Standards Organization (ISO) seven-layer Open Systems Interconnection (OSI) reference model. In the OSI model, switches generally route data frames based on the lower-layer protocols, such as Layer 2 or Layer 3. In contrast, routers generally route based on the higher-layer protocols and determine the physical path (i.e., route) of a data frame based on table look-ups or other configured forwarding or management routines.
Ethernet is a widely used lower-layer network protocol that uses broadcast technology. The Ethernet frame has six fields. These fields include a preamble, a destination address, a source address, a type, data, and a frame check sequence. In the case of an Ethernet frame, the digital switch will determine the physical path of the frame based on the source and destination addresses. Standard Ethernet operates at a 10 Mbps data rate. Another implementation of Ethernet known as “Fast Ethernet” (FE) has a data rate of 100 Mbps. Yet another implementation, known as 10 Gigabit Ethernet, operates at 10 Gbps.
A digital switch will typically have physical ports that are configured to communicate using different protocols at different data rates. For example, a blade within a switch may have certain ports that operate at 10 Mbps or 100 Mbps. It may have other ports that conform to optical standards, such as SONET, and are capable of data rates of 10 Gbps.
The performance of a digital switch is often assessed based on metrics such as the number of physical ports that are present and the total bandwidth, or number of bits per second, that can be switched without blocking or slowing the data traffic. A limiting factor in the bit carrying capacity of many switches is the switch fabric. For example, one conventional switch fabric was limited to 8 gigabits per second per blade. In an eight-blade example, this equates to 64 gigabits per second of traffic. It is possible to increase the data rate of a particular blade to greater than 8 gigabits per second; however, the switch fabric would be unable to handle the increased traffic.
It is desired to take advantage of new optical technologies and increase port densities and data rates on blades. However, what is needed is a switch and a switch fabric capable of handling higher bit rates and providing a maximum aggregate bit carrying capacity well in excess of conventional switches.
The present invention provides a high-performance network switch. Serial link technology is used in a switching fabric. Serial data streams, rather than parallel data streams, are switched in a switching fabric. Blades output serial data streams in serial pipes. A serial pipe can be a number of serial links coupling a blade to the switching fabric. The serial data streams represent an aggregation of input serial data streams provided through physical ports to a respective blade. Each blade outputs serial data streams with in-band control information in multiple stripes to the switching fabric.
In one embodiment, the serial data streams carry packets of data in wide striped cells across multiple stripes. Wide striped cells are encoded. In-band control information is carried in one or more blocks of a wide cell. For example, the initial block of a wide cell includes control information and state information. Further, the control information and state information is carried in each stripe. In particular, the control information and state information is carried in each sub-block of the initial block of a wide cell. In this way, the control information and state information is available in-band in the serial data streams (also called stripes). Control information is provided in-band to indicate traffic flow conditions, such as, a start of cell, an end of packet, abort, or other error conditions.
A wide cell has one or more blocks. Each block extends across five stripes. Each block has a size of twenty bytes made up of five sub-blocks each having a size of four bytes. In one example, a wide cell has a maximum size of eight blocks (160 bytes) which can carry 148 bytes of payload data and 12 bytes of in-band control information. Packets of data for full-duplex traffic can be carried in the wide cells at a 50 Gbps rate in each direction through one slot of the digital switch. According to one feature, the choice of maximum wide cell block size of 160 bytes as determined by the inventors allows a 4×10 Gbps Ethernet (also called 4×10 GE) line rate to be maintained through the backplane interface adapter. This line rate is maintained for Ethernet packets having a range of sizes accepted in the Ethernet standard including, but not limited to, packet sizes between 84 and 254 bytes.
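As a minimal sketch of this cell geometry (assuming only the sizes given above; all names are illustrative), the following Python fragment slices one wide cell into its five per-stripe data slices and computes how many cells a packet requires:

```python
# Sizes from the text: five stripes, 4-byte sub-blocks, 20-byte blocks,
# and a maximum cell of 8 blocks (160 bytes = 148 payload + 12 control).
STRIPES = 5
SUB_BLOCK = 4                   # bytes per stripe per block
BLOCK = STRIPES * SUB_BLOCK     # 20-byte block spanning all five stripes
MAX_BLOCKS = 8
MAX_CELL = MAX_BLOCKS * BLOCK   # 160 bytes
MAX_PAYLOAD = 148               # payload bytes per maximum-size cell

def cells_needed(packet_len: int) -> int:
    """Wide cells required to carry one packet's payload."""
    return -(-packet_len // MAX_PAYLOAD)    # ceiling division

def stripe_slices(cell: bytes) -> list[bytes]:
    """Split one wide cell into five data slices, one sub-block per block."""
    assert len(cell) % BLOCK == 0 and len(cell) <= MAX_CELL
    return [b"".join(cell[b + s * SUB_BLOCK : b + (s + 1) * SUB_BLOCK]
                     for b in range(0, len(cell), BLOCK))
            for s in range(STRIPES)]

# Example: an 84-byte minimum-size Ethernet packet fits in one wide cell.
assert cells_needed(84) == 1 and cells_needed(254) == 2
```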
In one embodiment, a digital switch has a plurality of blades coupled to a switching fabric via serial pipes. The switching fabric can be provided on a backplane and/or one or more blades. Each blade outputs serial data streams with in-band control information in multiple stripes to the switching fabric. The switching fabric includes a plurality of cross points corresponding to the multiple stripes. Each cross point has a plurality of port slices coupled to the plurality of blades. In one embodiment, five stripes and five cross points are used. Each blade has five serial links coupled to the five cross points, respectively. In one example implementation, the serial pipe coupling a blade to the switching fabric is a 50 Gbps serial pipe made up of five 10 Gbps serial links. Each of the 10 Gbps serial links is coupled to a respective cross point and carries a serial data stream. The serial data stream includes a data slice of a wide cell that corresponds to one stripe.
In one embodiment of the present invention, each blade has a backplane interface adapter (BIA). The BIA has three traffic processing flow paths. The first traffic processing flow path extends in traffic flow direction from local packet processors toward a switching fabric. The second traffic processing flow path extends in traffic flow direction from the switching fabric toward local packet processors. A third traffic processing flow path carries local traffic from the first traffic processing flow path. This local traffic is sorted and routed locally at the BIA without having to go through the switching fabric.
The BIA includes one or more receivers, wide cell generators, and transmitters along the first path. The receivers receive narrow input cells carrying packets of data. These narrow input cells are output from packet processor(s) and/or from integrated bus translators (IBTs) coupled to packet processors. The BIA includes one or more wide cell generators. The wide cell generators generate wide striped cells carrying the packets of data received by the BIA in the narrow input cells. The transmitters transmit the generated wide striped cells in multiple stripes to the switching fabric.
According to the present invention, the wide cells extend across multiple stripes and include in-band control information in each stripe. In one embodiment, each wide cell generator parses each narrow input cell, checks for control information indicating a start of packet, encodes one or more new wide striped cells until data from all narrow input cells of the packet is distributed into the one or more new wide striped cells, and writes the one or more new wide striped cells into a plurality of send queues.
In one example, the BIA has four deserializer receivers, 56 wide cell generators, and five serializer transmitters. The four deserializer receivers receive narrow input cells output from up to eight originating sources (that is, up to two IBTs or packet processors per deserializer receiver). The 56 wide cell generators receive groups of the received narrow input cells sorted based on destination slot identifier and originating source. The five serializer transmitters transmit the data slices of the wide cells that correspond to the five stripes.
According to a further feature, a BIA can also include a traffic sorter which sorts received narrow input cells based on a destination slot identifier. In one example, the traffic sorter comprises both a global/traffic sorter and a backplane sorter. The global/traffic sorter separates received narrow input cells having a destination slot identifier that identifies a local destination slot from received narrow input cells having a destination slot identifier that identifies a global destination slot across the switching fabric. The backplane sorter further sorts received narrow input cells having destination slot identifiers that identify global destination slots into groups based on the destination slot identifier.
In one embodiment, the BIA also includes a plurality of stripe send queues and a switching fabric transmit arbitrator. The switching fabric transmit arbitrator arbitrates the order in which data stored in the stripe send queues is sent by the transmitters to the switching fabric. In one example, the arbitration proceeds in a round-robin fashion. Each stripe send queue stores a respective group of wide striped cells corresponding to a respective originating source packet processor and a destination slot identifier. Each wide striped cell has one or more blocks across multiple stripes. During a processing cycle, the switching fabric transmit arbitrator selects a stripe send queue and pushes the next available cell (or even one or more blocks of a cell at a time) to the transmitters. Each stripe of a wide cell is pushed to the respective transmitter for that stripe.
The BIA includes one or more receivers, wide/narrow cell translators, and transmitters along the second path. The receivers receive wide striped cells in multiple stripes from the switching fabric. The wide striped cells carry packets of data. The translators translate the received wide striped cells to narrow input cells carrying the packets of data. The transmitters then transmit the narrow input cells to corresponding destination packet processors or IBTs. In one example, the five deserializer receivers receive five sub-blocks of wide striped cells in multiple stripes. The wide striped cells carry packets of data across the multiple stripes and include destination slot identifier information.
In one embodiment, the BIA further includes stripe interfaces and stripe receive synchronization queues. Each stripe interface sorts received sub-blocks in each stripe based on originating slot identifier information and stores the sorted received sub-blocks in the stripe receive synchronization queues.
The BIA further includes along the second traffic flow processing path an arbitrator, a striped-based wide cell assembler, and the narrow/wide cell translator. The arbitrator arbitrates an order in which data stored in the stripe receive synchronization queues is sent to the striped-based wide cell assembler. The striped-based wide cell assembler assembles wide striped cells based on the received sub-blocks of data. A narrow/wide cell translator then translates the arbitrated received wide striped cells to narrow input cells carrying the packets of data.
A second level of arbitration is also provided according to an embodiment of the present invention. The BIA further includes destination queues and a local destination transmit arbitrator in the second path. The destination queues store narrow cells sent by a local traffic sorter (from the first path) and the narrow cells translated by the translator (from the second path). The local destination transmit arbitrator arbitrates an order in which narrow input cells stored in the destination queues are sent to serializer transmitters. Finally, the serializer transmitters transmit the narrow input cells to corresponding IBTs and/or source packet processors (and ultimately out of a blade through physical ports).
According to a further feature of the present invention, a system and method for encoding wide striped cells are provided. The wide cells extend across multiple stripes and include in-band control information in each stripe. State information, reserved information, and payload data may also be included in each stripe. In one embodiment, a wide cell generator encodes one or more new wide striped cells.
The wide cell generator encodes an initial block of a start wide striped cell with initial cell encoding information. The initial cell encoding information includes control information (such as, a special K0 character) and state information provided in each sub-block of an initial block of a wide cell. The wide cell generator further distributes initial bytes of packet data into available space in the initial block. Remaining bytes of packet data are distributed across one or more blocks of the first wide striped cell (and subsequent wide cells) until an end of packet condition is reached or a maximum cell size is reached. Finally, the wide cell generator further encodes an end wide striped cell with end of packet information that varies depending upon the degree to which data has filled a wide striped cell. In one encoding scheme, the end of packet information varies depending upon a set of end of packet conditions including whether the end of packet occurs at the end of an initial block, within a subsequent block after the initial block, at a block boundary, or at a cell boundary.
According to a further embodiment of the present invention, a method for interfacing serial pipes carrying packets of data in narrow input cells and a serial pipe carrying packets of data in wide striped cells includes receiving narrow input cells, generating wide striped cells, and transmitting blocks of the wide striped cells across multiple stripes. The method can also include sorting the received narrow input cells based on a destination slot identifier, storing the generated wide striped cells in corresponding stripe send queues based on a destination slot identifier and an originating source packet processor, and arbitrating the order in which the stored wide striped cells are selected for transmission.
In one example, the generating step includes parsing each narrow input cell, checking for control information that indicates a start of packet, encoding one or more new wide striped cells until data from all narrow input cells carrying the packet is distributed into the one or more new wide striped cells, and writing the one or more new wide striped cells into a plurality of send queues. The encoding step includes encoding an initial block of a start wide striped cell with initial cell encoding information, such as, control information and state information. Encoding can further include distributing initial bytes of packet data into available space in an initial block of a first wide striped cell, adding reserve information to available bytes at the end of the initial block of the first wide striped cell, distributing remaining bytes of packet data across one or more blocks in the first wide striped cell until an end of packet condition is reached or a maximum cell size is reached, and encoding an end wide striped cell with end of packet information. The end of packet information varies depending upon a set of end of packet conditions including whether the end of packet occurs at the end of an initial block, in any block after the initial block, at a block boundary, or at a cell boundary.
The method also includes receiving wide striped cells carrying packets of data in multiple stripes from a switching fabric, translating the received wide striped cells to narrow input cells carrying the packets of data, and transmitting the narrow input cells to corresponding source packet processors. The method further includes sorting the received sub-blocks in each stripe based on originating slot identifier information, storing the sorted received sub-blocks in stripe receive synchronization queues, and arbitrating an order in which data stored in the stripe receive synchronization queues is assembled. Additional steps are assembling wide striped cells in the order of the arbitrating step based on the received sub-blocks of data, translating the arbitrated received wide striped cells to narrow input cells carrying the packets of data, and storing narrow cells in a plurality of destination queues. In one embodiment, further arbitration is performed including arbitrating an order in which data stored in the destination queues is to be transmitted and transmitting the narrow input cells in the order of the further arbitrating step to corresponding source packet processors and/or IBTs.
The present invention further provides error detection and recovery. Such errors can include stripe synchronization errors. In one embodiment, an administrative module includes a level monitor, a stripe synchronization error detector, a flow controller, and a control character presence tracker. The level monitor monitors data received at a receiving blade. The stripe synchronization error detector detects a stripe synchronization error based on the amount of data monitored by the level monitor. Example stripe synchronization errors include an incoming link error, a cross-point failure, and an outgoing link error. In one example, the data received at a receiving blade is sorted based on stripe and source information and stored in a set of data structures (e.g., FIFOs). The level monitor monitors the levels of data stored in each data structure. The stripe synchronization error detector detects at least one of an overflow and underflow condition in the amount of data received on a respective stripe from a particular source.
The flow controller initiates a recovery routine to re-synchronize data across the stripes in response to detection of a stripe synchronization error. The control character presence tracker identifies the presence of a K2 character during the recovery routine.
The present invention further includes a method for detecting stripe synchronization error in a network switch, including the steps of: sorting data received at a receiving slot based on stripe and source information; storing the sorted data in a set of data structures; monitoring the levels of data stored in each data structure; and detecting at least one of an overflow and underflow condition in the amount of data received on a respective stripe from a particular source. The source information can identify a slot that sent the data across a switching fabric of the network switch, or can identify a source packet processor that sent the data from a slot across a switching fabric of the network switch.
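A minimal software sketch of this monitoring scheme follows; the class name, the 5×56 queue arrangement, and the threshold values are illustrative assumptions consistent with the text, not a normative design:

```python
from collections import deque

STRIPES, SOURCES = 5, 56    # e.g., 8 source packet processors x 7 remote slots

class StripeSyncMonitor:
    """Sort received sub-blocks by (stripe, source), watch queue levels, and
    flag over/underflow as a stripe synchronization error."""
    def __init__(self, low=1, high=63):
        self.fifos = {(s, src): deque()
                      for s in range(STRIPES) for src in range(SOURCES)}
        self.low, self.high = low, high      # assumed threshold levels

    def store(self, stripe, source, sub_block):
        self.fifos[(stripe, source)].append(sub_block)

    def check(self, source):
        """Compare fill levels across the five stripes for one source."""
        levels = [len(self.fifos[(s, source)]) for s in range(STRIPES)]
        for s, lvl in enumerate(levels):
            if lvl >= self.high:
                return ("overflow", s)   # e.g., a peer stripe's link has failed
            if lvl <= self.low and max(levels) > self.low + 1:
                return ("underflow", s)  # e.g., incoming-link or cross-point failure
        return None                      # levels agree: stripes are in synch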
The present invention further includes a method for maintaining synchronization of striped cell traffic, comprising the steps of: sending a common character in striped cells in all lanes for a predetermined number of cycles; evaluating the common control characters received at stripe receive synchronization queues; and detecting when an in-synch condition is present that indicates the stripe receive synchronization queues have been cleared.
The present invention further includes a method for managing out-of-synchronization traffic flow through a cross-point switch in a switching fabric, comprising: monitoring the level of stripe-receive-synchronization queues; determining whether an out-of-synchronization condition exists; and initiating a re-synchronization routine when said out-of-synchronization condition exists. The re-synchronization routine can include the steps of: sending a common character in striped cells in all lanes for a predetermined number of cycles; evaluating the common control characters received at stripe receive synchronization queues; and detecting when an in-synch condition is present that indicates the stripe receive synchronization queues have been cleared.
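Continuing the sketch above (and reusing its StripeSyncMonitor), the recovery handshake might be modeled as follows; the K2 sentinel and the cycle count are placeholders for the common control character and the predetermined number of cycles:

```python
K2 = object()    # stands in for the common control character sent in all lanes

def resynchronize(monitor: "StripeSyncMonitor", source: int, cycles: int = 16) -> bool:
    # 1. Sender: push the common character into every lane for `cycles` cycles.
    for _ in range(cycles):
        for s in range(STRIPES):
            monitor.store(s, source, K2)
    # 2. Receiver: discard skewed, pre-K2 traffic from each lane.
    for s in range(STRIPES):
        q = monitor.fifos[(s, source)]
        while q and q[0] is not K2:
            q.popleft()
    # 3. In-synch when every lane's head-of-queue is the common character,
    #    i.e., the stripe receive synchronization queues have been cleared.
    return all(monitor.fifos[(s, source)] and monitor.fifos[(s, source)][0] is K2
               for s in range(STRIPES))
```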
According to another embodiment of the present invention, a redundant switching system is provided. The redundant switching system includes two switching blades and at least one ingress/egress blade (or slave blade). Each switching blade has a plurality of cross points corresponding to respective stripes of serial data streams. Each ingress/egress blade is coupled to each switching blade through a backplane connection. Each ingress/egress blade also includes a plurality of redundant fabric transceivers (RFTs). The RFTs can switch traffic between the cross points on the two switching blades. This provides redundancy.
In one embodiment, a redundant fabric transceiver is coupled to a bus interface adapter and includes one or more first and second ports, a multiplexer, a downlink transceiver, and an uplink transceiver. The multiplexer selects communication data from similar data for transmission. The downlink transceiver receives, conditions, and transmits the communication data. The uplink transceiver also receives, conditions, and transmits communication data. A register module can be used that includes condition information that indicates operations for at least one of the downlink transceiver and the uplink transceiver, wherein the condition information includes configuration and parameter settings for received and transmitted data.
Further embodiments, features, and advantages of the present inventions, as well as the structure and operation of the various embodiments of the present invention, are described in detail below with reference to the accompanying drawings.
The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the pertinent art to make and use the invention.
In the drawings:
The present invention will now be described with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Additionally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears.
Table of Contents
I. Overview and Discussion
II. Terminology
III. Digital Switch Architecture
The present invention is a high-performance digital switch. Blades are coupled through serial pipes to a switching fabric. Serial link technology is used in the switching fabric. Serial data streams, rather than parallel data streams, are switched through a loosely striped switching fabric. Blades output serial data streams in the serial pipes. A serial pipe can be a number of serial links coupling a blade to the switching fabric. The serial data streams represent an aggregation of input serial data streams provided through physical ports to a respective blade. Each blade outputs serial data streams with in-band control information in multiple stripes to the switching fabric. In one embodiment, the serial data streams carry packets of data in wide striped cells across multiple loosely-coupled stripes. Wide striped cells are encoded. In-band control information is carried in one or more blocks of a wide striped cell.
In one implementation, each blade of the switch is capable of sending and receiving 50 gigabit per second full-duplex traffic across the backplane. This is done to assure line-rate, wire-speed, non-blocking operation across all packet sizes.
The high-performance switch according to the present invention can be used in any switching environment, including but not limited to, the Internet, an enterprise system, Internet service provider, and any protocol layer switching (such as, Layer 2, Layer 3, or Layers 4-7 switching).
The present invention is described in terms of this example environment. Description in these terms is provided for convenience only. It is not intended that the invention be limited to application in these example environments. In fact, after reading the following description, it will become apparent to a person skilled in the relevant art how to implement the invention in alternative environments known now or developed in the future.
To more clearly delineate the present invention, an effort is made throughout the specification to adhere to the following term definitions as consistently as possible.
The terms “switch fabric” or “switching fabric” refer to a switchable interconnection between blades. The switch fabric can be located on a backplane, a blade, more than one blade, a separate unit from the blades, or on any combination thereof.
The term “packet processor” refers to any type of packet processor, including but not limited to, an Ethernet packet processor. A packet processor parses and determines where to send packets.
The term “serial pipe” refers to one or more serial links. In one embodiment, not intended to limit the invention, a serial pipe is a 10 Gbps serial pipe and includes four 2.5 Gbps serial links.
The term “serial link” refers to a data link or bus carrying digital data serially between points. A serial link at a relatively high bit rate can also be made of a combination of lower bit rate serial links.
The term “stripe” refers to one data slice of a wide cell. The term “loosely-coupled” stripes refers to the data flow in stripes which is autonomous with respect to other stripes. Data flow is not limited to being fully synchronized in each of the stripes, rather, data flow proceeds independently in each of the stripes and can be skewed relative to other stripes.
An overview of the architecture of the switch 100 of the invention is illustrated in
In a preferred embodiment of the invention, switch 100 having 8 blades is capable of switching 400 gigabits per second (Gbps) of full-duplex traffic. As used herein, all data rates are full-duplex unless indicated otherwise. Each blade 104 communicates data at a rate of 50 Gbps over serial pipe 106.
Switch 100 is shown in further detail in
Each cross point 202A-202E is an 8-port cross point. In one example, each cross point 202A-E receives eight 10 Gbps streams of data. Each stream of data corresponds to a particular stripe. The stripe has data in a wide-cell format which includes, among other things, a destination port number (also called a destination slot number) and special in-band control information. The in-band control information includes special K characters, such as a K0 character and a K1 character. The K0 character delimits a start of a new cell within a stripe. The K1 character delimits an end of a packet within the stripe. Such encoding within each stripe allows each cross point 202A-202E to operate autonomously or independently of other cross points. In this way, the cross points 202A-202E and their associated stripes are loosely-coupled.
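As a rough illustration of how such in-band delimiters partition a stripe's serial stream into cells and packets, consider the following sketch; the byte values chosen for K0 and K1 are placeholders (in practice these are special control characters distinguishable from data, e.g., via 8b/10b coding):

```python
K0, K1 = 0x1C, 0x1D    # placeholder encodings of the special K characters

def split_cells(stripe_stream: bytes):
    """Yield (cell_slice, end_of_packet) pairs recovered from one stripe."""
    cell, in_cell = bytearray(), False
    for b in stripe_stream:
        if b == K0:                        # K0 delimits the start of a new cell
            if in_cell and cell:
                yield bytes(cell), False
            cell, in_cell = bytearray(), True
        elif b == K1 and in_cell:          # K1 delimits the end of a packet
            yield bytes(cell), True
            cell, in_cell = bytearray(), False
        elif in_cell:
            cell.append(b)
    if in_cell and cell:                   # trailing, still-open cell slice
        yield bytes(cell), False
```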
In each cross point 202, there are a set of data structures, such as data FIFOs (First in First out data structures). The data structures store data based on the source port and the destination port. In one embodiment, for an 8-port cross point, 56 data FIFOs are used. Each data FIFO stores data associated with a respective source port and destination port. Packets coming to each source port are written to the data FIFOs which correspond to a source port and a destination port associated with the packets. The source port is associated with the port (and port slice) on which the packets are received. The destination port is associated with a destination port or slot number which is found in-band in data sent in a stripe to a port.
In embodiments of the present invention, the unit of switching is one cell, and the cell size is defined to be 8, 28, 48, 68, 88, 108, 128, or 148 bytes. Each port (or port slice) receives and sends serial data at a rate of 10 Gbps from respective serial links. Each cross point 202A-202E has a 160 Gbps switching capacity (160 Gbps = 10 Gbps * 8 ports * 2 directions full-duplex). Such cell sizes, serial link data rate, and switching capacity are illustrative and not necessarily intended to limit the present invention. Cross-point architecture and operation is described further below.
In attempting to increase the throughput of switches, conventional wisdom has been to increase the width of data buses to increase the “parallel processing” capabilities of the switch and to increase clock rates. Both approaches, however, have met with diminishing returns. For example, very wide data buses are constrained by the physical limitations of circuit boards. Similarly, very high clock rates are limited by characteristics of printed circuit boards. Going against conventional wisdom, the inventors have discovered that significant increases in switching bandwidth could be obtained using serial link technology in the backplane.
In the preferred embodiment, each serial pipe 106 is capable of carrying full-duplex traffic at 50 Gbps, and each serial link 204 is capable of carrying full-duplex traffic at 10 Gbps. The result of this architecture is that each of the five cross points 202 combines five 10 gigabit per second serial links to achieve a total data rate of 50 gigabits per second for each serial pipe 106. Thus, the total switching capacity across backplane 102 for eight blades is 50 gigabits per second times eight times two (for duplex) or 800 gigabits per second. Such switching capacities have not been possible with conventional technology using synched parallel data buses in a switching fabric.
An advantage of such a switch having a 50 Gbps serial pipe to backplane 102 from a blade 104 is that each blade 104 can support, across a range of packet sizes, four 10 Gbps Ethernet packet processors at line rate, four Optical Channel OC-192c interfaces at line rate, or one OC-768c interface at line rate. The invention is not limited to these examples. Other configurations and types of packet processors can be used with the switch of the present invention, as would be apparent to a person skilled in the art given this description.
Referring now to
Each packet processor 306 includes one or more physical ports. Each packet processor 306 receives inbound packets from the one or more physical ports, determines a destination of the inbound packet based on control information, provides local switching for local packets destined for a physical port to which the packet processor is connected, formats packets destined for a remote port to produce parallel data, and switches the parallel data to an IBT 304. Each IBT 304 receives the parallel data from each packet processor 306. IBT 304 then converts the parallel data to at least one serial bit stream. IBT 304 provides the serial bit stream to BIA 302 via a pipe 308, described herein as one or more serial links. In a preferred embodiment, each pipe 308 is a 10 Gbps XAUI interface.
In the example illustrated in
Also in the example of
BIA 302 receives the output of IBTs 304A-304D. Thus, BIA 302 receives 4×10 Gbps of data or, alternatively, 8×5 Gbps of data. BIA 302 runs at a clock speed of 156.25 MHz. With the addition of management overhead and striping, BIA 302 outputs five 10 Gbps data streams to the five cross points 202 in backplane 102.
BIA 302 receives the serial bit streams from IBTs 304, determines a destination of each inbound packet based on packet header information, provides local switching between local IBTs 304, formats data destined for a remote port, aggregates the serial bit streams from IBTs 304 and produces an aggregate bit stream. The aggregated bit stream is then striped across the five cross points 202A-202E.
A. Cross Point Architecture
Port slice 402F is coupled to each of the seven other port slices 402A-402E and 402G-402H through links 420-426. Links 420-426 route data received in the other port slices 402A-402E and 402G-402H which has a destination port number (also called a destination slot number) associated with a port of port slice 402F (i.e. destination port number 5). Finally, port slice 402F includes a link 430 that couples the port associated with port slice 402F to the other seven port slices. Link 430 allows data received at the port of port slice 402F to be sent to the other seven port slices. In one embodiment, each of the links 420-426 and 430 between the port slices are buses to carry data in parallel within the cross point 202. Similar connections (not shown in the interest of clarity) are also provided for each of the other port slices 402A-402E, 402G and 402H.
Port slice 402F includes a receive synch FIFO module 515 coupled between deserializer receiver(s) 510 and accumulator 520. Receive synch FIFO module 515 stores data output from deserializer receivers 510 corresponding to port slice 402F. Accumulator 520 writes data to an appropriate data FIFO (not shown) in the other port slices 402A-402E, 402G, and 402H based on a destination slot or port number in a header of the received data.
Port slice 402F also receives data from other port slices 402A-402E, 402G, and 402H. This data corresponds to the data received at the other seven ports of port slices 402A-402E, 402G, and 402H which has a destination slot number corresponding to port slice 402F. Port slice 402F includes seven data FIFOs 530 to store data from corresponding port slices 402A-402E, 402G, and 402H. Accumulators (not shown) in the seven port slices 402A-402E, 402G, and 402H extract the destination slot number associated with port slice 402F and write corresponding data to respective ones of seven data FIFOs 530 for port slice 402F. As shown in
During operation, the FIFO RAMs accumulate data. After a data FIFO RAM has accumulated one cell of data, its corresponding FIFO controller generates a read request to FIFO read arbitrator 540. FIFO read arbitrator 540 processes read requests from the different FIFO controllers in a desired order, such as a round-robin order. After one cell of data is read from one FIFO RAM, FIFO read arbitrator 540 will move on to process the next requesting FIFO controller. In this way, arbitration proceeds to serve different requesting FIFO controllers and distribute the forwarding of data received at different source ports. This helps maintain a relatively even but loosely coupled flow of data through cross points 202.
To process a read request, FIFO read arbitrator 540 switches multiplexer 550 to forward a cell of data from the data FIFO RAM associated with the read request to dispatcher 560. Dispatcher 560 outputs the data to transmit synch FIFO 570. Transmit synch FIFO 570 stores the data until sent in a serial data stream by serializer transmitter(s) 580 to blade 104F.
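A simplified software model of this egress flow (per-source data FIFOs, a round-robin read arbitrator, and a dispatcher) might look as follows; the class and method names are illustrative only:

```python
from collections import deque

class PortSliceEgress:
    """Seven per-source FIFOs feeding one output port, served round-robin."""
    def __init__(self, n_sources: int = 7):
        self.fifos = [deque() for _ in range(n_sources)]  # one per other port slice
        self.next_idx = 0                                  # round-robin position

    def accumulate(self, source: int, cell: bytes):
        """Called by a source port slice's accumulator to queue one cell."""
        self.fifos[source].append(cell)

    def dispatch_one(self):
        """FIFO read arbitrator: grant the next requesting FIFO one full cell."""
        n = len(self.fifos)
        for i in range(n):
            idx = (self.next_idx + i) % n
            if self.fifos[idx]:                 # a non-empty FIFO is a read request
                self.next_idx = (idx + 1) % n   # move on after one cell is read
                return self.fifos[idx].popleft()
        return None                             # no pending requests this cycle
```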
B. Port Slice Operation with Wide Cell Encoding and Flow Control
According to a further embodiment, a port slice operates with respect to wide cell encoding and a flow control condition.
In step 2710, entries in receive synch FIFO 515 are managed. In one example, receive synch FIFO module 515 is an 8-entry FIFO with write pointer and read pointer initialized to be 3 entries apart. Receive synch FIFO module 515 writes 64-bit data from a SERDES deserializer receiver 510, reads 64-bit data from the FIFO with a clock signal and delivers the data to accumulator 520, and maintains a three-entry separation between read/write pointers by adjusting the read pointer when the separation becomes less than or equal to 1.
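A software model of this pointer discipline is sketched below, under the assumption that re-centering simply moves the read pointer back to three entries behind the write pointer:

```python
class ReceiveSynchFifo:
    """8-entry synch FIFO; read pointer trails write pointer by ~3 entries."""
    SIZE, GAP = 8, 3

    def __init__(self):
        self.entries = [0] * self.SIZE
        self.wr, self.rd = self.GAP, 0        # initialized three entries apart

    def write(self, word: int):
        """Store one 64-bit word from the SERDES deserializer receiver."""
        self.entries[self.wr % self.SIZE] = word
        self.wr += 1

    def read(self) -> int:
        """Deliver one 64-bit word to the accumulator's clock domain."""
        if self.wr - self.rd <= 1:            # separation collapsed: re-center
            self.rd = self.wr - self.GAP      # (assumed recovery behavior)
        word = self.entries[self.rd % self.SIZE]
        self.rd += 1
        return word
```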
In step 2720, accumulator 520 receives two chunks of 32-bit data from receive synch FIFO 515. Accumulator 520 detects a special character K0 in the first bytes of the first chunk and the second chunk (step 2722). Accumulator 520 then extracts a destination slot number from the state field in the header if K0 is detected (step 2724).
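An illustrative parse of these two chunks follows; the exact field positions (K0 in the first byte, a state byte carrying the slot number) are assumptions consistent with the text, not a normative layout:

```python
K0 = 0x1C    # placeholder encoding of the special K0 character

def parse_header_chunks(chunk0: bytes, chunk1: bytes):
    """Return the destination slot number if both 32-bit chunks start with K0."""
    if chunk0[0] == K0 and chunk1[0] == K0:   # step 2722: detect K0
        state = chunk0[1]                     # assumed: state field follows K0
        return state & 0x0F                   # step 2724: slot number from state
    return None                               # not a cell header
```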
As shown in
As shown in
In step 2746, a respective FIFO controller indicates to FIFO read arbitrator 540 if K0 has been read or the FIFO RAM is empty. This indication is a read request for arbitration. In step 2748, a respective FIFO controller indicates to FIFO read arbitrator 540 whether K0 is aligned to the first 32-bit chunk or the second 32-bit chunk. When flow control from an output port is detected (such as when a predetermined flow control sequence of one or more characters is detected), the FIFO controller stops issuing read requests to FIFO read arbitrator 540 after the current cell is completely read from the FIFO RAM (step 2750).
As shown in
As shown in
C. Backplane Interface Adapter
To describe the structure and operation of the backplane interface adapter, reference is made to components shown in
D. Overall Operation of Backplane Interface Adapter
As shown in
E. First Traffic Processing Path
Deserializer receiver(s) 602 receive narrow input cells carrying packets of data. These narrow input cells are output to deserializer receiver(s) 602 from packet processors and/or from integrated bus translators (IBTs) coupled to packet processors. In one example, four deserializer receivers 602 are coupled to four serial links (such as, links 308A-D, 318A-C described above in
F. Narrow Cell Format
G. Traffic Sorting
Traffic sorter 610 sorts received narrow input cells based on a destination slot identifier. Traffic sorter 610 routes narrow cells destined for the same blade as BIA 600 (also called local traffic) to destination queues 615. Narrow cells destined for other blades in a switch across the switching fabric (also called global traffic) are routed to wide cell generators 620.
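The sort might be modeled as below; the NarrowCell record and the 8×7 = 56 grouping are illustrative assumptions drawn from the examples elsewhere in this description:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class NarrowCell:          # illustrative narrow-cell record
    source: int            # originating packet processor
    dest_slot: int         # destination slot identifier
    payload: bytes

local_queues = defaultdict(list)       # destination queues 615 (local traffic)
backplane_groups = defaultdict(list)   # feeds the wide cell generators 620

def sort_traffic(cells, local_slot: int):
    """Route local traffic to destination queues; group global traffic by
    (source, destination slot), e.g., 8 sources x 7 remote slots = 56 groups."""
    for cell in cells:
        if cell.dest_slot == local_slot:
            local_queues[cell.source].append(cell)    # stays on this blade
        else:
            backplane_groups[(cell.source, cell.dest_slot)].append(cell)
```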
H. Wide Striped Cell Generation
Wide cell generators 620 generate wide striped cells. The wide striped cells carry the packets of data received by BIA 600 in the narrow input cells. The wide cells extend across multiple stripes and include in-band control information in each stripe. In the interest of brevity, the operation of wide cell generators 620, 720 is further described with respect to a routine 1200 in
For each detected packet (step 1225), steps 1230-1240 are performed. In step 1230, wide cell generator 620, 720 encodes one or more new wide striped cells until data from all narrow input cells of the packet is distributed into the one or more new wide striped cells. This encoding is further described below with respect to routine 1400 and
In step 1240, wide cell generator 620 then writes the one or more new wide striped cells into a plurality of send queues 625. In the example of
I. Encoding Wide Striped Cells
According to a further feature of the present invention, a system and method for encoding wide striped cells are provided. In one embodiment, wide cell generators 620, 720 each generate wide striped cells which are encoded (step 1230).
J. Initial Block Encoding
In step 1410, wide cell generator 620, 720 encodes an initial block of a start wide striped cell with initial cell encoding information. The initial cell encoding information includes control information (such as, a special K0 character) and state information provided in each sub-block of an initial block of a wide striped cell.
In step 1420, wide cell generator(s) 620, 720 distribute initial bytes of packet data into available space in the initial block. In the example wide striped cell 1500 shown in
In step 1430, wide cell generator(s) 620, 720 distribute remaining bytes of packet data across one or more blocks of the first wide striped cell (and subsequent wide cells). In the example wide striped cell 1500, the maximum size of a wide striped cell is 160 bytes (8 blocks), which corresponds to a maximum of 148 bytes of data. In addition to the data bytes D0-D7 in the initial block, wide striped cell 1500 further has data bytes D8-D147 distributed in seven blocks (labeled in
In general, packet data continues to be distributed until an end of packet condition is reached or a maximum cell size is reached. Accordingly, checks are made of whether a maximum cell size is reached (step 1440) and whether the end of packet is reached (step 1450). If the maximum cell size is reached in step 1440 and more packet data needs to be distributed, then control returns to step 1410 to create additional wide striped cells to carry the rest of the packet data. If the maximum cell size is not reached in step 1440, then an end of packet check is made (step 1450). If an end of packet is reached, then the current wide striped cell being filled with packet data is the end wide striped cell. Note that for small packets of less than 148 bytes, only one wide striped cell is needed. Otherwise, more than one wide striped cell is used to carry a packet of data across multiple stripes. When an end of packet is reached in step 1450, control proceeds to step 1460.
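Putting steps 1410-1450 together, a hedged software sketch of the encoding loop follows. It assumes, consistently with the sizes given above, that each sub-block of the initial block carries K0 plus a state byte, that the first four sub-blocks also carry data bytes D0-D7, and that the fifth carries reserve bytes; the K0 value and the zero padding are placeholders:

```python
K0 = 0x1C                      # placeholder for the special K0 character
STRIPES, SUB = 5, 4            # five 4-byte sub-blocks per 20-byte block
MAX_PAYLOAD = 148              # data bytes per maximum-size (160-byte) cell

def encode_packet(payload: bytes, state: int):
    """Yield one or more encoded wide striped cells carrying one packet."""
    off, first = 0, True
    while first or off < len(payload):
        first = False
        chunk = payload[off:off + MAX_PAYLOAD]
        off += MAX_PAYLOAD
        cell = bytearray()
        # Step 1410: initial block -- K0 + state in every sub-block; data
        # bytes D0-D7 in the first four sub-blocks, reserve bytes in the fifth.
        for s in range(STRIPES):
            data = chunk[2 * s : 2 * s + 2] if s < 4 else b""
            cell += bytes([K0, state]) + data.ljust(2, b"\x00")
        # Steps 1430-1440: distribute remaining bytes block by block until the
        # end of packet or the maximum cell size is reached.
        rest = chunk[8:]
        for b in range(0, len(rest), STRIPES * SUB):
            cell += rest[b : b + STRIPES * SUB].ljust(STRIPES * SUB, b"\x00")
        # Step 1460 (end-of-packet encoding, which varies with where the
        # packet ends) is omitted from this sketch.
        yield bytes(cell)
```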
K. End of Packet Encoding
In step 1460, wide cell generator(s) 620, 720 further encode an end wide striped cell with end of packet information that varies depending upon the degree to which data has filled a wide striped cell. In one encoding scheme, the end of packet information varies depending upon a set of end of packet conditions including whether the end of packet occurs in an initial cycle or subsequent cycles, at a block boundary, or at a cell boundary.
As shown in item 2 of
L. Switching Fabric Transmit Arbitration
In one embodiment, BIA 600 also includes switching fabric transmit arbitrator 630. Switching fabric transmit arbitrator 630 arbitrates the order in which data stored in the stripe send queues 625, 725 is sent by transmitters 640, 740 to the switching fabric. Each stripe send queue 625, 725 stores a respective group of wide striped cells corresponding to a respective originating source packet processor and a destination slot identifier. Each wide striped cell has one or more blocks across multiple stripes. During operation, the switching fabric transmit arbitrator 630 selects a stripe send queue 625, 725 and pushes the next available cell to the transmitters 640, 740. In this way, one full cell is sent at a time. (Alternatively, a portion of a cell can be sent.) Each stripe of a wide cell is pushed to the respective transmitter 640, 740 for that stripe. In one example, during normal operation, a complete packet is sent to any particular slot or blade from a particular packet processor before a new packet is sent to that slot from a different packet processor. However, the packets for the different slots are sent during an arbitration cycle. In an alternative embodiment, other blades or slots are then selected in a round-robin fashion.
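For illustration, this first-level arbitration can be sketched as a round-robin walk over the per-(source, destination) send queues, pushing one data slice per stripe transmitter for each granted cell; all names here are assumptions:

```python
from collections import deque

SUB, STRIPES = 4, 5

def stripe_slice(cell: bytes, s: int) -> bytes:
    """The stripe-s sub-block taken from every 20-byte block of a wide cell."""
    return b"".join(cell[b + s * SUB : b + (s + 1) * SUB]
                    for b in range(0, len(cell), STRIPES * SUB))

class FabricTransmitArbitrator:
    """Round-robins over stripe send queues; one full wide cell per grant."""
    def __init__(self, send_queues: dict):
        self.queues = send_queues         # (source, dest_slot) -> deque of cells
        self.order = deque(send_queues)   # rotating order of queue keys

    def cycle(self):
        """One processing cycle: select a queue and push its next cell."""
        for _ in range(len(self.order)):
            key = self.order[0]
            self.order.rotate(-1)         # advance the round-robin pointer
            if self.queues[key]:
                cell = self.queues[key].popleft()
                # one data slice per stripe, for the matching cross point
                return key, [stripe_slice(cell, s) for s in range(STRIPES)]
        return None                       # all send queues empty
```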
M. Cross Point Processing of Stripes including Wide Cell Encoding
In one embodiment, switching fabric 645 includes a number n of cross point switches 202 corresponding to each of the stripes. Each cross point switch 202 (also referred to herein as a cross point or cross point chip) handles one data slice of wide cells corresponding to one respective stripe. In one example, five cross point switches 202A-202E are provided corresponding to five stripes. For clarity,
The operation of a cross point 202 and in particular a port slice 402F is now described with respect to an embodiment where stripes further include wide cell encoding and a flow control indication.
The data FIFOs 530, FIFO read arbitrator 540, multiplexer 550, dispatcher 560, transmit synch FIFO 570, and serializer transmitter(s) 580 of a port slice operate on this striped traffic as described above with respect to port slice 402F.
Cross point operation according to the present invention is described further below with respect to a further embodiment involving wide cell encoding and flow control.
N. Second Traffic Processing Path
As shown in
Translators 680 translate the received wide striped cells to narrow input cells carrying the packets of data. Serializer transmitters 692 transmit the narrow input cells to corresponding source packet processors or IBTs.
BIA 600 further includes stripe interfaces 660 (also called stripe interface modules) and stripe receive synchronization queues 685 coupled between deserializer receivers 650 and a controller 670. Each stripe interface 660 sorts received sub-blocks in each stripe based on source packet processor identifier and originating slot identifier information and stores the sorted received sub-blocks in the stripe receive synchronization queues 685.
Controller 670 includes an arbitrator 672, a striped-based wide cell assembler 674, and an administrative module 676. Arbitrator 672 arbitrates an order in which data stored in stripe receive synchronization queues 685 is sent to striped-based wide cell assembler 674. Striped-based wide cell assembler 674 assembles wide striped cells based on the received sub-blocks of data. A narrow/wide cell translator 680 then translates the arbitrated received wide striped cells to narrow input cells carrying the packets of data. Administrative module 676 is provided to carry out flow control, queue threshold level detection, and error detection (such as, stripe synchronization error detection), or other desired management or administrative functionality.
A second level of arbitration is also provided according to an embodiment of the present invention. BIA 600 further includes destination queues 615 and a local destination transmit arbitrator 690 in the second path. Destination queues 615 store narrow cells sent by traffic sorter 610 (from the first path) and the narrow cells translated by the translator 680 (from the second path). Local destination transmit arbitrator 690 arbitrates an order in which narrow input cells stored in destination queues 615 are sent to serializer transmitters 692. Finally, serializer transmitters 692 then transmit the narrow input cells to corresponding IBTs and/or source packet processors (and ultimately out of a blade through physical ports).
In the second traffic path, deserializer receiver 950 is coupled to cross clock domain synchronizer 952. Deserializer receiver 950 converts serial data slices of a stripe (e.g., sub-blocks) to parallel data. Cross clock domain synchronizer 952 synchronizes the parallel data.
Stripe interface 960 has a decoder 962 and sorter 964 to decode and sort received sub-blocks in each stripe based on source packet processor identifier and originating slot identifier information. Sorter 964 then stores the sorted received sub-blocks in stripe receive synchronization queues 965. Five groups of 56 stripe receive synchronization queues 965 are provided in total. This allows one queue to be dedicated for each group of sub-blocks received from a particular source per global blade (up to 8 source packet processors per blade for seven blades not including the current blade).
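The 5×56 queue arrangement can be pictured as a dictionary keyed by stripe, originating slot, and source packet processor, as in the following sketch (the counts follow the text; the names are assumptions):

```python
from collections import deque

STRIPES, REMOTE_SLOTS, PROCS_PER_SLOT = 5, 7, 8   # 5 groups of 7 x 8 = 56 queues

rx_sync_queues = {
    (stripe, slot, proc): deque()
    for stripe in range(STRIPES)
    for slot in range(REMOTE_SLOTS)
    for proc in range(PROCS_PER_SLOT)
}

def store_sub_block(stripe: int, slot: int, proc: int, sub_block: bytes):
    """Sorter: file each decoded sub-block by origin, ready for reassembly
    of wide cells by the stripe-based wide cell assembler."""
    rx_sync_queues[(stripe, slot, proc)].append(sub_block)
```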
Arbitrator 672 arbitrates an order in which data stored in stripe receive synchronization queues 685 is sent to striped-based wide cell assembler 674. Striped-based wide cell assembler 674 assembles wide striped cells based on the received sub-blocks of data. A narrow/wide cell translator 680 then translates the arbitrated received wide striped cells to narrow input cells carrying the packets of data as described above in
Destination queues include local destination queues 982 and backplane traffic queues 984. Local destination queues 982 store narrow cells sent by local traffic sorter 716. Backplane traffic queues 984 store narrow cells translated by the translator 680. Local destination transmit arbitrator 690 arbitrates an order in which narrow input cells stored in destination queues 982, 984 are sent to serializer transmitters 992. Finally, serializer transmitters 992 then transmit the narrow input cells to corresponding IBTs and/or source packet processors (and ultimately out of a blade through physical ports).
O. Cell Boundary Alignment
P. Packet Alignment
Q. Wide Striped Cell Size at Line Rate
In one example, a wide cell has a maximum size of eight blocks (160 bytes) which can carry 148 bytes of payload data and 12 bytes of in-band control information. Packets of data for full-duplex traffic can be carried in the wide cells at a 50 Gbps rate through the digital switch.
R. IBT and Packet Processing
The integrated packet controller (IPC) and integrated giga controller (IGC) functions are provided with a bus translator, described above as the IPC/IGC Bus Translator (IBT) 304. In one embodiment, the IBT is an ASIC that bridges one or more IPC/IGC ASICs. In such an embodiment, the IBT translates two 4-5 Gbps parallel streams into one 10 Gbps serial stream. The parallel interface can be the backplane interface of the IPC/IGC ASICs. The one 10 Gbps serial stream can be further processed, for example, as described herein with regard to interface adapters and striping.
Additionally, IBT 304 can be configured to operate with other architectures as would be apparent to one skilled in the relevant art(s) based at least on the teachings herein. For example, the IBT 304 can be implemented in packet processors using 10GE and OC-192 configurations. The functionality of the IBT 304 can be incorporated within existing packet processors or attached as an add-on component to a system.
In
More specifically, the bus translator 1702 translates data 1704 into data 1706 and data 1706 into data 1704. The data 1706 received by transceiver(s) 1710 is forwarded to a translator 1712. The translator 1712 parses and encodes the data 1706 into a desired format.
Here, the translator 1712 translates the data 1706 into the format of the data 1704. The translator 1712 is managed by an administration module 1718. One or more memory pools 1716 store the information of the data 1706 and the data 1704. One or more clocks 1714 provide the timing information to the translation operations of the translator 1712. Once the translator 1712 finishes translating the data 1706, it forwards the newly formatted information as the data 1704 to the transceiver(s) 1708. The transceiver(s) 1708 forward the data 1704.
As one skilled in the relevant art would recognize based on the teachings described herein, the operational direction of bus translator 1702 can be reversed and the data 1704 received by the bus translator 1702 and the data 1706 forwarded after translation.
For ease of illustration, but without limitation, the process of translating the data 1706 into the data 1704 is herein described as receiving, reception, and the like. Additionally, for ease of illustration, but without limitation, the process of translating the data 1704 into the data 1706 is herein described as transmitting, transmission, and the like.
In
The packet decoders 1810 receive the packets from the receivers 1808. The packet decoders 1810 parse the information from the packets. In one embodiment, as is described below in additional detail, the packet decoders 1810 copy the payload information from each packet as well as the additional information about the packet, such as time and place of origin, from the start of packet (SOP) and the end of packet (EOP) sections of the packet. The packet decoders 1810 forward the parsed information to memory pool(s) 1812. In one embodiment, the bus translator 1802 includes more than one memory pool 1812. In an alternative embodiment, alternate memory pool(s) 1818 can be sent the information. In an additional embodiment, the packet decoder(s) 1810 can forward different types of information, such as payload, time of delivery, origin, and the like, to different memory pools of the pools 1812 and 1818.
Reference clock 1820 provides timing information to the packet decoder(s) 1810. In one embodiment, reference clock 1820 is coupled to the IPC/IGC components sending the packets through the connections 1804a-n. In another embodiment, the reference clock 1820 provides reference and timing information to all the parallel components of the bus translator 1802.
Cell encoder(s) 1814 receives the information from the memory pool(s) 1812. In an alternative embodiment, the cell encoder(s) 1814 receives the information from the alternative memory pool(s) 1818. The cell encoder(s) 1814 formats the information into cells.
In the description that follows, these cells are also referred to as narrow cells. Furthermore, the cell encoder(s) 1814 can be configured to format the information into one or more cell types. In one embodiment, the cell format is a fixed size. In another embodiment, the cell format is a variable size.
The cell format is described in detail below with regard to cell encoding and decoding processes of
The cell encoder(s) 1814 forwards the cells to transmitter(s) 1816. The transmitter(s) 1816 receive the cells and transmit the cells through interface connections 1806a-n.
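A high-level software model of this transmit path (packet decoder, memory pool, cell encoder) is sketched below; the narrow-cell payload size and the one-byte SOP/EOP flags field are purely illustrative:

```python
from collections import deque

CELL_PAYLOAD = 32    # assumed narrow-cell payload size, for illustration only

def packets_to_cells(packets, memory_pool: deque):
    """Decode packets into a memory pool, then emit narrow cells."""
    # Packet decoder 1810: copy payload (with SOP/EOP bookkeeping) to the pool.
    for pkt in packets:
        memory_pool.append(pkt)
    # Cell encoder 1814: drain the pool, segmenting each packet into cells.
    while memory_pool:
        pkt = memory_pool.popleft()
        for off in range(0, len(pkt), CELL_PAYLOAD):
            sop = off == 0                          # start of packet
            eop = off + CELL_PAYLOAD >= len(pkt)    # end of packet
            header = bytes([(sop << 1) | eop])      # assumed 1-byte flags field
            yield header + pkt[off : off + CELL_PAYLOAD]
```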
Reference clock 1828 provides timing information to the cell encoder(s) 1814. In one embodiment, reference clock 1828 is coupled to the interface adapter components receiving the cells through the connections 1806a-n. In another embodiment, the reference clock 1828 provides reference and timing information to all the serial components of the bus translator 1802.
Flow controller 1822 measures and controls the incoming packets and outgoing cells by determining the status of the components of the bus translator 1802 and the status of the components connected to the bus translator 1802. Such components are previously described herein and additional detail is provided with regard to the interface adapters of the present invention.
In one embodiment, the flow controller 1822 controls the traffic through the connection 1806 by asserting a ready signal and de-asserting the ready signal in the event of an overflow in the bus translator 1802 or the IPC/IGC components further connected.
Administration module 1824 provides control features for the bus translator 1802. In one embodiment, the administration module 1824 provides error control and power-on and reset functionality for the bus translator 1802.
The cell decoders 1912 receive the cells from the synchronization module 1910. The cell decoders 1912 parse the information from the cells. In one embodiment, as is described below in additional detail, the cell decoders 1912 copy the payload information from each cell as well as the additional information about the cell, such as place of origin, from the slot and state information section of the cell.
In one embodiment, the cell format can be fixed. In another embodiment, the cell format can be variable. In yet another embodiment, the cells received by the bus translator 1902 can be of more than one cell format. The bus translator 1902 can be configured to decode these cell formats, as one skilled in the relevant art would recognize based on the teachings herein. Further details regarding the cell formats are described below with regard to the cell encoding processes of the present invention.
The cell decoders 1912 forward the parsed information to memory pool(s) 1914. In one embodiment, the bus translator 1902 includes more than one memory pool 1914. In an alternative embodiment, the information can be sent to alternate memory pool(s) 1916. In an additional embodiment, the cell decoder(s) 1912 can forward different types of information, such as payload, time of delivery, origin, and the like, to different memory pools of the pools 1914 and 1916.
Reference clock 1922 provides timing information to the cell decoder(s) 1912. In one embodiment, reference clock 1922 is coupled to the interface adapter components sending the cells through the connections 1904a-n. In another embodiment, the reference clock 1922 provides reference and timing information to all the serial components of the bus translator 1902.
Packet encoder(s) 1918 receive the information from the memory pool(s) 1914. In an alternative embodiment, the packet encoder(s) 1918 receive the information from the alternative memory pool(s) 1916. The packet encoder(s) 1918 format the information into packets.
The packet format is determined by the configuration of the IPC/IGC components and the requirements for the system.
The packet encoder(s) 1918 forwards the packets to transmitter(s) 1920. The transmitter(s) 1920 receive the packets and transmit the packets through interface connections 1906a-n.
Reference clock 1928 provides timing information to the packet encoder(s) 1918. In one embodiment, reference clock 1928 is coupled to the IPC/IGC components receiving the packets through the connections 1906a-n. In another embodiment, the reference clock 1928 provides reference and timing information to all the parallel components of the bus translator 1902.
Flow controller 1926 measures and controls the incoming cells and outgoing packets by determining the status of the components of the bus translator 1902 and the status of the components connected to the bus translator 1902. Such components are previously described herein and additional detail is provided with regard to the interface adapters of the present invention.
In one embodiment, the flow controller 1926 controls the traffic through the connection 1906 by asserting a ready signal and de-asserting the ready signal in the event of an overflow in the bus translator 1902 or the IPC/IGC components further connected.
Administration module 1924 provides control features for the bus translator 1902. In one embodiment, the administration module 1924 provides error control and power-on and reset functionality for the bus translator 1902.
In terms of packet processing, packets are received by the bus translator 2002 by receivers 2012. The packets are processed into cells and forwarded to a serializer/deserializer (SERDES) 2026. SERDES 2026 acts as a transceiver for the cells being processed by the bus translator 2002. The SERDES 2026 transmits the cells via interface connection 2006.
In terms of cell processing, cells are received by the bus translator 2002 through the interface connection 2008 to the SERDES 2026. The cells are processed into packets and forwarded to transmitters 2036. The transmitters 2036 forward the packets to the IPC/IGC components through interface connections 2010a-n.
The reference clocks 2040 and 2048 are similar to those previously described herein.
The above-described separation of serial and parallel operations is a feature of embodiments of the present invention. In such embodiments, the parallel format of incoming and outgoing packets at ports 2014a-n and 2038a-b, respectively, is remapped into a serial cell format at the SERDES 2026.
Furthermore, according to embodiments of the present invention, the line rates of the ports 2014a-n have a shared utilization limited only by the line rate of output 2006; the same applies to ports 2038a-n and input 2008.
The remapping of parallel packets into serial cells is described in further detail below.
Administration module 2140 operates as previously described. As shown, the administration module 2140 includes an administration control element and an administration register. The administration control element monitors the operation of the bus translator 2102 and provides the reset and power-on functionality as previously described.
The reference clocks 2134 and 2136 are similar to those previously described herein.
Additionally, memory pool 2130 includes two pairs of FIFOs. The memory pool 2130 performs as the memory pools previously described herein.
Interface connections 2106 and 2108 connect previously described interface adapters to the bus translator 2102 through the SERDES 2124. In one embodiment, the connections 2106 and 2108 are serial links. In another embodiment, the serial links are divided into four lanes.
In one embodiment, the bus translator 2102 is an IBT 304 that translates one or more 4 Gbps parallel IPC/IGC components into four 3.125 Gbps serial XAUI interface links or lanes. In one embodiment, the back planes are the IPC/IGC interface connections. The bus translator 2102 formats incoming data into one or more cell formats.
In one embodiment, the cell format can be a four byte header and a 32 byte data payload. In a further embodiment, cells are separated by a special K character inserted into the header. In another embodiment, the last cell of a packet is indicated by one or more special K1 characters.
The cell formats can include both fixed length cells and variable length cells. The 36-byte (4-byte header plus 32-byte payload) encoding is an example of a fixed length cell format. In an alternative embodiment, cell formats can be implemented where the cell length exceeds the 36 bytes (4 bytes+32 bytes) previously described.
The bus translator 2102 has memory pools 2116 to act as internal data buffers to handle pipeline latency. For each IPC/IGC component, the bus translator 2102 has two data FIFOs and one header FIFO.
In one embodiment, the cell encoder 2160 merges the data from each of the packet decoders 2150a-b into one 10 Gbps data stream to the interface adapter. The cell encoder 2160 merges the data by interleaving the data at each cell boundary. Each cell boundary is determined by the special K characters.
According to one embodiment, the received packets are 32-bit aligned, while the parallel interface of the SERDES elements is 64 bits wide.
In practice it can be difficult to achieve line rate for any packet length. Line rate means maintaining the same rate of output in cells as the rate at which packets are being received. Packets can have a four byte header overhead (SOP) and a four byte tail overhead (EOP). Therefore, the bus translator 2102 must parse the packets without the delays of typical parsing and routing components. More specifically, the bus translator 2102 formats parallel data into cell format using special K characters, as described in more detail below, to merge state information and slot information (together, control information) in band with the data streams. Thus, in one embodiment, each 32 bytes of cell data is accompanied by a four byte header.
In an additional embodiment, a separate native mode data path is provided for packet to cell translation.
Although a separate native mode data path is not shown for cell to packet translation, one skilled in the relevant art would recognize how to accomplish it based at least on the teachings described herein, for example, by configuring two FIFOs for dedicated storage of 10 Gbps link information. In one embodiment, however, the bus translator 2102 processes native mode and non-native mode data paths in a shared operation.
In an additional embodiment, where there is a zero body cell format being received by the interface adapter or BIA, the IBT 304 holds one last data transfer for each source slot. When it receives the EOP with the zero body cell format, the last one or two transfers are released to be transmitted from the parallel interface.
S. Narrow Cell and Packet Encoding Processes
According to one embodiment of the present invention, the cell includes a special character K0 2190; control information 2194; optionally, one or more reserved fields 2196a-b; and data 2198a-n. In an alternate embodiment, data 2198a-n can contain more than D0-D31.
In one embodiment, the cell is transferred as rows or slots of four bytes each, one byte per XAUI lane.
As previously described herein, the IBT 304 transmits and receives cells to and from the BIA 302 through the XAUI interface. The IBT 304 transmits and receives packets to and from the IPC/IGC components, as well as other controller components (e.g., a 10GE packet processor), through a parallel interface. The packets are segmented into cells which consist of a four byte header followed by 32 bytes of data. The end of packet is signaled by a K1 special character on any invalid data bytes within a four-byte transfer, or by four K1 characters on all XAUI lanes. In one embodiment, each byte is serialized onto one XAUI lane. The following table illustrates, from right to left, a byte-by-byte representation of a cell according to one embodiment of the present invention:
The packets are formatted into cells that consist of a header plus a data payload. The 4 bytes of header take one cycle or row on the four XAUI lanes. The header has a K0 special character on Lane 0 to indicate that the current transfer is a header. The control information starts on Lane 1 of a header.
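For illustration only, the following C sketch models the 36-byte narrow cell described above as one header row followed by 32 data bytes. The struct name, the K0 code point value, and the exact placement of the control byte are assumptions made for the sketch; they are not taken from the specification.

    #include <stdint.h>
    #include <string.h>

    #define K0 0xBCu            /* assumed code point for the K0 special character */
    #define CELL_DATA_BYTES 32  /* 32-byte payload per narrow cell */

    /* One 36-byte narrow cell: a 4-byte header row followed by eight
     * 4-byte data rows; each row occupies one cycle on the four XAUI lanes. */
    struct narrow_cell {
        uint8_t k0;       /* lane 0 of the header row: K0 marks "this row is a header" */
        uint8_t control;  /* lane 1: slot/state (control) information */
        uint8_t rsvd[2];  /* lanes 2-3: reserved */
        uint8_t data[CELL_DATA_BYTES]; /* D0..D31, one byte per lane per cycle */
    };

    /* Build one cell from up to 32 bytes of packet payload. */
    static void build_cell(struct narrow_cell *c, uint8_t control,
                           const uint8_t *payload, size_t len)
    {
        c->k0 = K0;
        c->control = control;
        c->rsvd[0] = c->rsvd[1] = 0;
        memset(c->data, 0, CELL_DATA_BYTES);
        memcpy(c->data, payload, len < CELL_DATA_BYTES ? len : CELL_DATA_BYTES);
    }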
In one embodiment, the IBT 304 accepts two IPC/IGC back plane buses and translates them into one 10 Gbps serial stream.
A routine for translating packets into cells according to one embodiment of the present invention is now described.
In step 2204, the IBT 304 determines the port types through which it will be receiving packets. In one embodiment, the ports are configured for 4 Gbps traffic from IPC/IGC components. The process immediately proceeds to step 2206.
In step 2206, the IBT 304 selects a cell format type based on the type of traffic it will be processing. In one embodiment, the IBT 304 selects the cell format type based in part on the port type determination of step 2204. The process immediately proceeds to step 2208.
In step 2208, the IBT 304 receives one or more packets through its ports from the interface connections, as previously described. The rate at which packets are delivered depends on the components sending the packets. The process immediately proceeds to step 2210.
In step 2210, the IBT 304 parses the one or more packets received in step 2208 for the information contained therein. In one embodiment, the packet decoder(s) of the IBT 304 parse the packets for the information contained within the payload section of the packet, as well as the control or routing information included with the header of each given packet. The process immediately proceeds to step 2212.
In step 2212, the IBT 304 optionally stores the information parsed in step 2210. In one embodiment, the memory pool(s) of the IBT 304 are utilized to store the information. The process immediately proceeds to step 2214.
In step 2214, the IBT 304 formats the information into one or more cells. In one embodiment, the cell encoder(s) of the IBT 304 access the information parsed from the one or more packets. The information includes the data being trafficked as well as slot and state information (i.e., control information) about where the data is being sent. As previously described, the cell format includes special characters which are added to the information. The process immediately proceeds to step 2216.
In step 2216, the IBT 304 forwards the formatted cells. In one embodiment, the SERDES of the IBT 304 receives the formatted cells and serializes them for transport to the BIA 302 of the present invention. The process continues until instructed otherwise.
An alternative routine for translating packets into cells, in which the IBT 304 selects between fixed and variable cell formats, is now described.
In step 2304, the IBT 304 determines the port types through which it will be receiving packets. The process immediately proceeds to step 2306.
In step 2306, the IBT 304 determines if the port type will, either individually or in combination, exceed the threshold that can be maintained. In other words, the IBT 304 checks to see if it can match the line rate of incoming packets without reaching the internal rate maximum. If it can, then the process proceeds to step 2310. If not, then the process proceeds to step 2308.
In step 2308, given that the IBT 304 has determined that it will be operating at its highest level, the IBT 304 selects a variable cell size that will allow it to reduce the number of cells being formatted and forwarded in the later steps of the process. In one embodiment, the cell format provides for cells of whole integer multiples of each of the one or more packets received. In another embodiment, the IBT 304 selects a cell format that provides for a variable cell size that allows for maximum length cells to be delivered until the packet is completed. For example, if a given packet is 2.3 cell lengths, then three cells will be formatted; however, the third cell will be roughly a third of the size of the preceding two cells. The process immediately proceeds to step 2312.
In step 2310, given that the IBT 304 has determined that it will not be operating at its highest level, the IBT 304 selects a fixed cell size that will allow the IBT 304 to process information with lower processing overhead. The process immediately proceeds to step 2312.
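To make the arithmetic of step 2308 concrete, the short C program below, a hypothetical illustration assuming the 32-byte cell body described elsewhere herein, computes the cell count and final cell size for the 2.3-cell-length example above.

    #include <stdio.h>

    #define CELL_BODY 32 /* bytes of payload per full cell */

    int main(void)
    {
        unsigned pkt_len = 74; /* example: 74 bytes is about 2.3 cell lengths */
        unsigned full_cells = pkt_len / CELL_BODY;    /* 2 full cells        */
        unsigned tail = pkt_len % CELL_BODY;          /* 10-byte final body  */
        unsigned total = full_cells + (tail ? 1 : 0); /* 3 cells in all      */

        printf("%u cells (%u full, final body %u bytes)\n", total, full_cells, tail);
        return 0;
    }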
In step 2312, the IBT 304 receives one or more packets. The process immediately proceeds to step 2314.
In step 2314, the IBT 304 parses the control information from each of the one or more packets. The process immediately proceeds to step 2316.
In step 2316, the IBT 304 determines the slot and state information for each of the one or more packets. In one embodiment, the slot and state information is determined in part from the control information parsed from each of the one or more packets. The process immediately proceeds to step 2318.
In step 2318, the IBT 304 stores the slot and state information. The process immediately proceeds to step 2320.
In step 2320, the IBT 304 parses the payload of each of the one or more packets for the data contained therein. The process immediately proceeds to step 2322.
In step 2322, the IBT 304 stores the data parsed from each of the one or more packets. The process immediately proceeds to step 2324.
In step 2324, the IBT 304 accesses the control information. In one embodiment, the cell encoder(s) of the IBT 304 access the memory pool(s) of the IBT 304 to obtain the control information. The process immediately proceeds to step 2326.
In step 2326, the IBT 304 accesses the data parsed from each of the one or more packets. In one embodiment, the cell encoder(s) of the IBT 304 access the memory pool(s) of the IBT 304 to obtain the data. The process immediately proceeds to step 2328.
In step 2328, the IBT 304 constructs each cell by inserting a special character at the beginning of the cell currently being constructed. In one embodiment, the special character is K0. The process immediately proceeds to step 2330.
In step 2330, the IBT 304 inserts the slot information. In one embodiment, the IBT 304 inserts the slot information into the next lane, such as space 2194. The process immediately proceeds to step 2332.
In step 2332, the IBT 304 inserts the state information. In one embodiment, the IBT 304 inserts the state information into the next lane after the one used for the slot information, such as reserved 2196a. The process immediately proceeds to step 2334.
In step 2334, the IBT 304 inserts the data. The process immediately proceeds to step 2336.
In step 2336, the IBT 304 determines if there is additional data to be formatted. For example, if there is remaining data from a given packet. If so, then the process loops back to step 2328. If not, then the process immediately proceeds to step 2338.
In step 2338, the IBT 304 inserts the special character that indicates the end of the cell transmission (of one or more cells). In one embodiment, when the last of the cells is transmitted, the special character is K1. The process proceeds to step 2340.
In step 2340, the IBT 304 forwards the cells. The process continues until instructed otherwise.
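The construction loop of steps 2328 through 2338 can be summarized in C as follows. This is a minimal sketch, assuming a byte-oriented output buffer and single-byte K0/K1 code points; the function and constant names are hypothetical and do not represent a register-level implementation.

    #include <stdint.h>
    #include <string.h>

    #define K0 0xBCu  /* assumed start-of-cell special character        */
    #define K1 0xFDu  /* assumed end-of-transmission special character  */
    #define CELL_BODY 32

    /* Emit the cell stream for one packet: for each chunk of up to 32
     * bytes insert K0, slot, state, then the data (steps 2328-2336);
     * append K1 when no data remains (step 2338). Returns bytes written. */
    static size_t encode_packet(uint8_t *out, uint8_t slot, uint8_t state,
                                const uint8_t *data, size_t len)
    {
        size_t n = 0;
        while (len > 0) {                      /* step 2336: more data?        */
            size_t chunk = len < CELL_BODY ? len : CELL_BODY;
            out[n++] = K0;                     /* step 2328: special character */
            out[n++] = slot;                   /* step 2330: slot information  */
            out[n++] = state;                  /* step 2332: state information */
            out[n++] = 0;                      /* reserved                     */
            memcpy(out + n, data, chunk);      /* step 2334: insert the data   */
            n += chunk;
            data += chunk;
            len -= chunk;
        }
        out[n++] = K1;                         /* step 2338: end of transmission */
        return n;
    }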
A routine for translating cells into packets according to one embodiment of the present invention is now described.
In step 2404, the IBT 304 receives one or more cells. In one embodiment, the cells are received by the SERDES of the IBT 304 and forwarded to the cell decoder(s) of the IBT 304. In another embodiment, the SERDES of the IBT 304 forwards the cells to a synchronization buffer or queue that temporarily holds the cells so that their proper order can be maintained. These steps are described below with regard to steps 2406 and 2408. The process immediately proceeds to step 2406.
In step 2406, the IBT 304 synchronizes the one or more cells into the proper order. The process immediately proceeds to step 2408.
In step 2408, the IBT 304 optionally checks the one or more cells to determine if they are in their proper order.
In one embodiment, steps 2406 and 2408 are performed by a synchronization FIFO, as described below with respect to steps 2506, 2508, and 2510. The process immediately proceeds to step 2410.
In step 2410, the IBT 304 parses the one or more cells into control information and payload data. The process immediately proceeds to step 2412.
In step 2412, the IBT 304 stores the control information and payload data. The process immediately proceeds to step 2414.
In step 2414, the IBT 304 formats the information into one or more packets. The process immediately proceeds to step 2416.
In step 2416, the IBT 304 forwards the one or more packets. The process continues until instructed otherwise.
A more detailed routine for translating cells into packets according to one embodiment of the present invention is now described.
In step 2504, the IBT 304 receives one or more cells. The process immediately proceeds to step 2506.
In step 2506, the IBT 304 optionally queues the one or more cells. The process immediately proceeds to step 2508.
In step 2508, the IBT 304 optionally determines if the cells are arriving in the proper order. If so, then the process immediately proceeds to step 2512. If not, then the process immediately proceeds to step 2510.
In step 2510, the IBT 304 holds one or more of the one or more cells until the proper order is regained. In one embodiment, in the event that cells are lost, the IBT 304 provides error control functionality, as described herein, to abort the transfer and/or have the transfer re-initiated. The process immediately proceeds to step 2514.
In step 2512, the IBT 304 parses the cell for control information. The process immediately proceeds to step 2514.
In step 2514, the IBT 304 determines the slot and state information. The process immediately proceeds to step 2516.
In step 2516, the IBT 304 stores the slot and state information. The process immediately proceeds to step 2518.
In one embodiment, the state and slot information includes configuration information as shown in the table below:
In one embodiment, the IBT 304 has configuration registers. They are used to enable Backplane and IPC/IGC destination slots.
In step 2518, the IBT 304 parses the cell for data. The process immediately proceeds to step 2520.
In step 2520, the IBT 304 stores the data parsed from each of the one or more cells. The process immediately proceeds to step 2522.
In step 2522, the IBT 304 accesses the control information. The process immediately proceeds to step 2524.
In step 2524, the IBT 304 accesses the data. The process immediately proceeds to step 2526.
In step 2526, the IBT 304 forms one or more packets. The process immediately proceeds to step 2528.
In step 2528, the IBT 304 forwards the one or more packets. The process continues until instructed otherwise.
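For comparison, the parsing of steps 2512 through 2526 might be sketched as follows, assuming the same hypothetical framing as the encoder sketch above (a K0 header byte, slot, state, one reserved byte, then a fixed 32-byte body, with K1 terminating the stream).

    #include <stdint.h>
    #include <string.h>

    #define K0 0xBCu
    #define K1 0xFDu
    #define CELL_BODY 32

    /* Reassemble one packet from a narrow-cell stream. Returns the number
     * of payload bytes recovered, or -1 on a format error. */
    static long decode_cells(const uint8_t *in, size_t in_len,
                             uint8_t *pkt, uint8_t *slot, uint8_t *state)
    {
        size_t i = 0, out = 0;
        while (i < in_len && in[i] != K1) {
            if (in[i] != K0 || i + 4 + CELL_BODY > in_len)
                return -1;                 /* format error: resynchronize       */
            *slot  = in[i + 1];            /* steps 2512/2514: control info     */
            *state = in[i + 2];
            memcpy(pkt + out, in + i + 4, CELL_BODY); /* step 2518: the data    */
            out += CELL_BODY;
            i += 4 + CELL_BODY;            /* advance to the next cell header   */
        }
        return (long)out;                  /* step 2526: packet body recovered  */
    }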
T. Administrative Process and Error Control
This section describes potential error conditions that might occur in serial links and cross-point switches in the backplane as well as various error control embodiments of the present invention. Various recovery and reset routines of the present invention are also described.
The routines described herein are generally designed to detect, prevent, and recover from errors of the following nature:
1) Link Error—A link error occurs as a result of a bit error or a byte alignment problem within a SERDES. Since the clock is recovered from the data stream, there is a possibility of a byte alignment problem if there are not enough data transitions. Bit errors can also occur as a result of external noise on the line. The SERDES can also detect exception conditions, such as SOP characters in lane 1, and can mark them as link errors.
2) Lane Synchronization Error—A lane is defined as one serial link among the four serial links that make up the 10 Gbps SERDES. As described elsewhere herein, there are four deep FIFOs within the SERDES core to compensate for any transmission line skew and synchronize the lanes so as to present a unified 10 Gbps stream to the core logic. There are possible cases where the FIFOs might overflow or underflow, which can result in a lane synchronization error. There are also scenarios in which a lane synchronization sequence might detect a possible alignment problem.
3) Stripe Synchronization Error—Stripe synchronization error refers to any error in the flow of wide cells of data sent across multiple stripes through the switching fabric according to the invention. Such stripe synchronization errors (also referred to as stripe synchronization error conditions or simply error conditions) can be due to a link error in a serial pipe leading to or from a cross-point, or to an error in the cross-point itself.
In one embodiment, a receiving BIA contains deep FIFOs (such as 56 or 64 FIFOs) that are sorted according to sending source and stripe. Stripe synchronization errors can be detected by monitoring the FIFOs and detecting an overflow and/or underflow of one or more FIFOs within the striped data paths. In other scenarios, the stripes may become completely out of synchronization. In one recovery embodiment, some or all of the XPNT modules would arbitrate independently (the XPNT modules operate independently, as described elsewhere herein) to clear the affected FIFOs and recover from a known state.
Additional error conditions and combinations of error conditions are possible, as would be apparent to one skilled in the relevant art(s) based at least on the teachings herein.
The routines for detection and prevention of these error conditions are summarized immediately below and described with respect to detailed embodiments of the present invention thereafter.
In general, the present invention can manage the bus translator as illustrated in the following routine.
In step 2604, the IBT 304 determines the status of its internal components. The process immediately proceeds to step 2606.
In step 2606, the IBT 304 determines the status of its links to external components. The process immediately proceeds to step 2608.
In step 2608, the IBT 304 monitors the operations of both the internal and external components. The process immediately proceeds to step 2610.
In step 2610, the IBT 304 monitors the registers for administrative commands. The process immediately proceeds to step 2612.
In step 2612, the IBT 304 performs resets of given components as instructed. The process immediately proceeds to step 2614.
In step 2614, the IBT 304 configures the operations of given components. The process continues until instructed otherwise.
In one embodiment, any errors detected on the receiving side of the BIA 302 are treated in a fashion identical to the error control methods described herein for errors received on the XPNT 202 from the BIA 302. In operational embodiments where the destination slot cannot be known under certain conditions by the BIA 302, the following process is carried out by the BIA 302:
a. Send an abort of packet (AOP) to all slots.
b. Wait for error to go away, that is, when buffers are cleared or flushed.
c. Once buffers are clear, sync to the first K0 token with SOP to begin accepting data.
In the event that an error is detected on the receiving side of the IBT 304, it is treated as if the error was seen by the BIA 302 from IBT 304. The following process will be used:
a. Send an AOP to all slots of the downstream IPC/IGC to terminate any packet in progress.
b. Wait for buffers to flush and clear the error-causing data.
c. Sync to K0 token after error goes away (after buffers are flushed) to begin accepting data.
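Both recovery procedures above share the same shape, which the following C sketch captures as a small state machine. The helper functions are hypothetical stand-ins for hardware mechanisms described elsewhere herein, not actual interfaces of the BIA 302 or IBT 304.

    /* Shared recovery shape for BIA 302 and IBT 304 receive errors:
     * abort in-progress packets, drain, then resynchronize on K0 + SOP. */
    enum recovery_state { SEND_AOP, DRAIN, RESYNC, ACCEPTING };

    extern void send_aop_all_slots(void);  /* step a: abort packet to all slots */
    extern int  buffers_clear(void);       /* step b: flushed/cleared yet?      */
    extern int  saw_k0_with_sop(void);     /* step c: first K0 token with SOP   */

    static enum recovery_state recovery_step(enum recovery_state s)
    {
        switch (s) {
        case SEND_AOP: send_aop_all_slots();              return DRAIN;
        case DRAIN:    return buffers_clear()   ? RESYNC    : DRAIN;
        case RESYNC:   return saw_k0_with_sop() ? ACCEPTING : RESYNC;
        default:       return ACCEPTING;  /* normal operation resumes */
        }
    }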
(1) BIA Administrative Module
In one embodiment, administrative module 676 includes a level monitor 2806, a stripe synchronization error detector 2808, a control character (K2) presence tracker 2810, and a flow controller 2812.
Stripe synchronization error detector 2808 detects stripe synchronization errors based on the conditions of the FIFOs monitored by level monitor 2806. A stripe synchronization error can be any error in the flow of wide cells of data sent across multiple stripes through the switching fabric according to the invention. Such stripe synchronization errors can be due to a link error in a serial pipe leading to or from a cross-point, or to an error in the cross-point itself. For clarity, a link error in a serial pipe leading from a sending BIA to a cross-point is referred to as an “incoming link error”, and a link error in a serial pipe leading from a cross-point to a receiving BIA is referred to as an “outgoing link error.” When a stripe synchronization error is detected, stripe synchronization error detector 2808 sends a signal to flow controller 2812. Flow controller 2812 then initiates an appropriate recovery routine to re-synchronize data flow across the stripes in the switching fabric. Among other things, such a recovery routine can involve sending control characters (such as special K2 characters) across the stripes in the switching fabric. Control character (K2) presence tracker 2810 monitors special K2 characters received in the data flow at a BIA. Flow controller 2812 also provides control logic for the administrative module 676 and the modules therein. Flow controller 2812 allows the modules of the administrative module 676 to perform their functions as described herein by transmitting and receiving information regarding the status of the various FIFOs, BIAs, XPNTs, and other components of the present invention. Examples of detection and recovery from stripe synchronization errors are described further below.
Consider an example where wide cells of data are sent from slots 0 and 1 across stripes 0-4 through respective cross points 2856A-E to slot2 2858. One type of error can occur when link 2853 between slot0 2852 and xpnt0 2856A is broken. In such an event, xpnt0 2856A will detect a broken link, which will result in it sending an error signal back to the source slot0 2852. This will cause slot0 2852 to stop sending traffic and send out a K2 sequence. The xpnt0 2856A can also send an abort cell (AOP) to all the destinations in order to notify them that an error has occurred. In one embodiment, this is done as soon as the error is detected.
In other embodiments, there is, momentarily, a situation where xpnt1 2856B through xpnt4 2856E are still sending data from slot0 2852 and slot1 2854 to slot2 2858, while xpnt0 2856A is sending data only from slot1 2854 because link 2853 is broken between slot0 2852 and xpnt0 2856A. This can cause the sync queue in slot2 2858 that corresponds to the stripe0/slot1 link to overflow, since it will receive more data from slot1 2854 than the other stripes, and an underflow in the queue in slot2 2858 that corresponds to stripe0/slot0 2852, since that link is broken.
Administrative module 676 can detect this type of stripe synchronization error condition as follows. Level monitor 2806 monitors the levels of each of the FIFOs 2862. Stripe synchronization error detector 2808 then detects the presence of any overflow and/or underflow condition in the levels of the sorted FIFOs. In this example of an incoming link error, stripe synchronization error detector 2808 would detect the occurrence of the underflow condition in the FIFO for stripe0/slot0 and the overflow condition in the FIFO for stripe0/slot1. Stripe synchronization error detector 2808 sends a signal to flow controller 2812. Flow controller 2812 then initiates an appropriate recovery routine to re-synchronize data flow across the stripes in the switching fabric. Among other things, such a recovery routine can involve sending control characters (such as special K2 characters) from slot0 across the stripes in the switching fabric. Control character (K2) presence tracker 2810 monitors special K2 characters received in the data flow at a BIA.
In the embodiment described above, when slot0 2852 is able to, it sends out a K2 sequence that will allow the queues to sync up. The sync is done at the first K0 character that comes from slot0 2852 with SOP; in other words, sync to the first new packet after K2. Since the sync queue corresponding to slot1/stripe0 in slot2 2858 can overflow, there will be a flow control event sent from slot2 2858 to xpnt0 2856A to stop sending data from slot1 2854, thus allowing the traffic from slot1 2854 not to be affected as a result of the slot0 2852 link failure and maintaining synchronization for data from slot1 2854.
In another example, xpnt0 2856A goes down and is no longer operational.
Still another example is when the link 2857 between xpnt0 2856A and slot2 2858 is broken. In such a case, the BIA at slot2 detects the break. In one embodiment, an RFT of the BIA detects the break, as described below with respect to embodiments of the present invention. Flow controller 2812 of the BIA sends a flow control event/signal back to xpnt0 2856A, which will get propagated back to slot0 2852, slot1 2854, and any slots present in the system. This can cause the source slots to stop sending traffic to slot2 2858. These slots can still send traffic to other destination slots similar to slot2 2858. In the meantime, the BIA will abort any partial packets that it has received and wait for the K2 sequence to recover the link. As described herein, it will sync to the first SOP following a K2. The presence of a first SOP following a K2 can be detected by control character presence tracker 2810.
The functionality of the administrative module 676 is further described with respect to the following routines.
In step 2902, module 676 sends a common control character in striped cells in all the lanes for a predetermined number of cycles. In one embodiment, a number of the common control characters are sent through the system.
In step 2904, module 676 evaluates the common control characters received in stripe receive synchronization queues. The module 676 evaluates the received common control characters to determine whether the system is re-synchronized.
In step 2906, the module 676 determines the re-synchronization condition. If the system is re-synchronized, then the routine proceeds to step 2910. If not, then the system proceeds to step 2908. In one embodiment, the module 676 determines if the FIFOs are all empty or cleared at the same time. In another embodiment, the module 676 checks the state bits for each of the FIFOs.
In step 2908, the module 676 generates an error message or other administrative signal. In one embodiment, the module 676 generates an error message such that the other components of the system begin recovery measures anew.
In step 2910, the module 676 returns to await reception of an error condition or other administrative command to begin routine 2900 anew.
Another routine of the module 676 is now described.
In step 3002, the module 676 monitors the levels of stripe receive synchronization queues. In one embodiment, level monitor 2806 performs this function within the module 676.
In step 3004, the module 676 determines whether an out of synchronization queue threshold, such as an overflow and/or underflow condition, is detected. In one embodiment, stripe synchronization error detector 2808 performs this function within the module 676. If so, then the process proceeds to step 3006. If not, then the process proceeds to step 3002. In one embodiment, the module 676 transmits a no-error message or signal that can be received by other systems and logged for future reference.
In step 3006, the module 676 generates an out of synchronization message or other administrative signal that alerts the other components of the present invention that synchronization has been lost. In one embodiment, flow controller 2812 sends a signal back to the transmitting SXPNT which is further sent back to the RFT, which can then instantiate the K2 sequence of the present invention, as described elsewhere herein.
In step 3008, the module 676 initiates a re-synchronization routine for striped cell traffic across all lanes. In one embodiment, the module 676 initiates routine 2900, described above.
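A minimal sketch of the monitoring loop of steps 3002 through 3008 follows, assuming software-visible queue levels; the queue count, threshold marks, and helper names are illustrative assumptions, and a real implementation would be hardware logic within module 676.

    #define NUM_QUEUES 56   /* e.g., sorted per sending source and stripe */

    extern unsigned queue_level(int q);       /* hypothetical level readback  */
    extern void signal_out_of_sync(int q);    /* step 3006: alert components  */
    extern void start_k2_resync(void);        /* step 3008: re-sync all lanes */

    /* One pass of the level monitor (step 3002): flag any overflow or
     * underflow threshold crossing (step 3004) and kick off recovery. */
    static void monitor_pass(unsigned lo_mark, unsigned hi_mark)
    {
        for (int q = 0; q < NUM_QUEUES; q++) {
            unsigned level = queue_level(q);
            if (level <= lo_mark || level >= hi_mark) {
                signal_out_of_sync(q);
                start_k2_resync();
                return;
            }
        }
        /* no threshold crossed: monitoring simply continues (step 3002) */
    }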
Administrative module 676, and any of a level monitor 2806, a stripe synchronization error detector 2808, a control character (K2) presence tracker 2810, and a flow controller 2812, can be implemented in software, firmware, hardware or any combination thereof. Further, the functionality carried out in administrative module 676, and each of level monitor 2806, stripe synchronization error detector 2808, control character (K2) presence tracker 2810, and flow controller 2812, is described for convenience with respect to modules or blocks; however, the boundaries of such modules and the distribution of functionality therebetween are illustrative and not intended to limit the present invention. Indeed, the functionality of administrative module 676, and each of level monitor 2806, stripe synchronization error detector 2808, control character (K2) presence tracker 2810, and flow controller 2812, can be combined into one module or distributed across any combination of modules.
(2) Redundant Fabric Transceivers
Additional detailed embodiments of the present invention are described immediately herein with respect to the implementation of one or more redundant fabric transceivers (RFTs) that implement the features of module 676.
According to embodiments of the present invention, RFT ASICs are a bridge between one SBIA ASIC and two switching fabric modules (SFMs) in order to provide switching redundancy in the switching system described herein.
In the redundant switching case, each RFT bridges traffic between one SBIA and the two SFMs.
Thus, the RFT of the present invention provides redundant switching and is capable of performing the following tasks: i) operations as a multiplexer and de-multiplexer; ii) sorting of traffic based on encoded source/destination slot information in order to handle flow control; iii) flow control generation; iv) SERDES; and v) error handling. As such, the RFT is an implementation of the present invention that performs the previously detailed features described herein with regard to the module 676.
Within the blade 3308, in one embodiment, there is one RFT for each stripe received. The RFTs 3316A-E forward the received data to an SBIA 3320. In an alternative embodiment, one RFT provides a bridge for the XAUI links (e.g., 15 links: 10 links from the two switching blades, and 5 links to the SBIA). Such an implementation would likely require several dozen SERDES, since one reliable embodiment calls for four SERDES for each XAUI link. Furthermore, using a single RFT may introduce vulnerability to the system, as the one RFT would handle all traffic. Therefore, the illustrated embodiment of five RFT modules provides a logical division of the processing workload.
In one embodiment, the received serial data is converted to parallel data by the SERDES, as described elsewhere herein. Along with the data, a clock can be recovered from the incoming data stream. Thus, each SERDES will generate a clock recovered from the data. In one embodiment, the FIFOs 3354 and 3356 provide clock compensation for transmit and receiving data by adding and/or removing idle characters to/from the FIFO data stream. Both FIFOs 3354 and 3356 feed into MUX 3358. MUX 3358 combines the incoming traffic and splits the outgoing traffic and provides both data/control signals and flow control signals for redundant stripes.
In one embodiment, all traffic is routed into a symmetric architecture for uplink/downlink logic.
In one embodiment, any latency in the SERDES 3350, 3352, and 3374 is compensated for by throttling the traffic at the seven logic data queues described above.
Both BIA_TX 3364 and BP_TX 3366 modules arbitrate the read operation from the downlink/uplink ram, 3362 and 3368, respectively, and compose data for transmission.
RFT registers 3376 provide access to internal registers that can be managed from module 676. The operations of the modules of RFT 3300B depend on the parameters set in the registers of module 3376. In one embodiment, the module 3376 provides the module 676 with information about the status of the modules of the RFT 3300B.
The packet-encoding scheme is described in detail with respect to sections I and J above, and the striping scheme is likewise illustrated above.
In one embodiment, the maximum size of a payload for transfer in the backplane is 160 bytes (148 bytes of data maximum, 10 bytes of “Start of Cell” (SOC) control information, and 2 bytes reserved). A complete 160-byte transfer, in this embodiment, is referred to as a “cell”; as described elsewhere herein, cells are not limited by this embodiment. A cycle is a single 3.2 ns clock pulse (i.e., 312.5 MHz). The cell transfer can be accomplished over a number of such cycles.
The “state” byte can be assigned as shown in the following table:
It is noted that the information in this table is similar to that previously described herein.
K0 indicates “start of cell” that is the first block of a cell across all five stripes.
K1 indicates “end of packet” that can appear in any block of a cell. It is transparent to RFT and SXPNT.
K2 is used to encode the stripe synchronization sequence. Stripe synchronization requires a K2 character to be sent across all lanes and all stripes. In one embodiment, the special character is sent 112 times. After that, all stripes of the sync queues are marked as “in sync.” The number 112 is chosen because it matches, in this embodiment, the depth of the sync queues; thus, if there is any data left in the queue after the final K2 character is detected, this can be considered a stripe synchronization error. The present invention is not limited by this embodiment, and the sync queues can be of a different depth.
As one skilled in the relevant art would recognize based on the teachings described herein, the purpose of the special characters is to fill and flush the sync queues. In one embodiment, the SBIA will send out the K2 pattern 112 times.
In one embodiment, the state field is encoded with the source slot number as well as 1 bit used to tell whether the cell is toward the beginning or end of the sequence. For example, the state field can be encoded with the source slot number as well as 1 bit used to tell whether the cell is within the first 96 (of 112) transfers of the stripe sequence or whether it is among the last 16 (of 112) K2 transfers, after which valid data follows.
A routine for K2 sequence synchronization is illustrated in flow chart 3450, described as follows.
In step 3452, the source SBIA checks the RFT/SXPNT for a ready state.
In step 3454, the RFT/SXPNT returns its state. If it is ready, then the routine proceeds to step 3456. If it is not ready, then the routine returns to step 3452. In one embodiment, the source SBIA can re-check after a predetermined period of time.
In step 3456, the source SBIA sends Idle characters to the RFT/SXPNT. In one embodiment, the source SBIA sends enough idle characters to give the destination SBIA enough time to drain any remaining data from its buffers. In an embodiment, the source SBIA sends 768×2 words of idle characters.
In step 3458, the source SBIA sends special characters (K2) to the RFT/SXPNT. In one embodiment, the FIFOs in the RFT/SXPNT for the source slot should be empty by the time the K2s are sent. When the RFT receives the K2 sequence, if the FIFO is not empty, then it will treat the sequence as an error in the SBIA received data. Once the RFT receives the data successfully, it checks to see if the SXPNT is ready to receive the data before sending the K2 sequence. In one embodiment, once the K2 sequence is sent from the RFT to the SXPNT, it will not stop until the whole sequence is sent. In one embodiment, 112 words of K2 characters are sent.
Steps 3460, 3462, and 3464 illustrate the above-mentioned contingency.
In step 3466, the source SBIA sends more idle characters to the RFT/SXPNT in order to clear any remaining K2 characters from the buffers. In one embodiment, the source SBIA sends 512×2 words of idle characters.
In one embodiment, the routine 3450 is executed by the module 676 periodically in order to clear the FIFOs and re-synchronize the systems of the present invention.
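The source-side sequence of steps 3456 through 3466 can be sketched in C as shown below. The transmit hooks are hypothetical, and the placement of the first-96/last-16 marker bit in the state byte is an assumption; the idle and K2 word counts come from the embodiments described above.

    #include <stdint.h>

    #define IDLE_BEFORE (768 * 2)  /* idle words to drain the destination SBIA */
    #define K2_WORDS    112        /* matches the sync-queue depth             */
    #define K2_TAIL     16         /* last 16 transfers flag "data follows"    */
    #define IDLE_AFTER  (512 * 2)  /* idle words to clear residual K2s         */

    extern void send_idle_word(void);       /* hypothetical transmit hooks */
    extern void send_k2_word(uint8_t state);

    /* Source-side K2 stripe synchronization (steps 3456-3466), assuming the
     * state byte carries the source slot and one first-96/last-16 marker bit. */
    static void send_k2_sequence(uint8_t src_slot)
    {
        for (int i = 0; i < IDLE_BEFORE; i++) send_idle_word();   /* step 3456 */
        for (int i = 0; i < K2_WORDS; i++) {                      /* step 3458 */
            uint8_t tail = (i >= K2_WORDS - K2_TAIL) ? 0x80 : 0x00; /* assumed bit */
            send_k2_word((uint8_t)(src_slot | tail));
        }
        for (int i = 0; i < IDLE_AFTER; i++) send_idle_word();    /* step 3466 */
    }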
In embodiments of the present invention, both synchronous and asynchronous systems can be implemented. In a synchronous system, all the blades including fabric use the same clock source. The clock source can sit on the fabric and be distributed to the slave modules across the backplane so that the backplane will serve as a purely passive component.
In one embodiment of the redundant switch fabric system, two system clocks can be fed into one slave module from two switch fabric modules. The circuitry on the slave module would serve as the master clock. If the master clock fails in a fail-over event, then the other clock will become the master clock and the switching should be transparent for the components on the slave module.
In an asynchronous system, the system de-couples the clock domain between blades, which means every blade now has its own clock source. The motivation to design an asynchronous system is to eliminate the stringent jitter requirement imposed by a MUX delivered clock signal. However, it creates a new problem with respect to re-synchronization of the interface signals on both ends (at the slave modules).
For the SERDES signals, as previously described above, there is some built-in capability to do RX clock compensation when TX and RX are using different clock sources. However, enabling the RX compensation can increase the latency inside the SERDES.
In terms of the flow control signals mentioned above, the system implements control logic on the fabric to decode a time-division multiplexed (TDM) signal to parallel signal to eliminate the need of a central ready synchronization signal. A detailed embodiment is described below.
For a synchronous flow control implementation, the flow control information that passes between the SXPNT and RFT is TDM and requires a common sync signal to define the start of the time slot. A central synchronization signal that tracks the clock distribution increases the robustness of the system.
In one embodiment, there are two sets of flow control signals across the back plane. In other embodiments, more than two signals are used for flow control. In the former embodiment, the following ready signals can be implemented:
a) Receive Ready: each SBIA 3512 has a dedicated 1-bit ready signal for each RFT 3510A-E to stop a particular stripe from sending packets from each of the specific slots. Each RFT 3510A-E also sends a dedicated 1-bit ready signal to control the receiving of packets from the specific source SXPNT 3508A-E based on the available space in the internal receive FIFO (e.g., downlink ram); and
b) Transmit Ready: each SXPNT has a dedicated 2-bit ready signal for each RFT 3510A-E to signal the congestion situation at destination slots. Every SBIA 3512 also receives a 2-bit ready signal from each RFT 3510A-E to stop the traffic for the destination slots.
In one embodiment, a common synchronization signal is used to synchronize all of the transmit and receive ready signals between RFT/SXPNT and RFT/SBIA. For example, and not by way of limitation, the transmit ready signal uses 2 bits to encode 7 states in four slots (8 cycles), and the receive ready signal uses only one bit to encode 7 states in 7 slots (14 cycles). The common synchronization can be a synchronization pulse every 56 cycles, which is the least common multiple of 8 and 14. Of course, the present invention is not limited to these cycle counts, as one skilled in the relevant art(s) would recognize that different durations can be implemented.
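To make the cycle arithmetic concrete: the transmit ready frame repeats every 8 cycles and the receive ready frame every 14 cycles, so a pulse every lcm(8, 14) = 56 cycles realigns both frames at once. The following C sketch, with illustrative names, demonstrates that both slot counters return to zero at each sync pulse.

    #include <stdio.h>

    #define TX_PERIOD 8    /* transmit ready: 7 states, 2 bits, 4 slots x 2 cycles */
    #define RX_PERIOD 14   /* receive ready: 7 states, 1 bit, 7 slots x 2 cycles   */
    #define SYNC_PERIOD 56 /* least common multiple of 8 and 14                    */

    int main(void)
    {
        /* Both TDM frames start together at every sync pulse: cycle % 56 == 0
         * implies cycle % 8 == 0 and cycle % 14 == 0. */
        for (unsigned cycle = 0; cycle < 2 * SYNC_PERIOD; cycle++) {
            unsigned tx_slot = (cycle % TX_PERIOD) / 2; /* current 2-cycle TX slot */
            unsigned rx_slot = (cycle % RX_PERIOD) / 2; /* current 2-cycle RX slot */
            if (cycle % SYNC_PERIOD == 0)
                printf("cycle %3u: sync pulse, tx_slot=%u rx_slot=%u\n",
                       cycle, tx_slot, rx_slot);
        }
        return 0;
    }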
In one embodiment, the time slot for each state can be set at 78.125 MHz if that frequency is half of the core frequency, i.e., if the core frequency is at 156.25 MHz. The motivation to use a two-cycle approach for the time slot unit is that it gives a 2 cycle margin to the wire/cell delay between SBIA and SXPNT ready registers.
In a detailed embodiment, the ready state shows across the backplane three cycles later. Then the SBIA adds another two cycles of latency to the ready signal. Thus, the ready signal is latched inside the SBIA when the count is equal to 5. This ensures that the path is a true multi-cycle path from SXPNT to SBIA.
When the RFT is placed between the SBIA and the SXPNT, the flow control operation remains the same. However, the latency of SBIA/RFT and SXPNT/RFT is programmable to leave additional margins in the hardware trace. Thus, in embodiments of the present invention, an offset can be introduced to predetermine the latency levels of the system and thus better predict its operating parameters.
The flow control between SXPNTs 3708A-E and RFTs 3714A-E can be changed to asynchronous via control logic modules 3710 in blade 3702 and module 3712 in blade 3704. In one embodiment, the control logic module 3710 sits on the fabric and interfaces with the SXPNTs 3708A-E for the synchronous flow control interface. The control logic module 3710 can receive, interpret, and transmit various signals. In one embodiment, the module 3710 performs the following operations:
a) Decode a 2-bit transmit ready signal into a 7-bit ready signal from each SXPNT 3708A-E and combine them to generate a 7-bit transmit slot ready signal to each RFT 3714A-E.
By “combine” is meant that if any SXPNT is not ready for a specific slot, no RFT is allowed to send packets for that slot. This is different from the synchronous system, which has independent flow control between stripes; and
b) Receive the 7-bit receive slot ready signal from the RFT, which is likewise a combined ready signal from the 5 stripes, and encode it into a 1-bit receive ready signal for the 5 SXPNTs.
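A minimal sketch of the “combine” rule of operation (a): a destination slot is ready only if every SXPNT reports it ready, which reduces to a bitwise AND across the five decoded 7-bit vectors. The decode of the 2-bit TDM signal itself is omitted, and the names are illustrative.

    #include <stdint.h>

    #define NUM_SXPNT 5
    #define SLOT_MASK 0x7F /* 7 destination-slot ready bits */

    /* Operation (a): combine per-SXPNT 7-bit ready vectors into the 7-bit
     * transmit slot ready signal sent to each RFT. A slot is ready only if
     * no SXPNT is congested for it, hence the bitwise AND. */
    static uint8_t combine_tx_ready(const uint8_t decoded[NUM_SXPNT])
    {
        uint8_t ready = SLOT_MASK;
        for (int i = 0; i < NUM_SXPNT; i++)
            ready &= decoded[i];
        return ready & SLOT_MASK;
    }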
With respect to the RFT embodiments described herein, the error conditions that might occur with serial links and in the backplane, as well as preventive and recovery measures are described. Additionally, embodiments for fail-over procedures to change from one switch blade to another are described.
The RFT module of the present invention can be on the receiving end of the errors described above. The types of errors that can be detected by the RFT chips include:
a) Link error: This can be the result of a bit error or byte alignment error. In one embodiment, the SERDES should send an “/E” special character (error notification character) on the parallel data path to indicate the link error.
b) Lane synchronization error: This is a result of a synchronization FIFO overflow/underflow. In one embodiment, the SERDES should send a “GLINK” signal to indicate the receiving lane sync error.
c) Format error: This is a result of an incorrectly formatted cell. In one embodiment, a “/K0” special character that appears in lanes other than lane 0 would indicate the format error.
d) XPNT error: This is a wired-OR signal from the five SXPNT chips. In one embodiment, it indicates that an SXPNT has an error or problems with receiving data.
The RFT error-handling routines are consistent with the routines previously described herein.
In one embodiment, from SBIA to RFT: the RFT detects an error in the received data from the SBIA. The errors can include link error, lane synchronization error and format error. Once the error is detected, the following procedure (steps 1-4) can be applied to recover from the error.
1) Send an RFT error signal to the SBIA. The SBIA will stop sending data at a cell boundary and repeat the lane sync sequence until the RFT error is de-asserted by the RFT. In one embodiment, once de-asserted, the stripe synchronization sequence will be sent out for all slots, as described elsewhere herein.
2) Send AOP to all slots and flush the uplink RAM. When an error is detected in received data, the encoded destination slot information cannot be trusted. Thus, the abort is sent to all the destination slots to discard the packets sent earlier.
3) Wait for buffers to clear, and thus, the error to be clear.
4) Wait for Stripe Sync Sequence and SOP to start accepting data.
In one embodiment, from SXPNT to RFT: the RFT detects the error in the received data from one of the SXPNTs to which it is connected. The errors can include link error, lane synchronization error, and format error. Once one or more errors are detected, the following procedure can be applied to recover from the error(s).
1) Stop the SXPNT from receiving any more data at this slot.
2) Send AOP to the SBIA for all slots and flush the downlink RAM.
3) Wait for buffer to clear, and thus, the error to be clear.
4) Wait for Stripe Sync sequence and SOP to start accepting data.
In embodiments of the present invention, the RFT error signal notifies the SBIA that its RFT is under error condition so that the SBIA will stop packet transmission to RFT. This signal includes the following error notifications:
a) Cross point error: This is the wired-OR result from the 5 SXPNTs on the active switching module.
b) Fabric Active Error: The error occurs when the “Fabric Active” signals are either both active or both inactive at the same time.
c) The link error, lane sync error or format error detected in received data from SBIA.
In the event that an error is detected in or considered to be switching module related, the module 676 has the capability to disable the current switching module and enable the standby switching module to keep the system's processes active.
In one embodiment, when the RFT detects an error in the received data from the SXPNT, it can generate an interrupt signal to disrupt the flow control monitored within module 676. The module 676 then reads the status registers in the SXPNT and the RFT to determine what kind of error occurred and which routine to instantiate to correct for it.
The errors that can generate the interrupt signal can be predetermined by programming an interrupt mask register within the RFT. These errors can include, but are not limited to: a) core-to-SERDES sync FIFO overflow; b) SERDES-to-core sync FIFO overflow; c) link down; d) code error and/or format error; and e) XPNT error. Additional errors can be monitored and predetermined, as one skilled in the relevant art(s) would recognize based on at least the teachings described herein.
The module 676 collects the interrupt signals from all slave modules and, in one embodiment, the module 676 also collects another 2-bit “Fabric Present” signal to start its fail-over decision procedure. The “Fabric Present” signal can indicate that the corresponding switching module is in place. For example, if a user unplugs one switching module, then the corresponding “Fabric Present” will get de-asserted.
The module 676 uses the 2-bit “Fabric Active” to tell all slave modules which switch module to direct the traffic. In one embodiment, to initiate the fail-over procedure, the module 676 first resets the standby switch module and inverts the 2-bit signal.
In the redundant switching embodiments, the network switch has one active/working switching blade and one idle/standby switching blade. According to these embodiments, the RFT can send packets to the active blade and can send idle characters to the idle blade. When the module 676 detects the failure of the working switching blade, or the working switching blade is unplugged, the RFT will be notified of the fail-over situation by the system using the 2-bit “Fabric Active” signal. When the fail-over occurs, the new switching blade is assumed to be in the initial state after reset. The module 676 checks the status of the new switching blade before it issues a fail-over command.
The RFT always sends the lane sync sequence to the standby switching blade to maintain a healthy link. Thus, when fail-over occurs, no time is needed to activate the standby switching blade.
When fail-over occurs, the fail-over procedure can be performed to ensure a safe transition to another switching blade. The following are two example routines detailing specific embodiments of the routines described herein.
In one embodiment, from SBIA to RFT: the RFT detects the fail-over by monitoring “Fabric Active” signals:
1) Send RFT error signal to SBIA. SBIA will stop sending data at cell boundary and repeat lane sync sequence until RFT error signal is de-asserted. Once de-asserted, stripe sync sequence will be sent out for all slots.
2) Flush uplink RAM.
3) Wait for buffer to clear, and thus, the error to clear.
4) Wait for Stripe Sync sequence and SOP to start accepting data.
In one embodiment, from SXPNT to RFT: the RFT detects the fail-over by monitoring “Fabric Active” signals:
1) Send AOP to SBIA for all slots and flush downlink RAM. When SBIA receives AOP, it will discard received data before the stripes sync.
2) Wait for buffer to clear, and thus, the error to clear.
3) Wait for Stripe Sync sequence and SOP to start accepting data.
According to a feature of the present invention, a hitless switch-over of the blades of the system is possible. The word “hitless” means there is no packet loss due to the fabric change. Under normal conditions, a user might still want to change the fabric for better or more robust performance. In this case, the user would want to avoid any unnecessary packet drops. Additionally, another reason to use the upgrade procedure is to do fabric testing. At least two procedures can be used to perform the switch-over: debug and production.
In one embodiment, a first procedure allows the module 676 to control the switch-over event through register programming:
1) First, the module 676 sets the “Fabric enable mode” and “Hitless enable mode” bits in the Configuration register to ‘1’. This will allow the module 676 to enable the new fabric and hitless mode through register programming.
2) The module 676 sets the “Hitless Enable” bit in the RFT “Configuration” register. This will put the RFT in the mode for no-loss switch-over.
3) Then the module 676 disables the BIA receiver by setting bits in, for example, the RFT register accordingly. This will throttle the SBIA and prevent it from sending more cells to the RFT.
4) After a certain amount of time (long enough to drain all the packets in the SXPNT and RFT buffers; the module 676 can determine the duration, as described previously herein), the module 676 selects the new fabric by setting the “Fabric Active” bits in the RFT register.
5) The module 676 then clears the bits so that the SBIA can continue (be re-enabled) sending new cells to the RFT. The RFT will forward the cells to the new fabric without dropping any data.
6) The module 676 clears the “Hitless Enable” bit to put the RFT back in fail-over mode.
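The first procedure amounts to an ordered sequence of register accesses, restated below as a C sketch. The register offsets and bit positions are loud assumptions, since the specification gives only the bit names; only the ordering of steps 1 through 6 is taken from the procedure above.

    #include <stdint.h>

    /* Hypothetical register model: offsets and bit positions are assumptions;
     * only the bit names come from the specification. */
    extern void     reg_write(uint32_t reg, uint32_t val);
    extern uint32_t reg_read(uint32_t reg);
    extern void     wait_cycles(uint32_t n);

    #define RFT_CONFIG        0x00
    #define FABRIC_EN_MODE    (1u << 0)  /* "Fabric enable mode"  */
    #define HITLESS_EN_MODE   (1u << 1)  /* "Hitless enable mode" */
    #define HITLESS_ENABLE    (1u << 2)  /* "Hitless Enable"      */
    #define RFT_RX_DISABLE    0x04       /* throttle the SBIA     */
    #define RFT_FABRIC_ACTIVE 0x08       /* "Fabric Active" bits  */

    static void hitless_switch_over(uint32_t new_fabric, uint32_t drain_cycles)
    {
        uint32_t cfg = reg_read(RFT_CONFIG);
        reg_write(RFT_CONFIG, cfg | FABRIC_EN_MODE | HITLESS_EN_MODE); /* step 1 */
        reg_write(RFT_CONFIG, reg_read(RFT_CONFIG) | HITLESS_ENABLE);  /* step 2 */
        reg_write(RFT_RX_DISABLE, 1);          /* step 3: throttle the SBIA   */
        wait_cycles(drain_cycles);             /* step 4: drain SXPNT/RFT     */
        reg_write(RFT_FABRIC_ACTIVE, new_fabric);
        reg_write(RFT_RX_DISABLE, 0);          /* step 5: resume SBIA traffic */
        reg_write(RFT_CONFIG, reg_read(RFT_CONFIG) & ~HITLESS_ENABLE); /* step 6 */
    }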
In another embodiment, the following routine is used as the second procedure. In one embodiment, the switch-over timer to drain packets in the RFT/SXPNT buffers is located in the RFT, and the SBIA traffic throttling is done automatically, as described above. In this embodiment, the module 676 does not need to intervene:
1) First, in one hardware embodiment of the present invention, a command input pin can be driven “high” to enable the hitless switch-over. It is also noted that, in one software embodiment, a “Hitless enable mode” bit and/or “switch delay enable” bit in the Configuration register can also be set to enable the hitless switch-over.
2) Prior to any throttling, the module 676 can determine the value of “Switch Delay Counter” register. This is used to program the switch-over timer when “Fabric Active” signals toggled.
3) When the “Fabric Active” input pin is toggled in all the RFTs, each RFT throttles the SBIA traffic and continues sending packets to the old switching fabric until the switch-over timer expires.
4) After the timer expires, both RFT and SXPNT should have sent all the packets in the internal buffers. The RFT will activate the new fabric and start sending/receiving packets to/from the new switching fabric.
5) In the above embodiment, the command input pin is driven “low” to disable hitless switch-over.
It is noted that in both fail-over and switch-over cases, it is suggested that the module 676 reset the new fabric before the change. Because the SXPNT will generate the AOP for all slots after the reset (because the links go down), the module 676 can allow enough time before it changes the switch fabric.
U. Reset and Recovery Procedures
The following reset procedure will be followed to get the SERDES in sync. An external reset will be asserted to the SERDES core when a reset is applied to the core. The duration of the reset pulse for the SERDES need not be longer than 10 cycles. After the reset pulse, the transmitter and the receiver of the SERDES will sync up to each other through a defined procedure. It is assumed that the SERDES will be in sync once the core comes out of reset. For this reason, the reset pulse for the core must be considerably longer than the reset pulse for the SERDES core.
The core will rely on software interaction to get the core in sync. Once the BIA 302, 600, IBT 304, and XPNT 202 come out of reset, they will continuously send the lane synchronization sequence. The receiver will set a software-visible bit stating that its lane is in sync. Once software determines that the lanes are in sync, it will try to get the stripes in sync. This is done through software, which enables continuous sending of the stripe synchronization sequence. Once again, the receiving side of the BIA 302 will set a bit stating that it is in sync with a particular source slot. Once software determines this, it will enable transmit for the BIA 302, XPNT 202, and IBT 304.
The management software residing on the management blade is in charge of system maintenance work. According to embodiments of the present invention, the module 676 provides instantiation and access for the management software. In an additional embodiment, the management blade includes a dedicated reset signal for each slave module and switching module.
In one embodiment, the following reset procedure can be performed at system reboot (an illustrative sketch follows the list):
1) An external reset will be asserted to the SERDES core when a reset is applied to the core. The duration of the reset pulse for the SERDES needs to be longer than 32 cycles (for a 156 MHz clock).
2) After the reset pulse, the transmitter and the receiver of the SERDES will sync up to each other through a defined procedure. It can be assumed that the SERDES will be in sync once the core comes out of reset. For this reason, the reset pulse for the core must be considerably longer than the reset pulse for the SERDES core.
3) The core will rely on the module 676 for interaction to get the core in sync. Once the BIA, IBT, and XPNT come out of reset, they will continuously send the lane synchronization sequence.
4) The SERDES makes the lane synchronization status visible to the module 676.
5) Once the module 676 determines that the lanes are in sync, it will try to get the stripes in sync. This is done through software that enables continuous sending of the stripe synchronization sequence.
6) Once again, the receiving side of the BIA will set a bit stating that it is in sync with a particular source slot.
7) Once the module 676 determines this, it will enable transmit for the BIA, XPNT, and IBT.
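A possible software rendering of this reboot sequence is sketched below in C. The accessor functions, the lane and slot counts, and the polling interfaces are assumptions; only the ordering of the seven steps follows the procedure above.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_LANES 5   /* assumed lane count, for illustration only     */
#define NUM_SLOTS 8   /* assumed source-slot count, also illustrative  */

extern void assert_serdes_reset(bool on);   /* hypothetical board stubs */
extern void delay_cycles(uint32_t n);
extern bool lane_in_sync(int lane);
extern void enable_stripe_sync_tx(void);
extern bool stripe_in_sync(int src_slot);
extern void enable_transmit(void);          /* enables BIA, XPNT, and IBT */

void reboot_sync_sequence(void)
{
    /* 1) Hold the SERDES in reset for more than 32 cycles (156 MHz clock). */
    assert_serdes_reset(true);
    delay_cycles(33);
    assert_serdes_reset(false);

    /* 2)-4) After the reset pulse, the cores continuously send the lane
     * synchronization sequence; poll the SERDES-visible status until
     * every lane reports that it is in sync. */
    for (int lane = 0; lane < NUM_LANES; lane++)
        while (!lane_in_sync(lane))
            ;

    /* 5) Enable continuous sending of the stripe synchronization sequence. */
    enable_stripe_sync_tx();

    /* 6) Wait until the receiving side of the BIA reports sync with each
     * source slot. */
    for (int slot = 0; slot < NUM_SLOTS; slot++)
        while (!stripe_in_sync(slot))
            ;

    /* 7) Enable transmit for the BIA, XPNT, and IBT. */
    enable_transmit();
}
```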
Similar to the SBIA/SXPNT reset procedure, the RFT allows the module 676 to reset each of its three 10 Gbps SERDES individually. When a SERDES is reset, the link will go down and the data received from that SERDES will be corrupted. The error recovery process can be the same as the link error handling described previously.
To reduce the packet loss due to reset, the following procedure will be applied (an illustrative sketch follows the list):
a) Stop sending data to the transmitting SERDES at a cell boundary.
b) Send the lane sync sequence during the SERDES reset.
c) Resume sending data once the SERDES is out of the reset state.
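A minimal C sketch of these three steps, under the same caveat as the earlier examples (every helper function is hypothetical), might be:

```c
#include <stdbool.h>

extern void stop_tx_at_cell_boundary(int serdes);  /* hypothetical stubs */
extern void start_serdes_reset(int serdes);
extern bool serdes_in_reset(int serdes);
extern void send_lane_sync(int serdes);
extern void resume_tx(int serdes);

void reset_one_serdes(int serdes)
{
    /* a) Stop sending data to the transmitting SERDES at a cell boundary. */
    stop_tx_at_cell_boundary(serdes);

    /* b) Send the lane sync sequence for as long as the SERDES is in reset. */
    start_serdes_reset(serdes);
    while (serdes_in_reset(serdes))
        send_lane_sync(serdes);

    /* c) Resume sending data once the SERDES is out of the reset state. */
    resume_tx(serdes);
}
```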
The RFT has three SERDES but, in one embodiment, only two SERDES forward packets, with the third SERDES in standby mode. If the user installs only one switching fabric in the chassis, the redundant SERDES does not have a corresponding SERDES transceiver, and thus the link for the redundant SERDES will always be down. If the user does not plan to put a second switching fabric in the chassis, the user can power down the redundant SERDES to save energy, cycles, and processing overhead. To do this, the module 676 can access the “Power Control” register within the registers of the RFT (a sketch follows).
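As a final illustration, powering down the redundant SERDES through the RFT “Power Control” register might reduce to a single register write. The register offset and bit position below are assumptions; only the register name comes from the text.

```c
#include <stdint.h>

extern void rft_write(uint32_t reg, uint32_t val);  /* hypothetical accessor */

enum { RFT_POWER_CONTROL = 0x10 };                  /* illustrative offset  */
#define PWR_DOWN_REDUNDANT_SERDES (1u << 2)         /* assumed bit position */

void power_down_redundant_serdes(void)
{
    /* Power down the standby SERDES when no second fabric is installed. */
    rft_write(RFT_POWER_CONTROL, PWR_DOWN_REDUNDANT_SERDES);
}
```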
Functionality described above with respect to the operation of switch 100 can be implemented in control logic. Such control logic can be implemented in software, firmware, hardware or any combination thereof.
While specific embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined in the appended claims. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
This application is a continuation application of U.S. application Ser. No. 12/400,594, filed Mar. 9, 2009, which is a continuation application of U.S. application Ser. No. 09/988,066, filed Nov. 16, 2001, which is a continuation-in-part application of U.S. application Ser. No. 09/855,038, filed May 15, 2001. U.S. application Ser. No. 09/988,066 claims the benefit of provisional U.S. Application No. 60/249,871, filed Nov. 17, 2000, and U.S. application Ser. No. 09/855,038 claims the benefit of provisional U.S. Application No. 60/249,871, filed Nov. 17, 2000. All of the foregoing applications are incorporated by reference herein in their entireties. This patent application is potentially related to the following co-pending U.S. utility patent applications, which are all herein incorporated by reference in their entireties: “High-Performance Network Switch,” Ser. No. 09/855,031, filed May 15, 2001; “Method and System for Encoding Striped Cells,” Ser. No. 09/855,024, filed May 15, 2001; “Method and System for Translating Data Formats,” Ser. No. 09/855,025, filed May 15, 2001; and “Network Switch Cross Point,” Ser. No. 09/855,015, filed May 15, 2001.