Multipoint-to-multipoint echo processing in a network switch

Information

  • Patent Grant
  • 5933429
  • Patent Number
    5,933,429
  • Date Filed
    Tuesday, April 29, 1997
  • Date Issued
    Tuesday, August 3, 1999
Abstract
An apparatus and a method are disclosed for unencumbering valuable switching resources in a network switch involved in a multipoint-to-multipoint switching scenario. The network switch includes an input processing port that is connected to a plurality of input links, and an output processing port that is connected to a plurality of output links. A data cell received on an input link is processed by the input processing port by appending a link number, a port number, and a connection identification code associated with the input processing port to the data cell. The data cell is then transferred to the output processing port where it is processed by comparing the appended link number, port number, and connection identification code with those associated with the output processing port. The data cell is then stored in a data buffering queue in the output processing port according to a matching scheme.
Description

FIELD OF THE INVENTION
The present invention is generally related to network switching and, more particularly, to an apparatus and a method for unencumbering valuable switching resources in a network switch involved in a multipoint-to-multipoint switching scenario.
BACKGROUND OF THE INVENTION
Telecommunications networks such as asynchronous transfer mode (ATM) networks are used for the transfer of audio, video, and other data. ATM networks deliver data by routing data units such as ATM cells from a source to a destination through switches. Switches typically include multiple input/output (I/O) ports through which ATM cells are received and transmitted. The appropriate output port to which a received ATM cell is routed, and from which it is thereafter transmitted, is determined based upon the ATM cell header.
In a multipoint-to-multipoint switching scenario, ATM cells from a variety of sources are transferred from multiple input queues to multiple output queues within a switch. In such a scenario, it is often beneficial to eliminate duplicate processing of ATM cells or to otherwise prevent the flow of certain ATM cells through a switch by selectively screening ATM cells before allowing them to be transferred through the switch. By only allowing certain ATM cells to be transferred through the switch, valuable switching resources become unencumbered. Accordingly, it would be desirable to devise a scheme whereby valuable switching resources in a network switch may become unencumbered in a multipoint-to-multipoint switching scenario.
SUMMARY OF THE INVENTION
An apparatus and a method are disclosed for unencumbering valuable switching resources in a network switch involved in a multipoint-to-multipoint switching scenario. The network switch includes a switch fabric, an input processing port connected between a plurality of input links and the switch fabric and having a plurality of data buffering queues, and an output processing port connected between the switch fabric and a plurality of output links and having a plurality of data buffering queues. All of the data buffering queues have a connection identification code, and the data buffering queues in the output processing port have a data cell processing code. The input processing port processes a data cell received on one of the input links by appending to the data cell a link number indicating the input link where the data cell arrived, a port number indicating the input processing port, and a connection identification code associated with a data buffering queue in the input processing port where the data cell will be buffered. The output processing port processes a data cell processed by the input processing port and transferred to the output processing port through the switch fabric by comparing the link number to a link number of a link connected to the output processing port, the port number to a port number of the output processing port, and the connection identification code to a connection identification code associated with a data buffering queue in the output processing port. The data cell is then stored in the data buffering queue in the output processing port according to a matching scheme between the link numbers, the port numbers, and the connection identification codes as dictated by the value of the data cell processing code.
For a first value of the data cell processing code, the output processing port matching scheme requires that the link numbers, the port numbers, and the connection identification codes match in order for the data cell to be stored in the data buffering queue in the output processing port. For a second value of the data cell processing code, the output processing port matching scheme requires that the link numbers, the port numbers, and the connection identification codes do not match in order for the data cell to be stored in the data buffering queue in the output processing port. This mechanism allows each output queue to receive a unique set of ATM cells from a variety of sources, wherein the ATM cells are transferred from multiple input queues to each output queue.
From the above descriptive summary it is apparent how the apparatus of the present invention can save valuable switching resources in a network switch.
Accordingly, the primary object of the present invention is to provide an apparatus and a method for unencumbering valuable switching resources in a network switch involved in a multipoint-to-multipoint switching scenario.
The above-stated primary object, as well as other objects, features, and advantages, of the present invention will become readily apparent from the following detailed description which is to be read in conjunction with the appended drawings.





BRIEF DESCRIPTION OF THE DRAWINGS
In order to facilitate a fuller understanding of the present invention, reference is now made to the appended drawings. These drawings should not be construed as limiting the present invention, but are intended to be exemplary only.
FIG. 1 is a block diagram of a network switch;
FIG. 2 illustrates the structure of an input queue;
FIG. 3 illustrates the structure of a scheduling list;
FIG. 4 shows the standard data bus format of a data cell;
FIG. 5 shows the internal switch data cell format of a converted data cell;
FIG. 6 shows the format of an input queue descriptor;
FIG. 7 shows the format of an output queue descriptor;
FIG. 8 contains a table indicating the different echo field codes and the corresponding output port processor functions associated with those codes; and
FIG. 9 shows a "No Echo" multipoint-to-multipoint switching scenario.





DETAILED DESCRIPTION OF THE PRESENT INVENTION
Referring to FIG. 1, there is shown a network switch 1 comprising a Data Crossbar 10, a Bandwidth Arbiter (BA) 12, a plurality of input port processors 14, a plurality of output port processors 16, and a plurality of Multipoint Topology Controllers (MTC) 18. The Data Crossbar 10, which may be an N×N crosspoint switch, is used for data cell transport and, in this particular embodiment, yields N×670 Mbps throughput. The BA 12 controls switch interconnections, dynamically schedules momentarily unused bandwidth, and resolves multipoint-to-point bandwidth contention. Each input port processor 14 schedules the transmission of data cells to the Data Crossbar 10 from multiple connections. Each output port processor 16 receives data cells from the Data Crossbar 10 and organizes those data cells onto output links.
In order to traverse the switch 1, a data cell 22 first enters the switch 1 on a link 24 to an input port processor 14 and is buffered in a queue 26 of input buffers. The data cell 22 is then transmitted from the queue 26 of input buffers through the Data Crossbar 10 to a queue 28 of output buffers in an output port processor 16. From the queue 28 of output buffers, the data cell 22 is transmitted onto a link 30 outside of the switch 1 to, for example, another switch.
To facilitate traversal of the switch 1, each input port processor 14 includes a cell buffer RAM 32 and each output port processor 16 includes a cell buffer RAM 34. The cell buffer RAMs 32 and 34 are organized into the respective input and output queues 26 and 28. All data cells 22 in a connection must pass through a unique input queue 26 and a unique output queue 28 for the life of the connection. The queues 26 and 28 thus preserve cell ordering. This strategy also allows quality of service ("QoS") guarantees on a per connection basis.
Three communication paths are used to facilitate traversal of the switch 1 via probe and feedback messages: a Probe Crossbar 42, an XOFF Crossbar 44, and an XON Crossbar 46. The Probe Crossbar 42, which in this particular embodiment is an N×N crosspoint switch, is used to transmit a multiqueue number from an MTC 18 to an output port processor 16. Each input port processor 14 includes a plurality of scheduling lists 47, each of which is a circular list containing input queue numbers for a particular connection. Each multiqueue number is derived from information provided to the MTC 18 from a scheduling list 47 in an input port processor 14. A multiqueue number identifies one or more output queues 28 to which a data cell may be transmitted when making a connection. An output port processor 16 uses the multiqueue number to direct a request message probe to the appropriate output queue or queues 28 and thereby determine if there are enough output buffers available in the output queue or queues 28 for the data cell.
The XOFF Crossbar 44, which in this particular embodiment is an N×N crosspoint switch, is used to communicate "DO NOT SEND" type feedback messages from an output port processor 16 to an input port processor 14. The XOFF feedback messages are asserted to halt the transmission of request message probes through the Probe Crossbar 42 from an input port processor 14 to an output port processor 16, and thus put a scheduling list 47 within the receiving input port processor 14 in an XOFF state, meaning that the scheduling list 47 cannot be used to provide a multiqueue number. The scheduling list 47 remains in an XOFF state until receiving an XON message from the output port processor 16, as described below. An input port processor 14 responds to an asserted XOFF feedback message by modifying XOFF state bits in a descriptor of the scheduling list 47. The XOFF state bits prevent the input port processor 14 from attempting to send a request message probe from the input port processor 14 to the output port processor 16 until notified by the output port processor 16 that output buffers are available for a corresponding connection.
The "DO NOT SEND" type feedback messages also halt the transmission of data cells from an input port processor 14 to an output port processor 16 when sufficient buffer space is not available to receive data cells in the output port processor 16. In such a case, an input port processor 14 will not transmit any data cells through the Data Crossbar 10. An idle cell, containing a complemented cyclic redundancy check (CRC) calculation, is transmitted instead.
The XON Crossbar 46, which in this particular embodiment is an N×N crosspoint switch, is used to communicate "ENABLE SEND" type feedback messages from an output port processor 16 to an input port processor 14. More particularly, the XON Crossbar 46 communicates an XON feedback message from an output port processor 16 to an input port processor 14. When an XOFF feedback message has been asserted by an output port processor 16 in response to a request probe message from an input port processor 14, the output port processor 16 sets a state bit in a queue descriptor of a corresponding output queue 28. When the number of data cells in that output queue 28 drops below an XON threshold, an XON message is sent from that output port processor 16 to the input port processor 14. The XON message enables the scheduling list 47 in the input port processor 14 to be used in the sending of request probe messages, and hence data cells.
The Probe & XOFF communication paths operate in a pipelined fashion. First, an input port processor 14 selects a scheduling list 47, and information associated with that scheduling list 47 is used to determine the output port processor 16, or the output queue 28, to which a data cell will be transmitted. More particularly, a multiqueue number, which is derived from information provided to an MTC 18 from a scheduling list 47 in an input port processor 14, is transmitted from the MTC 18 to one or more output port processors 16 using the Probe Crossbar 42. Each output port processor 16 then tests for buffer availability and asserts a "DO NOT SEND" type feedback message through the XOFF Crossbar 44 if output buffering is not available for that connection. If output buffering is available for that connection, the input port processor 14 transmits a data cell to one or more output queues 28 through the Data Crossbar 10.
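The pipeline can be pictured with a small C sketch of the per-connection flow-control state; the structure fields, the threshold test, and the function boundaries are assumptions chosen for clarity, not the switch's actual logic.

```c
#include <stdbool.h>

/* Hypothetical per-connection state on the input side (one scheduling list). */
typedef struct {
    bool xoff;            /* set by an XOFF feedback message, cleared by XON */
} sched_list_t;

/* Hypothetical per-connection state on the output side (one output queue). */
typedef struct {
    int  cells;           /* cells currently buffered                  */
    int  capacity;        /* buffers reserved for this connection      */
    int  xon_threshold;   /* send XON when occupancy drops below this  */
    bool xoff_sent;       /* state bit set when XOFF was asserted      */
} out_queue_t;

/* Probe phase: the output port tests buffer availability and either
 * accepts the request or asserts an XOFF feedback message. */
static bool probe(out_queue_t *oq, sched_list_t *sl)
{
    if (oq->cells >= oq->capacity) {
        oq->xoff_sent = true;   /* "DO NOT SEND" feedback              */
        sl->xoff = true;        /* scheduling list enters XOFF state   */
        return false;
    }
    return true;                /* input port may launch the cell      */
}

/* Data phase: launch a cell only if the scheduling list is enabled. */
static bool send_cell(out_queue_t *oq, sched_list_t *sl)
{
    if (sl->xoff || !probe(oq, sl))
        return false;           /* an idle cell would be sent instead  */
    oq->cells++;
    return true;
}

/* Output-side dequeue: when occupancy falls below the XON threshold,
 * an XON message re-enables the scheduling list. */
static void dequeue_to_link(out_queue_t *oq, sched_list_t *sl)
{
    if (oq->cells > 0)
        oq->cells--;
    if (oq->xoff_sent && oq->cells < oq->xon_threshold) {
        oq->xoff_sent = false;
        sl->xoff = false;       /* XON feedback message                */
    }
}
```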
Each input port processor 14 within the switch 1 also includes a Switch Allocation Table (SAT) 20 for mapping bandwidth allocation. SAT's 20 are the basic mechanism behind the scheduling of data cells. Each SAT 20 includes a plurality of sequentially ordered cell time slots 50 and a pointer 52 which is always directed to one of the cell time slots 50. All of the pointers 52 in the switch 1 are synchronized such that at any given point in time each of the pointers 52 is directed to the same cell time slot 50 in the respective SAT 20 with which the pointer 52 is associated, e.g., the first cell time slot. In operation, the pointers 52 are advanced in lock-step, with each cell time slot 50 being active for 32 clock cycles at 50 MHz. When a pointer 52 is directed toward a cell time slot 50, an input port processor 14 uses the corresponding entry 51 in the cell time slot 50 to obtain a data cell for launching into the Data Crossbar 10.
If valid, the contents of each SAT entry 51 point to a scheduling list 47. Each (non-empty) entry in a scheduling list 47 consists of an input queue number. Each input queue number points to an input queue descriptor which contains state information that is specific to a particular connection. Each input queue descriptor, in turn, points to the head and the tail of a corresponding input queue 26, which contains data cells for transmission through the Data Crossbar 10.
If a SAT entry 51 does not contain a pointer to a scheduling list, i.e. the SAT entry 51 is set to zero, then the corresponding cell time slot 50 in the SAT 20 has not been allocated and that cell time slot 50 is available for dynamic bandwidth. Also, if a SAT entry 51 does contain a pointer to a scheduling list 47 but no input queue number is listed in the scheduling list 47, then there are no data cells presently available for transmission and the corresponding cell time slot 50 is also available for dynamic bandwidth. Any bandwidth that has not been allocated is referred to as dynamic bandwidth, which is granted to certain types of connections by the BA 12 so as to increase the efficiency of the switch 1.
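By way of illustration, the chain from a SAT cell time slot to a scheduling list to an input queue, and the fallback to dynamic bandwidth, might be modelled as in the C sketch below; the table depth, list size, and field names are assumptions, while the 640 ns slot time simply follows from 32 clock cycles at 50 MHz.

```c
#include <stddef.h>

#define SAT_SLOTS 1024              /* assumed table depth */

typedef struct {
    int head_buf, tail_buf;         /* buffer numbers in cell buffer RAM */
} input_queue_t;

typedef struct {
    int queue_numbers[8];           /* circular list of input queue numbers */
    int head, count;
} sched_list_t;

typedef struct {
    sched_list_t *entry[SAT_SLOTS]; /* NULL (zero) means slot not allocated */
    int pointer;                    /* advanced in lock-step, one slot per
                                       32 clock cycles at 50 MHz (640 ns)   */
} sat_t;

/* Returns the input queue scheduled in the current slot, or NULL if the
 * slot is unallocated/empty and therefore available as dynamic bandwidth. */
static input_queue_t *sat_current_queue(sat_t *sat, input_queue_t *queues)
{
    sched_list_t *sl = sat->entry[sat->pointer];
    if (sl == NULL || sl->count == 0)
        return NULL;                            /* dynamic bandwidth       */
    int qnum = sl->queue_numbers[sl->head];     /* head of circular list   */
    return &queues[qnum];
}

/* Advance all SAT pointers in lock-step at each cell time. */
static void sat_advance(sat_t *sat)
{
    sat->pointer = (sat->pointer + 1) % SAT_SLOTS;
}
```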
The switch 1 is configured to allow connections having different quality of service attributes to be managed in such a way that there is no interference between the characteristics of any connection with any other connection. In order to achieve this capability, an input port processor 14 manages each connection with a set of data structures that are unique for each connection.
There are two major data structures used by an input port processor 14 for managing different resources. One data structure is the input queue 26 and the other data structure is the scheduling list 47. In general, an input queue 26 is used to manage buffers. An input queue 26 consists of a group of one or more buffers organized as a FIFO and manipulated as a linked list structure using pointers. Incoming data cells 22 are added (enqueued) to the tail of an input queue 26. Data cells which are sent to the Data Crossbar 10 are removed (dequeued) from the head of an input queue 26. The ordering of data cells is always maintained. For a given connection, the sequence of data cells that are sent to the Data Crossbar 10 is identical to that in which they arrived at an input port processor 14, although the time interval between departing data cells may be different than the time interval between arriving data cells. FIG. 2 illustrates the structure of an input queue 26.
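For illustration only, an input queue can be sketched in C as a linked-list FIFO with head and tail pointers; the structure names and the 48-byte payload are assumptions, and buffer numbers in the real cell buffer RAM 32 would replace the raw pointers used here.

```c
#include <stddef.h>

/* One buffered cell; the next pointer makes the queue a linked list. */
typedef struct cell_buf {
    unsigned char payload[48];
    struct cell_buf *next;
} cell_buf_t;

/* An input queue: a FIFO of cell buffers addressed by head/tail pointers. */
typedef struct {
    cell_buf_t *head;
    cell_buf_t *tail;
} input_queue_t;

/* Enqueue at the tail (cell arrival). */
static void iq_enqueue(input_queue_t *q, cell_buf_t *c)
{
    c->next = NULL;
    if (q->tail)
        q->tail->next = c;
    else
        q->head = c;
    q->tail = c;
}

/* Dequeue from the head (cell launched into the Data Crossbar);
 * ordering is preserved because removal is strictly FIFO. */
static cell_buf_t *iq_dequeue(input_queue_t *q)
{
    cell_buf_t *c = q->head;
    if (c) {
        q->head = c->next;
        if (q->head == NULL)
            q->tail = NULL;
    }
    return c;
}
```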
A scheduling list 47 is used to manage bandwidth. A scheduling list 47 consists of one or more input queue numbers organized as a circular list. As with input queues 26, scheduling lists 47 are manipulated as a linked list structure using pointers. Input queue numbers are added to the tail of a scheduling list 47 and removed from the head of a scheduling list 47. An input queue number can only appear once on any given scheduling list 47. In addition to being added and removed, input queue numbers can be recirculated on a scheduling list 47 by removing the input queue number from the head of the scheduling list 47 and then adding the removed input queue number back onto the tail of the scheduling list 47. This results in round-robin servicing of input queues 26 on a particular scheduling list 47. FIG. 3 illustrates the structure of a scheduling list 47.
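A similar sketch of a scheduling list, again purely illustrative, shows how recirculating the head entry to the tail produces round-robin service; the array-based ring and the fixed list size are simplifications of the linked-list structure described above.

```c
#define MAX_LIST 64

/* A scheduling list: a circular list of input queue numbers. */
typedef struct {
    int queue_numbers[MAX_LIST];
    int head, tail, count;
} sched_list_t;

/* Add an input queue number at the tail. */
static void sl_add(sched_list_t *sl, int qnum)
{
    sl->queue_numbers[sl->tail] = qnum;
    sl->tail = (sl->tail + 1) % MAX_LIST;
    sl->count++;
}

/* Remove the input queue number at the head. */
static int sl_remove(sched_list_t *sl)
{
    int qnum = sl->queue_numbers[sl->head];
    sl->head = (sl->head + 1) % MAX_LIST;
    sl->count--;
    return qnum;
}

/* Serve the head queue and recirculate its number to the tail,
 * yielding round-robin service of the queues on the list. */
static int sl_recirculate(sched_list_t *sl)
{
    int qnum = sl_remove(sl);
    sl_add(sl, qnum);
    return qnum;
}
```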
When a data cell 22 is received at an input port processor 14, the first action performed by the input port processor 14 is to check the header of the data cell for errors and then to check that the data cell is associated with a valid connection. Cell header integrity is verified by computing a Header Error Check (HEC) on bytes in the header of a received data cell and then comparing the computed HEC to the HEC field in the header of the received data cell. If the computed HEC and the HEC field do not match, then there is a header error and the data cell will be dropped.
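The header check can be illustrated with a short C routine, assuming the standard ATM HEC (a CRC-8 over the first four header octets with generator x^8 + x^2 + x + 1 and a 0x55 coset, per ITU-T I.432); the patent does not spell out the HEC computation, so this is only a conventional reading of it.

```c
#include <stdint.h>
#include <stdbool.h>

/* CRC-8 over the first four header octets, remainder XORed with 0x55,
 * shown here only to illustrate the header integrity check. */
static uint8_t atm_hec(const uint8_t hdr[4])
{
    uint8_t crc = 0;
    for (int i = 0; i < 4; i++) {
        crc ^= hdr[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc ^ 0x55;
}

/* A cell whose computed HEC does not match the fifth header octet is dropped. */
static bool header_ok(const uint8_t header[5])
{
    return atm_hec(header) == header[4];
}
```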
For each incoming data cell, an input port processor 14 will use VPI/VCI fields specified in the header of the data cell as an index into a translation table in the input port processor 14. The translation table correlates valid connections and input queue numbers. The input port processor 14 first checks to see if the data cell belongs to a valid connection; i.e. one that has been set up by switch control software. If the connection is valid, then the data cell will be assigned an input queue number from the translation table. If the connection is not valid, then the data cell will either be dropped or be assigned an exception input queue number from the translation table, which results in further processing of the data cell.
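A hypothetical translation table lookup might look like the following C sketch; the entry layout, the linear search, and the exception queue number are all illustrative assumptions.

```c
#include <stdint.h>

#define DROP_CELL        (-1)
#define EXCEPTION_QUEUE  0      /* assumed exception input queue number */

/* Hypothetical translation table entry: maps a VPI/VCI pair to an input
 * queue number for a connection set up by switch control software. */
typedef struct {
    uint16_t vpi;
    uint16_t vci;
    int      input_queue;       /* input queue number for a valid connection */
    int      valid;
} xlate_entry_t;

/* Linear search for clarity; a real port processor would index the table
 * directly using the VPI/VCI fields. */
static int translate(const xlate_entry_t *tbl, int n,
                     uint16_t vpi, uint16_t vci, int drop_invalid)
{
    for (int i = 0; i < n; i++)
        if (tbl[i].valid && tbl[i].vpi == vpi && tbl[i].vci == vci)
            return tbl[i].input_queue;
    /* Invalid connection: drop, or hand off for further processing. */
    return drop_invalid ? DROP_CELL : EXCEPTION_QUEUE;
}
```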
While the data cell is being checked, it is converted from a standard data bus format into an internal switch data cell format. FIG. 4 shows the standard data bus format of a data cell. FIG. 5 shows the internal switch data cell format of a converted data cell.
As previously described, an input queue number is used to point to a queue descriptor, which is a data structure containing state information that is unique to a particular connection. There is a queue descriptor for each queue in the switch 1; i.e. for both the input queues 26 in the input port processor 14 and the output queues 28 in the output port processor 16. The queue descriptors are maintained by switch control software. FIG. 6 shows the format of an input queue descriptor. FIG. 7 shows the format of an output queue descriptor.
After a data cell is assigned an input queue number, the input port processor 14 will look at the corresponding queue descriptor for further information on how to process the data cell. The input port processor 14 will first try to assign a buffer for the data cell. If a buffer is available, then the data cell buffer number is enqueued to the tail of the queue and the data cell is written out to the cell buffer RAM 32. If there is no buffer available, the data cell is dropped and a statistic is updated.
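As a rough illustration of that admission step, a minimal C sketch is given below; the free-buffer counter, the stand-in queue type, and the drop statistic are assumptions made for brevity rather than the port processor's actual bookkeeping.

```c
/* Stand-in for an input queue; only the depth matters for this sketch. */
typedef struct { int depth; } queue_t;

typedef struct {
    int  free_buffers;    /* free cell buffers in the cell buffer RAM */
    long drops;           /* statistic updated when a cell is dropped */
} buffer_pool_t;

/* Returns 1 if a buffer was assigned and the cell enqueued, 0 if dropped. */
static int admit_cell(buffer_pool_t *pool, queue_t *q)
{
    if (pool->free_buffers == 0) {
        pool->drops++;            /* no buffer available: drop the cell */
        return 0;
    }
    pool->free_buffers--;
    q->depth++;                   /* enqueue at the tail of the queue   */
    return 1;
}
```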
In addition to processing and buffering incoming streams of data cells, the input port processor 14 must transfer data cells from a cell buffer to one or more output port processors 16 through the Data Crossbar 10. The transfer of the data cells is performed through the use of the Probe Crossbar 42, the XOFF Crossbar 44, and the Data Crossbar 10, as previously described. Specifically, a multiqueue number, which is derived from information provided to an MTC 18 from a scheduling list 47 in an input port processor 14, is transmitted from the MTC 18 to one or more output port processors 16 using the Probe Crossbar 42. Each output port processor 16 then tests for buffer availability and asserts a "DO NOT SEND" type feedback message through the XOFF Crossbar 44 if output buffering is not available for that connection. If output buffering is available for that connection, the input port processor 14 transmits a data cell to one or more output queues 28 through the Data Crossbar 10. However, before any data cells are enqueued into any output queue 28, the output port processor 16 processes each data cell based on information contained in the trailer of the converted data cell.
Referring particularly to FIGS. 6 and 7, the input queue descriptor and the output queue descriptor both include a connection identification (Conn ID) field 60. This field 60 contains an arbitrary code that is assigned by the switch control software indicating 1 of 8 possible data flow paths upon which to perform a cell mask. When processing a data cell, an input port processor 14 will insert the code from the connection identification field 60 of the input queue descriptor into a similar connection identification (Conn ID) field 62 in the converted data cell (see FIG. 5). The converted data cell also includes an ingress port number field 64, indicating the number of the input port processor 14 where the data cell 22 was received, and an ingress link number field 66, indicating the number of the input link 24 that the data cell 22 arrived on. Note that the output queue descriptor also contains an "echo" field 68 for a 2-bit code which indicates what action an output port processor 16 should take when processing a data cell transmitted from an input port processor 14, as will be described in detail below. The code in the echo field 68 is also assigned by the switch control software.
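The fields involved in echo processing can be summarized as a pair of hypothetical C structs; the field names track the description above, but the widths and packing are assumptions rather than the actual internal switch data cell format of FIG. 5.

```c
#include <stdint.h>

/* Trailer information carried by the converted (internal-format) data cell,
 * as appended by the input port processor. */
typedef struct {
    uint8_t conn_id;        /* Conn ID field 62: 1 of 8 data flow paths       */
    uint8_t ingress_port;   /* field 64: input port processor number          */
    uint8_t ingress_link;   /* field 66: input link number                    */
} cell_trailer_t;

/* The part of an output queue descriptor consulted during echo processing. */
typedef struct {
    uint8_t conn_id;        /* Conn ID field 60, assigned by control software */
    uint8_t echo;           /* 2-bit echo field 68: "00", "01", "10", or "11" */
} out_queue_desc_t;
```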
For every data cell transmitted from an input port processor 14, an output port processor 16 processes the data cell by comparing its own port number, link number, and connection identification code to the port number, link number, and connection identification code of the converted data cell. In conjunction with the 2-bit code in the echo field 68 of the output queue descriptor, this comparison is used to decide whether or not to enqueue the data cell arriving at a corresponding output queue 28.
Referring to FIG. 8, there is shown a table indicating the different echo field codes and the corresponding output port processor functions associated with those codes. For example, if the echo field 68 in the output queue descriptor is set to "00", the output port processor 16 will always enqueue the data cell. In contrast, if the echo field 68 in the output queue descriptor is set to "11", the output port processor 16 will never enqueue the data cell. More important, however, are the actions of the output port processor 16 when the echo field 68 in the output queue descriptor is set to "01" or "10". More specifically, echo processing of data cells received by an output port processor 16 conserves resources in the switch 1 in a multipoint-to-multipoint switching scenario.
To illustrate the aforementioned conservation of switching resources in a multipoint-to-multipoint switching scenario, it must be understood that a switch 1 is often used within a network of similar switches wherein data cells are routed through the network. In a multipoint-to-multipoint switching scenario, data cells from a variety of sources are transferred from multiple input queues 26 to multiple output queues 28 within a switch 1. In such a scenario, it is often beneficial to eliminate duplicate processing of data cells or to otherwise prevent the flow of certain data cells through a switch 1 so as to free up valuable switching resources. Echo processing of data cells received by an output port processor 16 achieves this objective by essentially screening converted data cells according to the port number, link number, and connection identification code contained in the converted data cells.
When the echo field 68 in the output queue descriptor is set to "01", a "No Echo" situation, the output port processor 16 will always enqueue the data cell unless the port number, the link number, and the connection identification code of the data cell match the port number, link number, and connection identification code of the output port processor 16. Alternatively, when the echo field 68 in the output queue descriptor is set to "10", an "Only Echo" situation, the output port processor 16 will enqueue the data cell only if the port number, the link number, and the connection identification code of the data cell match the port number, link number, and connection identification code of the output port processor 16.
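Taken together, the four echo field codes reduce the enqueue decision to one comparison and a two-bit case analysis, as in the following illustrative C sketch (an informal reading of FIG. 8, not the output port processor's actual logic).

```c
#include <stdbool.h>

enum echo_code {
    ECHO_ALWAYS  = 0x0,    /* "00": always enqueue                          */
    ECHO_NO_ECHO = 0x1,    /* "01": enqueue unless all three fields match   */
    ECHO_ONLY    = 0x2,    /* "10": enqueue only if all three fields match  */
    ECHO_NEVER   = 0x3     /* "11": never enqueue                           */
};

/* Decide whether an output queue enqueues a converted cell, given the
 * cell's appended port/link/connection-ID and the queue's own values. */
static bool should_enqueue(enum echo_code echo,
                           int cell_port, int cell_link, int cell_conn,
                           int q_port,    int q_link,    int q_conn)
{
    bool match = (cell_port == q_port) &&
                 (cell_link == q_link) &&
                 (cell_conn == q_conn);

    switch (echo) {
    case ECHO_ALWAYS:  return true;
    case ECHO_NEVER:   return false;
    case ECHO_NO_ECHO: return !match;   /* screen out the cell's own source */
    case ECHO_ONLY:    return match;    /* accept only the echoed source    */
    }
    return false;
}
```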
Referring to FIG. 9, there is shown an example of a "No Echo" multipoint-to-multipoint switching scenario, wherein a plurality of data cells (A, B, C, and D) are being transmitted from a corresponding plurality of sources (T1, T2, T3, and S1) to a plurality of destinations (R1, R2, R3, and S2). More specifically, T1, T2, and T3 denote end station transmitters, R1, R2, and R3 denote end station receivers, and S1 and S2 denote other switching elements within a network. The data cells are received by input port processors 14a and 14b, where they are processed and enqueued in input queues 26a, 26b, 26c, and 26d. Input port processor 14a and output port processor 16a have the same port number, and input port processor 14b and output port processor 16b have the same port number. Links 24a, 24c, 30a, and 30c all have the same link number, and links 24b, 24d, 30b, and 30d all have the same link number. All of the input queues 26 and output queues 28 have been assigned an arbitrary connection identification code of 6.
As previously described, the processing of the data cells includes amending the trailer in each data cell to include an arbitrary connection identification code, a link number, and a port number. In this particular example, data cell A is assigned an arbitrary connection identification code of 6, a link number of 24a, and a port number of 14a. Similarly, data cell B has been assigned an arbitrary connection identification code of 6, a link number of 24b, and a port number of 14a; data cell C has been assigned an arbitrary connection identification code of 6, a link number of 24c, and a port number of 14b; and data cell D has been assigned an arbitrary connection identification code of 6, a link number of 24d, and a port number of 14b. For each data cell, multiqueue numbers are transmitted simultaneously to the output port processors 16a and 16b, whereby each output port processor 16a and 16b tests for buffer availability, i.e. output port processor 16a tests output queues 28a and 28b for buffer availability, and output port processor 16b tests output queues 28c and 28d for buffer availability. If sufficient buffering is available, the data cells are then transmitted through the Data Crossbar 10 and the data cells are processed by the corresponding output port processors 16a and 16b.
In the "No Echo" scenario, the output port processors 16a and 16b will enqueue the data cells unless the port number, the link number, and the connection identification code of the data cells match the port number, link number, and connection identification code of the output port processors 16a and 16b. Thus, output queue 28a will enqueue data cells B, C, and D, output queue 28b will enqueue data cells A, C, and D, output queue 28c will enqueue data cells A, B, and D, and output queue 28d will enqueue data cells A, B, and C.
Connection identification codes provide another control to screen sources or destinations from transmitting or receiving, respectively. This augments the physical port and link number screening.
As illustrated above, echo processing allows each output queue 28a-d to receive data cells from a different set of sources while utilizing a single set of connection resources, namely input queues 26a-d, scheduling lists 47, and output queues 28a-d. Echo processing thus allows valuable switching resources to become unencumbered in a network switch involved in a multipoint-to-multipoint switching scenario.
It will be understood that various changes and modifications to the above described method and apparatus may be made without departing from the inventive concepts disclosed herein. Accordingly, the present invention is not to be viewed as limited to the embodiment described herein.
Claims
  • 1. A method for unencumbering valuable switching resources in a network switch, wherein said network switch has an input processing port and an output processing port connected to a plurality of links and having a plurality of data buffering queues, wherein each of said data buffering queues has a connection identification code, and wherein said data buffering queues in said output processing port have a data cell processing code, said method comprising the steps of:
  • receiving a data cell at an output processing port, said data cell containing a link number indicating an input link where said data cell arrived, a port number indicating an input processing port where said data cell was received, and a connection identification code associated with a data buffering queue in said input processing port where said data cell was buffered;
  • comparing said link number to a link number of a link connected to said output processing port, said port number to a port number of said output processing port, and said connection identification code to a connection identification code associated with a data buffering queue in said output processing port; and
  • storing said data cell in said data buffering queue in said output processing port according to a matching scheme between said link numbers, said port numbers, and said connection identification codes as dictated by the value of said data cell processing code.
  • 2. The method as defined in claim 1, wherein said matching scheme requires that said link numbers, said port numbers, and said connection identification codes match in order for said data cell to be stored in said data buffering queue.
  • 3. The method as defined in claim 1, wherein said matching scheme requires that said link numbers, said port numbers, and said connection identification codes do not match in order for said data cell to be stored in said data buffering queue.
  • 4. A method for unencumbering valuable switching resources in a network switch, wherein said network switch has an input processing port and an output processing port connected to a plurality of links and having a plurality of data buffering queues, wherein each of said data buffering queues has a connection identification code, and wherein said data buffering queues in said output processing port have a data cell processing code, said method comprising the steps of:
  • receiving a data cell at an output processing port, said data cell containing a link number indicating an input link where said data cell arrived, a port number indicating an input processing port where said data cell was received, and a connection identification code associated with a data buffering queue in said input processing port where said data cell was buffered; and
  • processing said data cell according to a matching scheme, as dictated by the value of said data cell processing code, between said link number and a link number of a link connected to said output processing port, said port number and a port number of said output processing port, and said connection identification code and a connection identification code associated with a data buffering queue in said output processing port.
  • 5. The method as defined in claim 4, wherein said step of processing said data cell comprises storing said data cell in said data buffering queue in said output processing port when said link numbers, said port numbers, and said connection identification codes match.
  • 6. The method as defined in claim 4, wherein said step of processing said data cell comprises storing said data cell in said data buffering queue in said output processing port when said link numbers, said port numbers, and said connection identification codes do not match.
  • 7. A network switch that is capable of unencumbering valuable switching resources within the network switch, said network switch comprising:
  • a switch fabric;
  • an input processing port connected between a plurality of input links and said switch fabric and having a plurality of data buffering queues, wherein each of said data buffering queues has a connection identification code, wherein said input processing port processes a data cell received on one of said plurality of input links by appending to said data cell a link number indicating an input link where said data cell arrived, a port number indicating said input processing port, and a connection identification code associated with a data buffering queue in said input processing port where said data cell will be buffered; and
  • an output processing port connected between said switch fabric and a plurality of output links and having a plurality of data buffering queues, wherein each of said data buffering queues has a connection identification code, wherein said data buffering queues in said output processing port have a data cell processing code, wherein said output processing port processes a data cell processed by said input processing port and transferred to said output processing port through said switch fabric by comparing said link number to a link number of a link connected to said output processing port, said port number to a port number of said output processing port, and said connection identification code to a connection identification code associated with a data buffering queue in said output processing port, and then storing said data cell in said data buffering queue in said output processing port according to a matching scheme between said link numbers, said port numbers, and said connection identification codes as dictated by the value of said data cell processing code.
  • 8. The network switch as defined in claim 7, wherein said output processing port matching scheme requires that said link numbers, said port numbers, and said connection identification codes match in order for said data cell to be stored in said data buffering queue.
  • 9. The network switch as defined in claim 7, wherein said output processing port matching scheme requires that said link numbers, said port numbers, and said connection identification codes do not match in order for said data cell to be stored in said data buffering queue.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of application Ser. No. 08/683,335, filed Jul. 18, 1996, now abandoned. A claim of priority is made to provisional application 60/001,498, entitled COMMUNICATION METHOD AND APPARATUS, filed Jul. 19, 1995.
