Point-to-multipoint arbitration

Information

  • Patent Number
    5,862,137
  • Date Filed
    Thursday, July 18, 1996
  • Date Issued
    Tuesday, January 19, 1999
Abstract
An Asynchronous Transfer Mode switch and method which facilitate point-to-multipoint transmission are disclosed. The switch includes a bandwidth arbiter, and each input port within the switch includes a switch allocation table ("SAT") which controls bandwidth allocation between input and output ports. Each SAT includes a plurality of sequentially ordered cell time slots and a synchronized pointer which is directed to one of the slots such that at any given point in time each of the pointers is directed to the same slot location in the respective SAT with which the pointer is associated. To execute point-to-multipoint operation the bandwidth arbiter maintains a list of connections and bit vectors indicating the designated destination ports for a point-to-multipoint cell. The list maintained by the bandwidth arbiter is then compared to an unassigned output port bit vector generated from the SATs to determine matches therebetween, at which point point-to-multipoint transmission may be made by utilizing instantaneously unused bandwidth within the switch while arbitration distributes bandwidth among competing point-to-multipoint connections. The bandwidth arbiter may also assign priority to connections in the list.
Description

FIELD OF THE INVENTION
The present invention is generally related to telecommunications networks, and more particularly to point-to-multipoint arbitration, bandwidth allocation and delay management within an asynchronous transfer mode switch.
RELATED APPLICATION
A claim of priority is made to provisional application 60/001,498 entitled COMMUNICATION METHOD AND APPARATUS, filed Jul. 19, 1995.
BACKGROUND OF THE INVENTION
Telecommunications networks such as asynchronous transfer mode ("ATM") networks are used for transfer of audio, video and other data. ATM networks deliver data by routing data units such as ATM cells from source to destination through switches. Switches include input/output ("I/O") ports through which ATM cells are received and transmitted. The appropriate output port for transmission of the cell is determined based on the cell header.
One problem associated with ATM networks is loss of cells. Cells are buffered within each switch before being routed and transmitted from the switch. More particularly, switches typically have buffers at either the inputs or outputs of the switch for temporarily storing cells prior to transmission. As network traffic increases, there is an increasing possibility that buffer space will be inadequate; if the buffers overflow, cells are lost. Cell loss causes undesirable interruptions in audio and video data transmissions, and may cause more serious damage to other types of data transmissions.
In point-to-multipoint transmission a cell is transmitted from a single input to multiple outputs across the switch fabric. In order to execute such a transmission, each of the designated outputs must be available to receive the cell from the transmitting input, i.e., have adequate buffer space. However, the likelihood that each of the designated outputs will be simultaneously prepared to receive the cell when the cell is enqueued decreases as traffic within the switch increases. In some circumstances this may result in delayed transmission. In the worst case, cells will be delayed indefinitely and incoming cells for that connection are discarded. It would therefore be desirable to facilitate point-to-multipoint transmission by reducing or eliminating delays and cell loss.
SUMMARY OF THE INVENTION
An Asynchronous Transfer Mode ("ATM") switch and method which facilitate point-to-multipoint transmission is disclosed. The ATM switch includes a bandwidth arbiter, a plurality of input ports and a plurality of output ports. Each input port within the switch includes a switch allocation table ("SAT") which grants bandwidth to connections. Each SAT includes a plurality of sequentially ordered cell time slots and a pointer which is directed to one of the slots. The SAT pointers at each input port are synchronized such that, at any given point in time, each of the pointers is directed to the same slot location in the respective SAT with which the pointer is associated.
Each connection is assigned bandwidth types based on the traffic type associated with the connection. There are two types of bandwidth to grant within the switch: allocated and dynamic. Allocated bandwidth is bandwidth which is "reserved" for use by the connection to which the bandwidth is allocated. Generally, a connection with allocated bandwidth is guaranteed access to the full amount of bandwidth allocated to that connection. As such, traffic types that need deterministic control of delay are assigned allocated bandwidth. Dynamic bandwidth is bandwidth which is "shared" by any of various competing connections. Because dynamic bandwidth is a shared resource, there is generally no guarantee that any particular connection will have access to a particular amount of bandwidth. For this reason dynamic bandwidth is typically assigned to connections with larger delay bounds. Other connections may be assigned a combination of dynamic and allocated bandwidth. Any cell time where the SAT entry is not valid or where the scheduling list does not contain a cell thus represents an unassigned bandwidth opportunity.
To execute point-to-multipoint operation the bandwidth arbiter maintains a list of connections and bit vectors indicating the designated destination ports for a point-to-multipoint cell. The list maintained by the bandwidth arbiter is then compared to an unassigned output port bit vector generated from the SATs to determine matches therebetween, at which point point-to-multipoint transmission may be made by utilizing the instantaneously unused bandwidth within the switch. The bandwidth arbiter may also assign priority to connections in the list.
Switch efficiency is increased by utilizing instantaneously unused bandwidth. When switch traffic increases, available bandwidth decreases and collisions become more frequent. Nevertheless, unutilized bandwidth will be present from time to time, and such bandwidth is wasted if not utilized. Therefore, point-to-multipoint transmissions which would otherwise be dropped are made using the otherwise unutilized bandwidth, and switch efficiency is increased.





BRIEF DESCRIPTION OF THE DRAWING
These and other features and advantages of the present invention will become apparent from the following detailed description of the drawing in which:
FIG. 1 is a block diagram of a switch which facilitates point-to-multipoint operation;
FIG. 2 is a block diagram which illustrates operation of the switch allocation tables of FIG. 1;
FIG. 3 is a block diagram which illustrates operation of the list maintained by the bandwidth arbiter;
FIG. 4 is a flow diagram which illustrates matching between the request bit vectors and the unassigned output port bit vector;
FIG. 5 is a block diagram which illustrates round-robin allocation of bandwidth to TSPP requests; and
FIG. 6 is a flow diagram which illustrates a method of point-to-multipoint bandwidth arbitration.





DETAILED DESCRIPTION OF THE DRAWING
Referring now to FIG. 1, the switch includes an N×N switch fabric 10, a bandwidth arbiter 12, a plurality of to switch port processor subsystems ("TSPP") 14, a plurality of To Switch Port Processor ASICs 15, a plurality of from switch port processor subsystems ("FSPP") 16, a plurality of From Switch Port Processor ASICs 17, a plurality of multipoint topology controllers ("MTC") 18 and a plurality of switch allocation tables ("SAT") 20. The N×N switch fabric, which may be an ECL crosspoint switch fabric, is used for cell data transport, and yields N×670 Mbps throughput. The bandwidth arbiter controls switch fabric interconnection, dynamically schedules momentarily unused bandwidth, and resolves multipoint-to-point bandwidth contention. Each TSPP 14 schedules transmission of cells 22 to the switch fabric from multiple connections. Not shown are the physical line interfaces between the input link and the TSPP 14. The FSPP 16 receives cells from the switch fabric and organizes those cells onto output links. Not shown are the physical line interfaces between the output link and the FSPP 16. The switch allocation table controls crossbar input to output mapping, connection bandwidth and the maximum delay through the switch fabric.
In order to traverse the switch, a cell 22 first enters the switch through an input port 24 and is buffered in a queue 26 of input buffers. The cell is then transmitted from the input buffers to a queue 28 of output buffers in an output port 30. From the output port 30, the cell is transmitted outside of the switch, for example, to another switch. To facilitate traversal of the switch, each input port 24 includes a TSPP 14, and each output port 30 includes an FSPP 16. The TSPPs and FSPPs each include cell buffer RAM 32 which is organized into respective queues 26, 28. All cells in a connection pass through a unique queue at each port, one at the TSPP and one at the FSPP, for the life of the connection. The queues thus preserve cell ordering. This strategy also allows quality of service ("QoS") guarantees on a per connection basis.
Request and feedback messages are transmitted between the TSPP and FSPP to implement flow control. Flow control prevents cell loss within the switch, and is performed after arbitration, but before transmission of the data cell. Flow control is implemented on a per connection basis.
Referring now to FIGS. 1 & 2, each TSPP within the switch includes an SAT 20 which maps bandwidth allocation. The SAT is the basic mechanism behind cell scheduling. Each SAT 20 includes a plurality of sequentially ordered cell time slots 50 and a pointer 52 which is directed to one of the slots. All of the pointers in the switch are synchronized such that at any given point in time each of the pointers is directed to the same slot location in the respective SAT with which the pointer is associated, e.g., the first slot. In operation, the pointers are advanced in lock-step, each slot being active for 32 clock cycles at 50 MHz. When the pointer is directed toward a slot, the TSPP uses the corresponding entry 51 in the SAT to obtain a cell for launching into the switch fabric 10 and to begin flow control.
Each pointer is incremented once per cell time, and returns to the first slot after reaching the last slot. Hence, given an SAT depth of 8k slots, which defines a frame, the pointers scan the SATs approximately every 6 msec, thereby providing a maximum delay for transmission opportunity of approximately 6 msec. The delay can be decreased by duplicating a given entry at a plurality of slots within the SAT. The maximum delay that an incoming cell will experience corresponds to the number of slots between the pointer and the slot containing the entry which specifies the destination of the cell. When multiple entries are made, they are therefore preferably spaced equidistantly within the SAT in order to minimize the maximum possible number of separating slots. Maximum delay for transmission opportunity therefore corresponds to the frequency and spacing of duplicate entries within the SAT.
The amount of bandwidth allocated to a particular connection corresponds to the frequency at which a given entry appears in the SAT. Each slot 50 provides 64 Kbps of bandwidth. Since the pointers cycle through the SATs at a constant rate, the total bandwidth allocated to a particular connection is equal to the product of 64 Kbps and the number of occurrences of that entry. For example, connection identifier "g (4,6)," which occurs in five slots, is allocated 320 Kbps of bandwidth.
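As a rough illustration of the arithmetic above, the following C sketch (names and the assumption of equidistantly spaced duplicate entries are illustrative, not part of the patent's hardware) computes the bandwidth granted to a connection and its worst-case wait for a transmission opportunity from the number of SAT slots it occupies, using the 64 Kbps slot bandwidth and approximately 6 msec frame time stated above.

    #include <stdio.h>

    #define SLOT_BW_KBPS   64     /* each SAT slot grants 64 Kbps              */
    #define FRAME_TIME_MS  6.0    /* one full scan of an 8k-slot SAT, ~6 msec  */

    /* Allocated bandwidth: 64 Kbps times the number of SAT slots holding the
     * connection's entry.                                                      */
    unsigned allocated_kbps(unsigned occurrences)
    {
        return occurrences * SLOT_BW_KBPS;
    }

    /* Worst-case wait for a transmission opportunity, assuming the duplicate
     * entries are spaced equidistantly: the largest gap between entries is a
     * 1/occurrences fraction of the ~6 msec frame.                             */
    double max_delay_ms(unsigned occurrences)
    {
        return FRAME_TIME_MS / (double)occurrences;
    }

    int main(void)
    {
        /* Connection "g (4,6)" from FIG. 2 appears in five slots. */
        printf("bandwidth = %u Kbps\n", allocated_kbps(5));   /* 320 Kbps */
        printf("max delay = %.1f ms\n", max_delay_ms(5));     /* ~1.2 ms  */
        return 0;
    }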
Significantly, instantaneously unused bandwidth 60 will become available in the switch during operation. Such instantaneously unused bandwidth may occur because that bandwidth, i.e., that entry in the SAT, has not been allocated to any connection. Such bandwidth is referred to as "unallocated bandwidth." Unused bandwidth may also occur when the SAT entry is allocated to a connection, but the connection does not have a cell enqueued for transmission across the switch. Such bandwidth is referred to as "unused-allocated" bandwidth. Both types of unused bandwidth are collectively referred to as "dynamic" bandwidth, and some connections, such as connections assigned an Available Bit Rate ("ABR") QoS level, utilize such dynamic bandwidth. The bandwidth arbiter operates to increase efficiency within the switch by granting dynamic bandwidth to such connections.
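A cell time therefore falls into one of three cases. The minimal sketch below, assuming a simple SAT entry structure of its own invention, classifies a slot the way the text describes: an invalid entry is unallocated bandwidth, a valid entry whose connection has no cell enqueued is unused-allocated bandwidth, and either kind counts as dynamic bandwidth available to the arbiter.

    #include <stdbool.h>

    /* Hypothetical SAT entry: a valid flag and the connection the slot is
     * allocated to (meaningful only when valid).                              */
    struct sat_entry {
        bool     valid;
        unsigned connection_id;
    };

    enum slot_use { ALLOCATED_IN_USE, UNUSED_ALLOCATED, UNALLOCATED };

    /* Classify the current cell time as described in the text. */
    enum slot_use classify_slot(const struct sat_entry *e, bool cell_enqueued)
    {
        if (!e->valid)
            return UNALLOCATED;        /* no connection owns this slot      */
        if (!cell_enqueued)
            return UNUSED_ALLOCATED;   /* owner has nothing to send now     */
        return ALLOCATED_IN_USE;       /* slot consumed by its owner        */
    }

    /* Unallocated and unused-allocated bandwidth together form the dynamic
     * bandwidth pool that the bandwidth arbiter grants to other connections. */
    bool is_dynamic_bandwidth(enum slot_use u)
    {
        return u != ALLOCATED_IN_USE;
    }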
Referring now to FIGS. 1-3, if a connection has no allocated bandwidth, or if the arriving cell rate is greater than the allocated rate as indicated by an input queue threshold, dynamic bandwidth may be employed. In either situation the point-to-multipoint transmission described in the SAT entry 51 is entered into a list 53 maintained by the bandwidth arbiter as a "request" in order that the point-to-multipoint transmission can be made at the next available opportunity.
The list 53 maintained by the bandwidth arbiter includes two fields for storing point-to-multipoint transmissions which utilize dynamic bandwidth. A connection identifier field 56 is employed to store the connection identifier, e.g., "a," and hence also indicates the port of origin. A bit vector field 58 is employed to indicate the designated output ports for transmission. The bit vector field is a bit mask which, in the case of an 8×8 switch, includes eight bits, each bit corresponding to a specific output port. Thus, for the exemplary SAT entry "a (2,3)" the list 53 contains "00000110" in the bit vector field (where the port identification numbers start from "1" rather than from "0"). The logic "1" values in the bit vector field indicate destination output ports "2" and "3," and the logic "0" values indicate non-destination output ports. The connections and bit vectors in the list 53 are entered sequentially in the order in which they are received.
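For an 8×8 switch the bit vector field is simply an 8-bit mask. A minimal sketch, assuming the 1-based port numbering used in the example above (the function name is illustrative), builds the mask for the SAT entry "a (2,3)":

    #include <stdint.h>
    #include <stdio.h>

    /* Build a request bit vector for an 8x8 switch: bit (port - 1) is set for
     * each designated output port, with ports numbered from 1 as in the text. */
    uint8_t request_bit_vector(const unsigned *ports, unsigned nports)
    {
        uint8_t vec = 0;
        for (unsigned i = 0; i < nports; i++)
            vec |= (uint8_t)(1u << (ports[i] - 1));
        return vec;
    }

    int main(void)
    {
        unsigned dests[] = { 2, 3 };   /* SAT entry "a (2,3)" */
        printf("%#04x\n", request_bit_vector(dests, 2));   /* 0x06 == 00000110 */
        return 0;
    }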
To execute point-to-multipoint operation of cells described in the list maintained by the bandwidth arbiter the bandwidth arbiter tests for matches between the list and dynamic bandwidth. More particularly, the connection identifier 56 and bit vector 58 corresponding to "a (2,3)" is entered into the list 53 so that the cell will be transmitted when a dynamic bandwidth opportunity becomes available for simultaneous transmission to each output port designated by the request.
Referring now to FIGS. 1-4, to determine matches between the requests in the list maintained by the bandwidth arbiter and available bandwidth, the bandwidth arbiter first calculates an unassigned output port bit vector by ORing all allocated bit vectors from the SAT and toggling each resultant bit to provide a single unassigned output port bit vector. The unassigned output port bit vector is then matched against each request. For a particular input port the entered requests are tested in parallel for a match, and for simplification matching may be made against only the first four requests in the list. If all of the bits in a request match the unassigned bit vector, a match is made. When a match is made, the request is subtracted from the unassigned bit vector, and the result serves as the new unassigned bit vector which indicates remaining available output ports for matching against other input port request bit vectors in the list. After matching against each of the requests, the matched requests are transmitted and the transmitted requests are dequeued from the list.
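The matching step can be expressed compactly with bitwise operations. The sketch below, again assuming one bit per output port of an 8×8 switch (names illustrative), ORs the allocated bit vectors and toggles the result to form the unassigned output port bit vector, accepts a request only when every requested port is unassigned, and subtracts matched ports so the remainder is available to later requests.

    #include <stdbool.h>
    #include <stdint.h>

    /* Unassigned output port bit vector: OR all allocated bit vectors for this
     * cell time and toggle each resulting bit.                                 */
    uint8_t unassigned_vector(const uint8_t *allocated, unsigned n)
    {
        uint8_t busy = 0;
        for (unsigned i = 0; i < n; i++)
            busy |= allocated[i];
        return (uint8_t)~busy;
    }

    /* A request matches only if all of its bits are present in the unassigned
     * bit vector, i.e. every designated output port is free.                   */
    bool request_matches(uint8_t request, uint8_t unassigned)
    {
        return (request & unassigned) == request;
    }

    /* On a match the request is subtracted from the unassigned bit vector,
     * leaving the remaining output ports available for other requests.         */
    uint8_t subtract_request(uint8_t request, uint8_t unassigned)
    {
        return unassigned & (uint8_t)~request;
    }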
A prioritization technique may be used in conjunction with the matching operation in the bandwidth arbiter in order to support switch traffic having different priority levels, such as QoS levels. To implement such prioritization each TSPP defines a priority level for each submitted request. Such priority levels could be HI and LO levels, or could include more than two levels. When prioritization is implemented the bandwidth arbiter attempts to match higher priority requests before attempting to match lower priority requests. Since the unassigned bit vector contains fewer unassigned bits after each subsequent match is made, the higher priority requests are then more likely to obtain a match and be transmitted than the lower priority requests. This higher likelihood for a match translates into a quicker response and greater bandwidth for such higher priority connections.
Referring now to FIG. 5, the bandwidth arbiter may grant bandwidth to requesting TSPPs by attempting to match available bandwidth on a round-robin basis. Matching is done in parallel, and granting is then attempted. A pointer 67 is employed to select a TSPP with which a grant opportunity is first attempted, e.g., TSPP i+1. After providing granting opportunities to TSPP i+1, granting opportunities are next provided to TSPP i+2, and so on ending with TSPP i such that granting opportunities are provided to each TSPP. If the first TSPP (here TSPP i+1) is able to transmit the cell in the oldest entry (here described by connection "a") then the pointer 67 begins with the next TSPP (here TSPP i+2) at the next cell time. However, if the first TSPP is not able to transmit the cell in the oldest entry then the pointer 67 begins with the same TSPP (here TSPP i+1) at the next cell time. When multiple matches are determined for a single TSPP the oldest match is selected for transmission. Thus, every point-to-multipoint connection is guaranteed to receive bandwidth.
When HI and LO prioritization is employed, separate HI and LO round-robin operations are executed to grant bandwidth. Each of the round-robin operations operates in the same fashion, but matching is not attempted on the LO priority requests until a match has been attempted with each of the HI priority requests. Hence, a separate round robin operation is executed for each priority level.
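A rough software sketch of one grant pass follows, with hypothetical per-TSPP state (the real interfaces are ASIC signals, not these structures). A separate pass, with its own pointer, is run for each priority level, HI before LO; within a pass the pointer advances only if the TSPP it started with was able to transmit its oldest entry.

    #include <stdbool.h>

    #define NUM_TSPP 8   /* illustrative 8-port switch */

    /* Hypothetical per-cell-time view of one TSPP at one priority level:
     * whether it holds a request and whether its oldest request matched the
     * unassigned output port bit vector.                                      */
    struct tspp_request {
        bool present;
        bool matched;
    };

    /* One round-robin grant pass at a single priority level. Grant
     * opportunities are offered to every TSPP starting at *rr_ptr; the pointer
     * advances to the next TSPP only if the TSPP it started with transmitted
     * its oldest entry, otherwise it stays put, so every point-to-multipoint
     * connection is eventually granted bandwidth.                             */
    void grant_pass(const struct tspp_request req[NUM_TSPP], unsigned *rr_ptr)
    {
        bool first_transmitted = false;

        for (unsigned k = 0; k < NUM_TSPP; k++) {
            unsigned i = (*rr_ptr + k) % NUM_TSPP;
            if (req[i].present && req[i].matched) {
                /* ... launch the oldest matched cell for TSPP i ... */
                if (k == 0)
                    first_transmitted = true;
            }
        }

        if (first_transmitted)
            *rr_ptr = (*rr_ptr + 1) % NUM_TSPP;   /* begin with the next TSPP */
        /* else: begin with the same TSPP at the next cell time               */
    }

    /* With HI and LO prioritization:
     *     grant_pass(hi_requests, &hi_ptr);   -- HI round robin first
     *     grant_pass(lo_requests, &lo_ptr);   -- then LO round robin
     */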
To further ensure that there will be opportunities for point-to-multipoint connections to transmit, a portion of unassigned bandwidth, i.e., unallocated SAT entries, may be put aside for dedication to point-to-multipoint transmissions. This technique provides increased opportunity for point-to-multipoint connections which specify a greater number of output ports to be matched and transmitted, and prevents connections from becoming stuck or starved for bandwidth.
FIG. 6 illustrates a method of point-to-multipoint arbitration. In a first step a bit vector representation of the SAT entry is entered 68 into the list as a connection identifier and output bit vector. In the next cell time, the allocated bit vectors are ORed and used to generate 70 the unassigned bit vector. An attempt is then made to match 72 the unassigned bit vector with request N in the list, where N is the oldest request in the list. If no match is made, N is incremented 74 and an attempt is made to match the unassigned bit vector with request N+1, i.e., the next oldest request in the list. If a match is made, the bit vector of the matched request is subtracted 76 from the unassigned bit vector to provide an updated unassigned bit vector. The cell corresponding to the matched request is then transmitted 78, and a determination 80 is made as to whether the end of the list maintained by the bandwidth arbiter has been reached. Flow ends if the end of the list has been reached, i.e., an attempt has been made to match the unassigned bit vector with each request in the list. If the end of the list has not been reached, N is incremented, and an attempt is made to match the next oldest request in the list with the unassigned bit vector.
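As a minimal software model of this flow (illustrative types and list handling; the actual arbiter is hardware and bounds how many requests it examines per cell time), the loop below walks the request list from the oldest entry, compares each request bit vector with the unassigned bit vector, and on a match subtracts the requested ports and transmits.

    #include <stdint.h>
    #include <stdio.h>

    /* One entry of the arbiter's request list: connection identifier and the
     * designated output ports, one bit per port with port 1 as the LSB.       */
    struct request {
        char    connection;
        uint8_t dest_vector;
    };

    /* FIG. 6 modeled in software: try each request against the unassigned
     * output port bit vector, oldest first; on a match, subtract the request
     * and transmit the corresponding cell.                                    */
    void arbitrate(const struct request list[], unsigned count, uint8_t unassigned)
    {
        for (unsigned n = 0; n < count; n++) {
            uint8_t req = list[n].dest_vector;
            if ((req & unassigned) == req) {            /* match 72            */
                unassigned &= (uint8_t)~req;            /* subtract 76         */
                printf("transmit '%c' to ports %#04x\n",
                       list[n].connection, req);        /* transmit 78         */
                /* the matched request would also be dequeued from the list    */
            }
        }
    }

    int main(void)
    {
        struct request list[] = {
            { 'a', 0x06 },   /* a (2,3) */
            { 'g', 0x28 },   /* g (4,6) */
        };
        uint8_t allocated  = 0xF0;                   /* ports 5-8 allocated    */
        uint8_t unassigned = (uint8_t)~allocated;    /* ports 1-4 unassigned   */

        arbitrate(list, 2, unassigned);   /* 'a' matches; 'g' needs port 6     */
        return 0;
    }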
Having described the preferred embodiments of the invention, it will now become apparent to one of skill in the art that other embodiments incorporating their concepts may be used. It is felt therefore that the invention should not be limited to disclosed embodiments, but rather should be limited only by the spirit and scope of the appended claims.
Claims
  • 1. A network switch for facilitating transmission of a unit of data in a connection from an input port to a plurality of output ports, comprising:
  • a first map operative to store a representation of dynamic bandwidth that represents both unallocated bandwidth and unused-allocated bandwidth within the switch;
  • a second map operative to store a representation which identifies the output ports to which said unit of data is enqueued for transmission; and
  • a matching operator which functions to match dynamic bandwidth to at least one representation of said unit of data enqueued for transmission by utilizing said first map and said second map,
  • whereby said unit of data enqueued for transmission is transmitted to said plurality of output ports as matching dynamic bandwidth becomes available.
  • 2. The network switch of claim 1 wherein said second map has a plurality of fields, including a first field operative to store a representation of the connection associated with said unit of data enqueued for transmission, including an identification of an input port of origin.
  • 3. The network switch of claim 2 wherein said second map further includes a second field operative to store a representation which provides an identification of the output ports to which said unit of data enqueued for transmission is to be transmitted.
  • 4. The network switch of claim 3 wherein said first map includes a table operative to store a bitmask which represents dynamic bandwidth within the switch, said table indexed by an index identifier.
  • 5. The network switch of claim 1 wherein a plurality of units of data are enqueued and said matching operator functions to match dynamic bandwidth to a first occurring, matching unit of data of the plurality of units of data enqueued for transmission by sequentially providing opportunities to utilize such dynamic bandwidth starting from a first enqueued unit of data.
  • 6. The network switch of claim 5 wherein said plurality of units of data are prioritized into at least two separate groups and said matching operator functions to match dynamic bandwidth to higher priority groups prior to matching dynamic bandwidth to lower priority groups.
  • 7. The network switch of claim 4 wherein said matching operator generates an unassigned output port vector from said table, said unassigned output port vector being compared to said second field to match dynamic bandwidth with said enqueued unit of data.
  • 8. The network switch of claim 7 wherein a plurality of units of data are enqueued and said second map includes a pointer for indicating the order in which enqueued units of data are provided with opportunities to transmit via dynamic bandwidth, said pointer being incremented to provide ordering of opportunities to transmit via dynamic bandwidth in a round-robin manner.
  • 9. A method for point-to-multipoint transmission of a unit of data in a network switch comprising the steps of:
  • in a first storing step, storing a representation of said unit of data, including an identification of output ports designated for receipt of said unit of data, in a request list;
  • in a second storing step, storing a representation of dynamic bandwidth that represents both unallocated bandwidth and unused-allocated bandwidth in an allocation list;
  • comparing the request list to the allocation list to determine if a match therebetween exists; and
  • transmitting said unit of data when a match is determined to exist.
  • 10. The method of claim 9 wherein said first storing step includes the further step of storing a request map having a plurality of fields and including a connection field operative to store a representation of the connection associated with said unit of data enqueued for transmission, said representation including an identification of the port of origin of said unit of data.
  • 11. The method of claim 10 wherein said first storing step includes the further step of storing a request bit vector representation having an identification of each output port to which said unit of data is to be transmitted.
  • 12. The method of claim 11 wherein said second storing step includes the further step of storing a bitmask which represents dynamic bandwidth within the switch.
  • 13. The method of claim 12 wherein said comparing step includes the further step of comparing the bitmask which represents dynamic bandwidth to the request bit vector.
  • 14. The method of claim 13 wherein a plurality of request bit vectors are stored and said comparing step includes the further step of prioritizing the request bit vectors into at least two separate groups arranged from high priority to low priority, said comparing step being performed upon higher priority request bit vectors prior to said lower priority request bit vectors.
  • 15. The method of claim 13 wherein the request bitmask field includes a pointer for indicating the order in which enqueued, matched units of data are first provided with opportunities to transmit via dynamic bandwidth, said comparing step including the further step of incrementing the pointer to provide ordering of transmission opportunities in a round-robin manner.
  • 16. An asynchronous transfer mode ("ATM") switch for controlling a plurality of connections and transmitting an enqueued cell associated with one of said plurality of connections from a single input port to a plurality of designated output ports, comprising:
  • a first memory for temporarily storing said enqueued cell, a first portion of said first memory being reserved for use by at least one connection of the plurality of connections and a second portion of said first memory being shared by a portion of the plurality of connections;
  • a second memory containing an identification of unassigned output ports at specified time intervals at which dynamically available memory occurs in the reserved portion of memory and the shared portion of memory; and
  • an arbiter circuit which controls allocation of the dynamically available memory to the plurality of connections, said arbiter circuit maintaining a record of the designated output ports and being operative to compare said identification of unassigned output ports to said record of designated output ports to identify a match therebetween, corresponding to a time at which said enqueued cell may be transmitted to a selected one of said plurality of designated output ports, said switch being operative to transmit said enqueued cell upon identification of said match.
  • 17. The ATM switch of claim 16 wherein said record maintained by said arbiter circuit includes a connection identification field which provides an indication of the input port of origin associated with said enqueued cell.
  • 18. The ATM switch of claim 17 wherein said record maintained by said arbiter circuit includes a bit vector field which provides an indication of output ports designated by the enqueued cell.
  • 19. The ATM switch of claim 18 wherein said record maintained by said arbiter circuit includes a bit vector which provides an indication of input port of origin and designated output ports for said enqueued cell.
  • 20. The ATM switch of claim 19 wherein a plurality of cells are enqueued, each having a bit vector associated therewith providing an indication of input port of origin and designated output ports, and said arbiter circuit performs a logical OR operation on said plurality of bit vectors to provide an unassigned output port bit vector indicative of dynamically available memory.
  • 21. The ATM switch of claim 20 wherein each enqueued cell of the plurality of cells is transmitted only when each respective output port designated by such cell is matched to dynamically available memory.
  • 22. The ATM switch of claim 21 wherein said bit vector field includes first and second sections, said first section being a high priority section and said second section being a low priority section, said arbiter circuit implementing prioritization by providing transmission opportunities to enqueued cells represented in said high priority section prior to providing transmission opportunities to enqueued cells represented in said low priority section.
  • 23. The ATM switch of claim 22 wherein transmission opportunities are provided within said high priority section of said bit vector field in a round-robin manner.
  • 24. The ATM switch of claim 23 wherein transmission opportunities are provided within said low priority section of said bit vector field in a round-robin manner.
  • 25. The ATM switch of claim 24 wherein a portion of said second portion of said first memory is dedicated to point-to-multipoint transmission.
US Referenced Citations (246)
Number Name Date Kind
3804991 Hammond et al. Apr 1974
3974343 Cheney et al. Aug 1976
4069399 Barrett et al. Jan 1978
4603382 Cole et al. Jul 1986
4715030 Koch et al. Dec 1987
4727537 Nichols Feb 1988
4737953 Koch et al. Apr 1988
4797881 Ben-Artzi Jan 1989
4821034 Anderson et al. Apr 1989
4837761 Isono et al. Jun 1989
4849968 Turner Jul 1989
4870641 Pattavina Sep 1989
4872159 Hemmady et al. Oct 1989
4872160 Hemmady et al. Oct 1989
4878216 Yunoki Oct 1989
4893302 Hemmady et al. Jan 1990
4893307 McKay et al. Jan 1990
4894824 Hemmady et al. Jan 1990
4897841 Gang, Jr. Jan 1990
4899333 Roediger Feb 1990
4920531 Isono et al. Apr 1990
4922503 Leone May 1990
4933935 Adams Jun 1990
4933938 Sheehy Jun 1990
4947390 Sheehy Aug 1990
4953157 Franklin et al. Aug 1990
4956839 Torii et al. Sep 1990
4958341 Hemmady et al. Sep 1990
4979100 Makris et al. Dec 1990
4993018 Hajikano et al. Feb 1991
5021949 Morten et al. Jun 1991
5029164 Goldstein et al. Jul 1991
5060228 Tsutsui et al. Oct 1991
5067123 Hyodo et al. Nov 1991
5070498 Kakuma et al. Dec 1991
5083269 Syobatake et al. Jan 1992
5084867 Tachibana et al. Jan 1992
5084871 Carn et al. Jan 1992
5090011 Fukuta et al. Feb 1992
5090024 Vander Mey et al. Feb 1992
5093912 Dong et al. Mar 1992
5115429 Hluchyj et al. May 1992
5119369 Tanabe et al. Jun 1992
5119372 Verbeek Jun 1992
5128932 Li Jul 1992
5130975 Akata Jul 1992
5130982 Ash et al. Jul 1992
5132966 Hayano et al. Jul 1992
5146474 Nagler et al. Sep 1992
5146560 Goldberg et al. Sep 1992
5150358 Punj et al. Sep 1992
5151897 Suzuki Sep 1992
5157657 Potter et al. Oct 1992
5163045 Caram et al. Nov 1992
5163046 Hahne et al. Nov 1992
5179556 Turner Jan 1993
5179558 Thacker et al. Jan 1993
5185743 Murayama et al. Feb 1993
5191582 Upp Mar 1993
5191652 Dias et al. Mar 1993
5193151 Jain Mar 1993
5197067 Fujimoto et al. Mar 1993
5198808 Kudo Mar 1993
5199027 Barri Mar 1993
5239539 Uchida et al. Aug 1993
5253247 Hirose et al. Oct 1993
5253248 Dravida et al. Oct 1993
5255264 Cotton et al. Oct 1993
5255266 Watanabe et al. Oct 1993
5257311 Naito et al. Oct 1993
5258979 Oomuro et al. Nov 1993
5265088 Takigawa et al. Nov 1993
5267232 Katsube et al. Nov 1993
5268897 Komine et al. Dec 1993
5271010 Miyake et al. Dec 1993
5272697 Fraser et al. Dec 1993
5274641 Shobatake et al. Dec 1993
5274768 Traw et al. Dec 1993
5280469 Taniguchi et al. Jan 1994
5280470 Buhrke et al. Jan 1994
5282201 Frank et al. Jan 1994
5283788 Morita et al. Feb 1994
5285446 Yonehara Feb 1994
5287349 Hyodo et al. Feb 1994
5287535 Sakagawa et al. Feb 1994
5289462 Ahmadi et al. Feb 1994
5289463 Mobasser Feb 1994
5289470 Chang et al. Feb 1994
5291481 Doshi et al. Mar 1994
5291482 McHarg et al. Mar 1994
5295134 Yoshimura et al. Mar 1994
5301055 Bagchi et al. Apr 1994
5301184 Uriu et al. Apr 1994
5301190 Tsukuda et al. Apr 1994
5301193 Toyofuku et al. Apr 1994
5303232 Faulk, Jr. Apr 1994
5305311 Lyles Apr 1994
5309431 Tominaga et al. May 1994
5309438 Nakajima May 1994
5311586 Bogart et al. May 1994
5313454 Bustini et al. May 1994
5313458 Suzuki May 1994
5315586 Charvillat May 1994
5315591 Brent et al. May 1994
5319638 Lin Jun 1994
5321695 Proctor et al. Jun 1994
5323389 Bitz et al. Jun 1994
5333131 Tanabe et al. Jul 1994
5333134 Ishibashi et al. Jul 1994
5335222 Kamoi et al. Aug 1994
5335325 Frank et al. Aug 1994
5339310 Taniguchi Aug 1994
5339317 Tanaka et al. Aug 1994
5339318 Tanaka et al. Aug 1994
5341366 Soumiya et al. Aug 1994
5341373 Ishibashi et al. Aug 1994
5341376 Yamashita Aug 1994
5345229 Olnowich et al. Sep 1994
5350906 Brody et al. Sep 1994
5355372 Sengupta et al. Oct 1994
5357506 Sugawara Oct 1994
5357507 Hughes et al. Oct 1994
5357508 Le Boudec et al. Oct 1994
5357510 Norizuki et al. Oct 1994
5359600 Ueda et al. Oct 1994
5361251 Aihara et al. Nov 1994
5361372 Rege et al. Nov 1994
5363433 Isono et al. Nov 1994
5365514 Hershey et al. Nov 1994
5371893 Price et al. Dec 1994
5373504 Tanaka et al. Dec 1994
5375117 Morita et al. Dec 1994
5377262 Bales et al. Dec 1994
5377327 Jain et al. Dec 1994
5379297 Glover et al. Jan 1995
5379418 Shimazaki et al. Jan 1995
5390170 Sawant et al. Feb 1995
5390174 Jugel Feb 1995
5390175 Hiller et al. Feb 1995
5392280 Zheng Feb 1995
5392402 Robrock, II Feb 1995
5394396 Yoshimura et al. Feb 1995
5394397 Yanagi et al. Feb 1995
5398235 Tsuzuki et al. Mar 1995
5400337 Munter Mar 1995
5402415 Turner Mar 1995
5412648 Fan May 1995
5414703 Sakaue et al. May 1995
5420858 Marshall et al. May 1995
5420988 Elliott May 1995
5422879 Parsons et al. Jun 1995
5425021 Derby et al. Jun 1995
5425026 Mori Jun 1995
5432713 Takeo et al. Jul 1995
5432784 Ozveren Jul 1995
5432785 Ahmed et al. Jul 1995
5432908 Heddes et al. Jul 1995
5436886 McGill Jul 1995
5436893 Barnett Jul 1995
5440547 Easki et al. Aug 1995
5444702 Burnett et al. Aug 1995
5446733 Tsuruoka Aug 1995
5446737 Cidon et al. Aug 1995
5446738 Kim et al. Aug 1995
5448559 Hayter et al. Sep 1995
5450406 Esaki et al. Sep 1995
5452296 Shizimu Sep 1995
5455820 Yamada Oct 1995
5455825 Lauer et al. Oct 1995
5457687 Newman Oct 1995
5459743 Fukuda et al. Oct 1995
5461611 Drake, Jr. et al. Oct 1995
5463620 Sriram Oct 1995
5465331 Yang et al. Nov 1995
5475679 Munter Dec 1995
5479401 Bitz et al. Dec 1995
5479402 Hata et al. Dec 1995
5483526 Ben-Nun et al. Jan 1996
5485453 Wahlman et al. Jan 1996
5485455 Dobbins et al. Jan 1996
5487063 Kakuma et al. Jan 1996
5488606 Kakuma et al. Jan 1996
5491691 Shtayer et al. Feb 1996
5491694 Oliver et al. Feb 1996
5493566 Ljungberg et al. Feb 1996
5497369 Wainwright Mar 1996
5499238 Shon Mar 1996
5504741 Yamanaka et al. Apr 1996
5504742 Kakuma et al. Apr 1996
5506834 Sekihata et al. Apr 1996
5506839 Hatta Apr 1996
5506956 Cohen Apr 1996
5509001 Tachibana et al. Apr 1996
5509007 Takashima et al. Apr 1996
5513134 Cooperman et al. Apr 1996
5513178 Tanaka Apr 1996
5513180 Miyake et al. Apr 1996
5515359 Zheng May 1996
5517495 Lund et al. May 1996
5519690 Suzuka et al. May 1996
5521905 Oda et al. May 1996
5521915 Dieudonne et al. May 1996
5521916 Choudhury et al. May 1996
5521917 Watanabe et al. May 1996
5521923 Willmann et al. May 1996
5523999 Takano et al. Jun 1996
5524113 Gaddis Jun 1996
5526344 Diaz et al. Jun 1996
5528588 Bennett et al. Jun 1996
5528590 Iidaka et al. Jun 1996
5528591 Lauer Jun 1996
5530695 Dighe et al. Jun 1996
5533009 Chen Jul 1996
5533020 Byrn et al. Jul 1996
5535196 Aihara et al. Jul 1996
5535197 Cotton Jul 1996
5537394 Abe et al. Jul 1996
5541912 Choudhury et al. Jul 1996
5544168 Jeffrey et al. Aug 1996
5544169 Norizuki et al. Aug 1996
5544170 Kasahara Aug 1996
5546389 Wippenbeck et al. Aug 1996
5546391 Hochschild et al. Aug 1996
5546392 Boal et al. Aug 1996
5550821 Akiyoshi Aug 1996
5550823 Irie et al. Aug 1996
5553057 Nakayama Sep 1996
5553068 Aso et al. Sep 1996
5555243 Kakuma et al. Sep 1996
5555265 Kakuma et al. Sep 1996
5557607 Holden Sep 1996
5568479 Watanabe et al. Oct 1996
5570361 Norizuki et al. Oct 1996
5570362 Nishimura Oct 1996
5572522 Calamvokis et al. Nov 1996
5577032 Sone et al. Nov 1996
5577035 Hayter et al. Nov 1996
5583857 Soumiya et al. Dec 1996
5583858 Hanaoka Dec 1996
5583861 Holden Dec 1996
5590132 Ishibashi et al. Dec 1996
5602829 Nie et al. Feb 1997
5610913 Tomonaga et al. Mar 1997
5623405 Isono Apr 1997
5625846 Kobayakawa et al. Apr 1997
5633861 Hanson et al. May 1997
Non-Patent Literature Citations (17)
Entry
An Ascom Timeplex White Paper, Meeting Critical Requirements with Scalable Enterprise Networking Solutions Based on a Unified ATM Foundation, pp. 1-12, Apr. 1994.
Douglas H. Hunt, ATM Traffic Management--Another Perspective, Business Communications Review, Jul. 1994.
Richard Bubenik et al., Leaf Initiated Join Extensions, Technical Committee, Signalling Subworking Group, ATM Forum/94-0325R1, Jul. 1, 1994.
Douglas H. Hunt et al., Flow Controlled Virtual Connections Proposal for ATM Traffic Management (Revision R2), Traffic Management Subworking Group, ATM Forum/94-0632R2, Aug. 1994.
Flavio Bonomi et al., The Rate-Based Flow Control Framework for the Available Bit Rate ATM Service, IEEE Network, Mar./Apr. 1995, pp. 25-39.
R. Jain, Myths About Congestion Management in High Speed Networks, Internetworking Research and Experience, vol. 3, 101-113 (1992).
Douglas H. Hunt et al., Credit-Based FCVC Proposal for ATM Traffic Management (Revision R1), ATM Forum Technical Committee Traffic Management Subworking Group, ATM Forum/94-0168R1, Apr. 28, 1994.
Douglas H. Hunt et al., Action Item Status for Credit-Based FCVC Proposal, ATM Forum Technical Committee Traffic Management Subworking Group, ATM Forum/94-0439, Apr. 28, 1994.
Timothy P. Donahue et al., Arguments in Favor of Continuing Phase 1 as the Initial ATM Forum P-NNI Routing Protocol Implementation, ATM Forum Technical Committee, ATM Forum/94-0460, Apr. 28, 1994.
Richard Bubenick et al., Leaf Initiated Join Extensions, Technical Committee, Signalling Subworking Group, ATM Forum/94-0325, Apr. 28, 1994.
Rob Coltun et al., PRP: A P-NNI Routing Protocol Proposal, ATM Forum Technical Committee, ATM Forum/94-0492, Apr. 28, 1994.
Richard Bubenik et al., Leaf Initiated Join Extensions, ATM Forum Technical Committee, Signalling Subworking Group, ATM Forum 94-0325, Apr. 28, 1994.
Richard Bubenik et al., Requirements For Phase 2 Signalling Protocol, ATM Forum Technical Committee, Signalling Subworking Group, ATM Forum 94-1078, Jan. 1, 1994.
SITA, ATM RFP: C-Overall Technical Requirements, Sep. 1994.
Hosein F. Badran and H. T. Mouftah, Head of Line Arbitration in ATM Switches with Input-Output Buffering and Backpressure Control, Globecom '91, pp. 0347-0351.
H.T. Kung and K. Chang, Receiver-Oriented Adaptive Buffer Allocation in Credit-Based Flow Control for ATM Networks, Proceedings of InfoCom '95, Apr. 2-6, 1995, pp. 1-14.
H.T. Kung, et al., Credit-Based Flow Control for ATM Networks: Credit Update Protocol, Adaptive Credit Allocation, and Statistical Multiplexing, Proceedings of ACM SigComm '94 Symposium on Communications Architectures, Protocols and Applications, Aug. 31-Sep. 2, 1994, pp. 1-14.