Reliable and flexible multicast mechanism for ATM networks

Information

  • Patent Grant
  • Patent Number
    5,991,298
  • Date Filed
    Friday, January 10, 1997
  • Date Issued
    Tuesday, November 23, 1999
Abstract
A method is disclosed for facilitating multicast operation in a network in which a data unit is multicast from a root node to a plurality of leaves via a plurality of branching point nodes in response to feedback processed at each branching point node. At least one cell forwarding technique is selected from a plurality of cell forwarding techniques at the respective branching point nodes. The cell forwarding techniques facilitate multicast operation by controlling forwarding and discard of multicast cells. The forwarding techniques are realized via use of a buffer ring in which cells are stored prior to forwarding. Manipulating head and tail pointers associated with the buffer ring allows for a plurality of desirable forwarding techniques.
Description

FIELD OF THE INVENTION
The present invention is generally related to Asynchronous Transfer Mode networks, and more particularly to multicasting within such networks.
BACKGROUND OF THE INVENTION
The advent of high-speed, cell-based, connection-oriented Asynchronous Transfer Mode ("ATM") networks creates a need for a reliable and flexible multicast mechanism that can support traditional LAN-based applications. Multicast functionality is required for implementation of "webcasting," routing, address-resolution and other inter-networking protocols. One of the early contributions to the ATM Forum, "LAN Emulation's Needs For Traffic Management" by Keith McCloghrie, ATM Forum 94-0533, described multipoint connections in support of multicasting as one of the high-level requirements for LAN emulation. Such requirements may be viewed as including at least the same level of performance from an emulated LAN as from a traditional LAN in all respects, including multicast capability.
Known techniques for implementing multicast generally fall into two categories: "slowest-leaf wins" and "best-effort delivery." Slowest-leaf wins implies that the slowest leaf of the multicast connection determines the progress of the entire connection. While this technique prevents cell loss, it may be undesirable from the point of view of public-carrier networks, where it is important to prevent an arbitrary end-system from controlling the performance of the network. "Best-effort delivery" implies that cells are dropped for leaves that are unable to maintain a predetermined pace. While this technique prevents an arbitrary leaf from controlling the performance of the network, dropping cells in order to maintain performance may also be undesirable, as for example with loss-sensitive transfers such as computer data transmission. While these techniques might be suitable for some multicast applications, neither technique provides a satisfactory multicast mechanism for the broad range of applications encountered in high-speed networks.
SUMMARY OF THE INVENTION
In a network where a data unit is multicast in a connection from a root node to a plurality of leaves via a plurality of branching point nodes, at least one forwarding technique selected from a plurality of forwarding techniques is implemented at each branching point node. The forwarding techniques facilitate multicast operation by controlling forwarding of multicast data units, and different connections may employ different forwarding techniques. Possible forwarding techniques that may be employed include a Prevent-Loss (PL) technique, a Prevent-Loss for Distinguished Subsets (PL(n)) technique, a Prevent-Loss for Variable Subset (p/n) technique, a K-Lag technique and a K-Lead technique.
Each branching point node includes a forwarding buffer which may be modeled as a ring buffer with two pointers: a head pointer and a tail pointer. A buffer system is also provided, and data units are stored in the buffer system upon receipt before entering the ring buffer. The tail pointer points to the first received and buffered data unit in the series of the most recently received data units from upstream that has not yet been forwarded to any branch. The head pointer points to the oldest data unit that needs to be forwarded downstream to any branch. Thus, the data units stored in the ring buffer between the head and tail pointers need to be forwarded to one or more branches. The tail pointer advances by one buffer in a counter-clockwise direction with the forwarding of the most recent data unit in the ring and the arrival of the next most recent data unit that has not yet been forwarded to any branch. The head pointer advances as needed to indicate the current oldest data unit in the ring buffer. The advancement of the head pointer depends on the particular forwarding technique chosen from a range of possible techniques, each of which provides a different service guarantee. Each buffer, i, in the ring has an associated counter called a reference count, r(i), that counts the number of branches to which the data unit in the buffer must be forwarded. If the index (i) increases from head pointer to tail pointer, then one may observe that r(i) is a monotonically increasing function. Each time the tail pointer moves to a new buffer, the corresponding reference count is set to n, the number of downstream branches being serviced by that reference counter. Each time the data unit is forwarded to one of those branches, the reference counter is decremented by one. When the reference counter corresponding to the buffer at the position of the head pointer reaches 0, the head pointer is advanced in the counter-clockwise direction to the next buffer with a non-zero reference counter. Under no circumstance is the head pointer advanced beyond the tail pointer.
Alternatively, there can be multiple instances of head and tail pointers with their associated reference counters, where each instance can service a different disjoint subset of branches, each with a possibly different service guarantee. In this case, it is important to advance the tail pointer only in relation to the head pointers from other subsets to avoid violating the service guarantees associated with those subsets of branches. In another alternative embodiment there can be a per-branch counter rather than a per-data unit buffer counter. The general principles of the mechanism remain the same. Different service guarantees can be supported by the above mechanism, such as a Prevent-Loss (PL) technique, a Prevent-Loss for Distinguished Subsets (PL(n)) technique, a Prevent-Loss for Variable Subset (p/n) technique, a K-Lag technique, a K-Lead technique and other guarantees.
The Prevent-Loss (PL) technique prevents data loss within a connection. In particular, the Prevent-Loss technique ensures that each data unit that is received is forwarded to each branch. However, if a branch performs poorly, the effect of this poor performance may eventually propagate towards the root node and thereby affect all of the branches.
The Prevent-Loss (PL) technique is realized by ensuring that the tail pointer never overtakes the head pointer under any circumstances. The head pointer can advance only after its data unit has been forwarded to each branch, i.e., only when the reference count has decreased to zero. This implements the Prevent-Loss technique.
The Prevent-Loss for Distinguished Subsets technique guarantees delivery of the multicast data unit to predetermined subsets of branches in the multicast connection. More particularly, transmission to a distinguished subset of branches is made according to the Prevent-Loss technique described above. Hence, there is no data loss in this distinguished subset. A non-distinguished subset of branches, consisting of the remaining branches in the connection, is not guaranteed delivery of the multicast data unit. Hence, the distinguished subsets of branches are insulated from possible poor performance by branches in the non-distinguished subset of branches.
The Prevent-Loss for Distinguished Subsets technique is implemented by using a separate set of reference counters for each distinguished subset of branches. The head pointer is advanced only after all members of the distinguished subset have been served. This makes it possible, for example, to provide a lossless service to the members of the distinguished subset of branches.
The K-Lag technique is realized by ensuring that the head pointer is advanced together with the tail pointer such that it is no more than a distance of K from the tail pointer. If the tail pointer is K data unit buffer positions ahead of the head pointer in the counter-clockwise direction and the data unit at the tail pointer has been forwarded to any branch, the tail pointer is advanced along with the head pointer one buffer position counter-clockwise, i.e., the data unit in the buffer position of the original head pointer is no longer guaranteed to be forwarded. It is possible that the data unit at the previous head pointer position may yet be forwarded before being overwritten by that at the tail pointer. In an alternative embodiment, such a data unit can be deleted and not forwarded any more. It should also be noted that K must be at least 1. Different values of K give rise to different levels of service.
In a variation of the K-Lag technique, K-EPD (early packet discard), each branch is allowed to lag by up to K data units, as set by an upper memory bound associated with the branch. If the tail pointer is K buffer positions ahead of the head pointer in the counter-clockwise direction and the head pointer is pointing to a data unit in the middle of the frame, then the head pointer is advanced up to the End Of Frame data unit or one position before the tail pointer, to prevent forwarding of all data units associated with the respective frame and thus avoid the waste of network bandwidth and resources.
In the K-Lead technique, the fastest branches cannot be ahead of any other branches by more than K data units. The K-Lead technique is realized by ensuring that the tail pointer is not advanced more than a distance of K from the head pointer. If the tail pointer is K data unit buffer positions ahead of the head pointer in the counter-clockwise direction, the data unit at the tail pointer may not be forwarded to any branch.
In the Prevent-Loss for Variable Subset (p/n) forwarding technique there is no data loss for p out of n branches. This is achieved as a variant of the PL technique. In this technique, the tail pointer is allowed to advance beyond the head pointer, as long as the corresponding reference counter is less than n-p. This ensures that a given data unit copy is delivered to at least p branches. It is possible that an unincluded branch may overflow at some point. However, the performance of the possibly overflowing branch does not affect the performance of branches in the subset of p branches. Hence, the technique provides protection for a subset of branches where the members of the subset are determined contemporaneously with the forwarding calculation.
BRIEF DESCRIPTION OF THE DRAWING
The invention will be more fully understood in view of the following Detailed Description of the Invention and Drawing, of which:
FIG. 1 is a block diagram of an ATM switch for providing reliable and flexible multicast;
FIG. 2 is a block diagram of a network topology which illustrates forwarding techniques;
FIG. 3 illustrates a ring buffer;
FIG. 4 is a flow diagram which illustrates a Prevent-Loss for Variable Subset (p/n) technique; and
FIG. 5 is a diagram which illustrates the K-EPD technique.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 illustrates a switch 10 for achieving reliable and flexible multicast functionality for an Asynchronous Transfer Mode ("ATM") network. The switch 10 has a plurality of input ports 12 and output ports 14, each of which may include an associated buffer such as a First-In First-Out ("FIFO") memory 16, 17, respectively. The input ports 12 and output ports 14 are interconnected by a switch fabric 18 such that a data unit 19, as for example a cell, frame or packet, entering any of the input ports 12 may be transmitted through the switch fabric 18 to any of the output ports 14. In particular, when one input port is using one or more output ports, any other input can use any unused output ports. The output FIFOs 17 may be sized to cope with latency such that the switch is non-blocking, i.e., each FIFO 16 at the input side may be sized to achieve a target bandwidth utilization given the round trip latencies affecting the control loop, and each output FIFO 17 may be sized to cope with such latencies in the data forwarding path. Further, the switch is provided with hardware multicast capability.
Referring now to FIG. 2, the multicast forwarding techniques will be described with regard to the illustrated tree network topology. The tree topology includes a root node 20, a plurality of branching point nodes ("branches") 22, 24 and a plurality of end-nodes, end-systems or leaves 26, 28, 30, 32. The end-systems may be hosts, routers, or switches that terminate the tree. The branching nodes 22, 24, which stem from the root node, are "parent" nodes having a plurality of "child" nodes stemming therefrom in the multicast tree. Data 34, such as cells, packets or frames, flow from the root node 20 to the branching point nodes 22, 24, and then to the leaves 26-32. Feedback updates 36 flow from the leaves to the branching point nodes in accordance with a point-to-point flow control technique as is known in the art. While illustrated with a single level of branching point nodes, the actual implementation may have multiple levels of branching point nodes. It should also be understood that the branching point nodes may feed either leaves, as illustrated, or other branching point nodes or combinations thereof depending upon the topology of the multicast tree.
Referring to FIGS. 2 and 3, at each branching point node, the forwarding buffer may be modeled as a ring buffer 35 with two pointers: a head pointer 37 and a tail pointer 39. An order retaining buffer system such as the FIFO 16 is also provided and data units are stored in the buffer system upon receipt before entering the ring buffer 35. The tail pointer 39 points to the first received and buffered data unit in the series of the most recently received data units from upstream that has not yet been forwarded to any branch. The head pointer 37 points to the oldest data unit that needs to be forwarded downstream to any branch. Thus, the data units stored in the ring buffer between the head and tail pointers need to be forwarded to one or more branches. The tail pointer advances by one buffer in a counter-clockwise direction with the forwarding of the most recent data unit in the ring and the arrival of the next most recent data unit that has not yet been forwarded to any branch. The head pointer advances as needed to indicate the current oldest data unit in the ring buffer. The advancement of the head pointer depends on the particular forwarding technique chosen from a range of possible techniques, each of which provides a different service guarantee. Each buffer, i, in the ring has an associated counter called a reference count, r(i), that counts the number of branches to which the data unit in the buffer must be forwarded. If the index (i) increases from head pointer to tail pointer, then one may observe that r(i) is a monotonically increasing function. Each time the tail pointer moves to a new buffer, the corresponding reference count is set to n, the number of downstream branches being serviced by that reference counter. Each time the data unit is forwarded to one of those branches, the reference counter is decremented by one. When the reference counter corresponding to the data unit buffer at the position of the head pointer reaches 0, the head pointer is advanced in the counter-clockwise direction to the next data unit buffer with a non-zero reference counter. Under no circumstance is the head pointer advanced beyond the tail pointer.
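For illustration only, the ring buffer bookkeeping described above may be sketched in C roughly as follows. All identifiers (ring_t, RING_SIZE, ring_accept, ring_forwarded) are assumptions introduced for this sketch rather than names from the patent; the per-technique head and tail rules are shown separately below.

    /* Illustrative sketch of the branching point ring buffer; all names
     * are assumed for illustration, not taken from the patent. */
    #define RING_SIZE 64              /* data unit buffers in the ring */

    typedef struct {
        void *unit[RING_SIZE];        /* buffered data units (e.g., cells) */
        int   refcnt[RING_SIZE];      /* r(i): branches still owed unit i */
        int   head;                   /* oldest unit still owed to a branch */
        int   tail;                   /* newest not-yet-forwarded unit */
        int   nbranches;              /* n: downstream branches served */
    } ring_t;

    /* A newly arrived data unit takes the slot one position
     * "counter-clockwise" of the tail; its reference count starts at n. */
    static void ring_accept(ring_t *r, void *du) {
        r->tail = (r->tail + 1) % RING_SIZE;
        r->unit[r->tail]   = du;
        r->refcnt[r->tail] = r->nbranches;
    }

    /* After slot i is forwarded to one branch, decrement r(i); when the
     * head slot's count reaches zero, the head advances to the next slot
     * with a non-zero count, never beyond the tail. */
    static void ring_forwarded(ring_t *r, int i) {
        if (r->refcnt[i] > 0)
            r->refcnt[i]--;
        while (r->head != r->tail && r->refcnt[r->head] == 0)
            r->head = (r->head + 1) % RING_SIZE;
    }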
In an alternative embodiment, there can be multiple instances of head and tail pointers with their associated reference counters where each instance can service a different disjoint subset of branches each with a possibly different service guarantee. In this case, it is important to only advance the tail pointer in relation to the head pointers from other subsets to avoid violating the service guarantee associated with those subsets of branches.
In yet another embodiment, there can be a per-branch counter rather than a per-data unit buffer counter. The general principles of the mechanism remain the same.
The root node 20 as well as the branching point nodes 22, 24 execute the multicast forwarding technique. Any one of a plurality of forwarding techniques may be employed at the branching point nodes to support service guarantees for the branches of the multipoint connection. The forwarding techniques function to control the forwarding of multicast data units, and may protect portions of the multicast tree from the effects of poorer performing portions. Forwarding techniques may include a Prevent-Loss (PL) technique, a Prevent-Loss for Distinguished Subsets (PL(n)) technique, a Prevent-Loss for Variable Subset (p/n) technique, a K-Lag technique, a K-Lead technique and techniques offering other service guarantees.
Prevent-Loss (PL) is a forwarding technique where data unit loss is prevented by adjusting transmission of multicast data units so that each branch receives a copy of each multicast data unit. Prevent-Loss is realized by ensuring that the tail pointer never overtakes the head pointer under any circumstances. If the tail pointer is one data unit buffer position behind the head pointer in the counter-clockwise direction (the ring is full), the data unit at the tail pointer may not be forwarded to any branch. This data unit may be forwarded only when the head pointer advances, i.e., when the reference count of the data unit at the head pointer becomes zero. If a branch performs poorly, the head pointer may therefore stall; depending on the duration of such poor performance, the effect may propagate towards the root of the multicast tree, eventually affecting all leaves.
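A minimal sketch of this rule, assuming the illustrative ring_t above: forwarding the unit at the tail (which is what allows the tail to advance) is simply refused while the ring is full.

    /* PL: the tail unit may not be forwarded while the tail sits one
     * position behind the head (ring full); it becomes eligible again
     * only once r(head) reaches zero and the head advances. */
    static int pl_may_forward_tail(const ring_t *r) {
        return (r->tail + 1) % RING_SIZE != r->head;
    }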
The Prevent-Loss for Distinguished Subsets technique guarantees delivery of the multicast data unit to predetermined subsets of branches in the multicast connection. More particularly, transmission to a distinguished subset of branches is made according to the Prevent-Loss technique described above. Hence, there is no data loss in this distinguished subset. A non-distinguished subset of branches, consisting of the remaining branches in the connection, is not guaranteed delivery of the multicast data unit. Hence, the distinguished subsets of branches are insulated from possible poor performance by branches in the non-distinguished subset of branches.
The Prevent-Loss for Distinguished Subsets technique is implemented by using a separate set of reference counters for each distinguished subset of branches. The head pointer is advanced only after all members of the distinguished subset have been served. This makes it possible, for example, to provide a lossless service to the members of the distinguished subset of branches. Further, it should be understood that there may be more than one distinguished subset.
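One plausible sketch of this bookkeeping, again over the assumed ring_t: each distinguished subset carries its own head pointer and reference counters, and only the heads of subsets served losslessly constrain the tail. The subset_t type and its field names are assumptions for illustration.

    /* Per-subset state: its own head pointer and reference counters. */
    typedef struct {
        int refcnt[RING_SIZE];   /* units still owed to this subset */
        int head;                /* oldest unit still owed to this subset */
        int members;             /* number of branches in this subset */
        int lossless;            /* 1 = served under the PL guarantee */
    } subset_t;

    /* The tail may advance only if it would not overrun the head of a
     * lossless (distinguished) subset, so slow non-distinguished
     * branches cannot hold the distinguished subsets back. */
    static int tail_may_advance(const ring_t *r, const subset_t *s, int nsub) {
        int next = (r->tail + 1) % RING_SIZE;
        for (int k = 0; k < nsub; k++)
            if (s[k].lossless && next == s[k].head)
                return 0;
        return 1;
    }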
In the Prevent-Loss for Variable Subset (p/n) forwarding technique there is no data loss for p out of n branches. This is achieved as a variant of the PL technique. In this technique, the tail pointer is allowed to advance beyond the head pointer, as long as the corresponding reference counter is less than n-p. This ensures that a given data unit copy is delivered to at least p branches. It is possible that an unincluded branch may overflow at some point. However, the performance of the possibly overflowing branch does not affect the performance of branches in the subset of p branches. Hence, the technique provides protection for a subset of branches where the members of the subset are determined contemporaneously with the forwarding calculation.
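The p/n tail-advance test can be sketched as below, mirroring the rule just stated; the function name is an assumption.

    /* p/n: the tail may overrun the head only while r(head) < n - p,
     * i.e., the head unit has already reached at least p of the n
     * branches, so the overrun loses nothing for those p branches. */
    static int pn_tail_may_advance(const ring_t *r, int p) {
        int next = (r->tail + 1) % RING_SIZE;
        if (next != r->head)
            return 1;             /* no overrun involved; always allowed */
        return r->refcnt[r->head] < r->nbranches - p;
    }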
One embodiment of the Variable Subset technique is illustrated in FIG. 4. In a first step 40 an integer p is entered, where p is a selectable input. At iteration n=0, inquiry is made in step 42 whether feedback updates have been received from at least p branches. Fewer feedback updates may be processed if a timeout occurs in step 44 before p feedback updates are collected. More feedback updates may be processed if a group of feedback updates pushing the total above p arrives contemporaneously. A processed update x(0) is then calculated in step 46 as the median of the gathered feedback updates. A variance v(0) is then calculated in step 50 for use as described below.
A FIFO queue 51 is maintained in the switch for each branch in the multicast connection. The fullness of such FIFO queues is indicative of absence of feedback updates from the associated branches. At an iteration n=m, feedback updates are obtained from p leaves. If at least one leaf fails to deliver a feedback update in the previous iteration (n-1) as determined in step 52, fewer than p leaves are utilized by adjusting the number of leaves required in step 54. Given the following definitions:
q(i) = fullness of the forwarding FIFO queue for branch i;
q(j,m) = fullness of the forwarding FIFO queue for branch j at iteration m;
x(m) = median of q(i,m) at iteration m;
v(m) = variance of q(i,m) at iteration m;
x(m,m-1) = median of q(i,m) over iterations m and m-1; and
v(m,m-1) = variance of q(i,m) over iterations m and m-1, then
an outlier (j) at time (n=m) is removed from the pool and is not to be waited for at n=m+1 if the following is true:
[q(j,m) - x(m)]^2 > v(m), or
[q(j,m) - x(m,m-1)]^2 > v(m,m-1), where
v(m) = Σ_i [(q(i) - x(m))/p]^2,
v(m-1,m-2) = Σ_i Σ_j [(q(i)q(j) - x(m-1,m-2))/p]^2,
x(m) = median of p updates, and
x(m-1,m-2) = median of p updates at both time instances.
However, if a feedback update is received from the previously silent node while updates are being gathered, then that feedback update is included in the calculation. Hence, v(m) is calculated in step 50, and any branches which have not provided a feedback update in the previous iteration, and thus should not be considered in the next iteration, are detected in step 52.
If all leaves were unresponsive in the previous "y" iterations, where "y" is a small number such as 1, 2 or 3, then the processed update is computed with fewer than p updates. If an update is received from a specified leaf, that update is added to the pool for processing. Further, if the outbound FIFO for a specified leaf or branch is stale (i.e., if that FIFO corresponds to a branch that is not being considered), then one of four actions may be taken: (A) drop the p earliest data units in the FIFO; (B) forward the p earliest data units in the FIFO; (C) drop the p latest data units in the FIFO; or (D) forward the p latest data units and drop the remaining data units in the FIFO. In cases C and D, the FIFO for the specified leaf is considered to be RESTARTED.
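The median/variance outlier test given above may be sketched as follows; the qsort-based median and all function names are assumptions made for brevity.

    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b) {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* x(m): median of the p gathered queue-fullness updates (sorts q). */
    static double median_of(int *q, int p) {
        qsort(q, p, sizeof q[0], cmp_int);
        return (p % 2) ? q[p / 2] : (q[p / 2 - 1] + q[p / 2]) / 2.0;
    }

    /* v(m) = sum over i of [(q(i) - x(m))/p]^2, per the formula above. */
    static double variance_of(const int *q, int p, double x) {
        double v = 0.0;
        for (int i = 0; i < p; i++) {
            double d = ((double)q[i] - x) / p;
            v += d * d;
        }
        return v;
    }

    /* Branch j is an outlier at iteration m if [q(j,m) - x(m)]^2 > v(m),
     * and is then removed from the pool and not waited for at m + 1. */
    static int is_outlier(int qjm, double x, double v) {
        double d = (double)qjm - x;
        return d * d > v;
    }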
Referring to FIGS. 2 and 5, in the K-Lag service, the slowest branches are guaranteed not to lag the fastest by more than K data units. The K-Lag service is realized by ensuring that the head pointer is advanced together with the tail pointer such that it is no more than a distance of K from the tail pointer. If the tail pointer is K data unit buffer positions ahead of the head pointer in the counter-clockwise direction and if the data unit at the tail pointer has been forwarded to any branch, then the tail pointer is advanced along with the head pointer one data unit buffer position counter-clockwise. In this circumstance the data unit in the buffer position of the original head pointer is no longer guaranteed to be forwarded. It is possible that the data unit at the previous head pointer position may yet be forwarded before being overwritten by that at the tail pointer. In an alternative embodiment, such a data unit can be deleted and not forwarded any more. It should also be noted that K must be at least 1. Different values of K give rise to different levels of service.
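A sketch of the K-Lag advance rule, reusing the assumed ring_t and ring_accept from the earlier sketch:

    /* K-Lag: invoked when the tail unit has been forwarded to some
     * branch and the next unit arrives; K must be at least 1. */
    static void klag_advance(ring_t *r, int K, void *next_du) {
        int gap = (r->tail - r->head + RING_SIZE) % RING_SIZE;
        if (gap >= K) {
            /* The old head unit is no longer guaranteed; the alternative
             * embodiment would discard it here (r->refcnt[r->head] = 0). */
            r->head = (r->head + 1) % RING_SIZE;
        }
        ring_accept(r, next_du);      /* advances the tail one position */
    }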
In a variation of the K-Lag technique, K-EPD (early packet discard), each branch is allowed to lag by up to K data units, as set by an upper memory bound associated with the leaf. If the tail pointer is K data unit buffer positions ahead of the head pointer in the counter-clockwise direction and the head pointer is pointing to a data unit in the middle of a frame, then the head pointer is advanced up to the End Of Frame data unit or one position before the tail pointer, to prevent forwarding of all data units associated with the respective frame and thus avoid the waste of network bandwidth and resources.
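A speculative sketch of the K-EPD head advance; is_eof() is an assumed per-data-unit predicate marking the End Of Frame data unit, which the patent does not name.

    /* K-EPD: when the K-Lag bound is reached mid-frame, skip the head
     * forward to the End Of Frame unit, or to one position before the
     * tail, so the remainder of the frame is never forwarded. */
    static void kepd_discard_frame(ring_t *r, int (*is_eof)(const void *)) {
        while (!is_eof(r->unit[r->head]) &&
               (r->head + 1) % RING_SIZE != r->tail) {
            r->refcnt[r->head] = 0;   /* remaining frame units dropped */
            r->head = (r->head + 1) % RING_SIZE;
        }
    }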
In the K-Lead technique, the fastest branches cannot be ahead of any other branches by more than K data units. The K-Lead technique is realized by ensuring that the tail pointer is not advanced more than a distance of K from the head pointer. If the tail pointer is K data unit buffer positions ahead of the head pointer in the counter-clockwise direction, the data unit at the tail pointer may not be forwarded to any branch.
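Finally, a sketch of the K-Lead test over the same assumed ring_t:

    /* K-Lead: the tail unit is eligible for forwarding (and hence the
     * tail for advancing) only while it is fewer than K positions ahead
     * of the head, so no branch leads the slowest by more than K. */
    static int klead_may_forward_tail(const ring_t *r, int K) {
        int lead = (r->tail - r->head + RING_SIZE) % RING_SIZE;
        return lead < K;
    }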
Having described the preferred embodiments of the invention, it will now become apparent to one of skill in the art that other embodiments incorporating the presently disclosed method and apparatus may be used. Accordingly, the invention should not be viewed as limited to the disclosed embodiments, but rather should be viewed as limited only by the spirit and scope of the appended claims.
Claims
  • 1. A method for forwarding a multicast data unit from a branching node to at least one of a plurality of coupled downstream nodes, comprising the steps of:
  • determining the eligibility of respective nodes to receive said multicast data unit based upon a specified multicast data unit forwarding technique selected from a plurality of multicast data unit forwarding techniques; and
  • forwarding said data unit to selected ones of said coupled downstream nodes based upon said specified data unit forwarding technique, wherein the selected ones of the coupled downstream nodes at least at some times form a subset of the coupled downstream nodes.
  • 2. The method of claim 1 wherein said eligibility determining step includes analyzing feedback updates received from respective downstream nodes.
  • 3. The method of claim 1 including the further step of employing a data unit forwarding ring buffer to execute said specified data unit forwarding technique.
  • 4. The method of claim 1 wherein the forwarding step includes the further step of employing a data unit forwarding technique in which no downstream node is allowed to lead ahead of another downstream node by more than K data units.
  • 5. The method of claim 1 wherein the forwarding step includes the further step of employing a data unit forwarding technique in which loss of data is prevented between the branching node and each downstream node in the multicast connection.
  • 6. A method for forwarding a multicast data unit from a branching node to at least one of a plurality of coupled downstream nodes, comprising the steps of:
  • determining the eligibility of respective nodes to receive said multicast data unit based upon a specified multicast data unit forwarding technique selected from a plurality of multicast data unit forwarding techniques; and
  • forwarding said data unit to selected ones of said coupled downstream nodes based upon said specified data unit forwarding technique, wherein the forwarding step includes the further step of employing a data unit forwarding technique in which loss of data units is prevented between the branching point node and a distinguished subset of downstream nodes in the multicast connection.
  • 7. The method of claim 6 including the further step of designating a non-distinguished subset of downstream nodes in the multicast connection which are not entirely protected from data loss.
  • 8. A method for forwarding a multicast data unit from a branching node to at least one of a plurality of coupled downstream nodes, comprising the steps of:
  • determining the eligibility of respective nodes to receive said multicast data unit based upon a specified multicast data unit forwarding technique selected from a plurality of multicast data unit forwarding techniques; and
  • forwarding said data unit to selected ones of said coupled downstream nodes based upon said specified data unit forwarding technique, wherein the forwarding step includes the further step of employing a technique in which loss of data is prevented between the branching node and a variable subset of (p) downstream nodes in the multicast connection having most recently provided a feedback update to the branching node.
  • 9. The method of claim 8 including the further step of including at least the (p) most recent feedback updates to determine the downstream nodes in the distinguished subset, where (p) is an input.
  • 10. A method for forwarding a multicast data unit from a branching node to at least one of a plurality of coupled downstream nodes, comprising the steps of:
  • determining the eligibility of respective nodes to receive said multicast data unit based upon a specified multicast data unit forwarding technique selected from a plurality of multicast data unit forwarding techniques; and
  • forwarding said data unit to selected ones of said coupled downstream nodes based upon said specified data unit forwarding technique, wherein the forwarding step includes the further step of employing a data unit forwarding technique in which no downstream node is allowed to lag behind another downstream node by more than K data units.
  • 11. The method of claim 10 wherein the data units are grouped into frames, and including the further step of discarding any remaining portion of a frame if a downstream node lags behind another downstream node by K data units while the frame is being transmitted.
  • 12. A multicast apparatus for transmitting multicast data units in a multicast connection from a branching node to a plurality of downstream nodes, comprising:
  • a memory in each downstream node for receiving multicast data units;
  • a circuit for providing information indicating fullness of the downstream node memory, the circuit providing such fullness information to the branching node in accordance with a point-to-point flow control technique; and
  • a circuit in each branching node for processing the fullness information in accordance with a data unit forwarding technique selectable from a plurality of data unit forwarding techniques; and
  • a forwarding circuit in each branching node for forwarding the multicast data unit to selected ones of the plurality of downstream nodes responsive to the selected data unit forwarding technique, wherein the selected ones of the downstream nodes at least at some times form a subset of the plurality of downstream nodes.
  • 13. The apparatus of claim 12 further including a circuit in each downstream node for providing feedback updates indicating memory fullness to the branching node.
  • 14. The apparatus of claim 12 wherein the branching point node includes a data unit forwarding ring buffer having a head pointer and a tail pointer.
  • 15. The apparatus of claim 12 wherein the branching point node further includes a buffer system for storing incoming data units prior to forwarding of such incoming data units from said buffer system to the ring buffer.
  • 16. The apparatus of claim 12 wherein the selected data unit forwarding technique prevents any downstream node from leading ahead of any other downstream node by more than K data units.
  • 17. The apparatus of claim 12 wherein the selected data unit forwarding technique prevents loss of data units between the branching node and each downstream node connected thereto.
  • 18. A multicast apparatus for transmitting multicast data units in a multicast connection from a branching node to a plurality of downstream nodes, comprising:
  • a memory in each downstream node for receiving multicast data units;
  • a circuit for providing information indicating fullness of the downstream node memory, the circuit providing such fullness information to the branching node in accordance with a point-to-point flow control technique; and
  • a circuit in each branching node for processing the fullness information in accordance with a data unit forwarding technique selectable from a plurality of data unit forwarding techniques, wherein the selected data unit forwarding technique prevents loss of data units between the branching point node and a distinguished subset of downstream nodes connected thereto.
  • 19. The apparatus of claim 18 wherein the selected data unit forwarding technique does not prevent loss for a non-distinguished subset of downstream nodes connected thereto.
  • 20. A multicast apparatus for transmitting multicast data units in a multicast connection from a branching node to a plurality of downstream nodes, comprising:
  • a memory in each downstream node for receiving multicast data units;
  • a circuit for providing information indicating fullness of the downstream node memory, the circuit providing such fullness information to the branching node in accordance with a point-to-point flow control technique; and
  • a circuit in each branching node for processing the fullness information in accordance with a data unit forwarding technique selectable from a plurality of data unit forwarding techniques, wherein the selected data unit forwarding technique prevents data unit loss for a variable subset (p) of downstream nodes connected thereto.
  • 21. The apparatus of claim 20 wherein the members of the variable subset (p) are the (p) downstream nodes most recently having provided memory fullness information to the branching node, where (p) is an input.
  • 22. A multicast apparatus for transmitting multicast data units in a multicast connection from a branching node to a plurality of downstream nodes, comprising:
  • a memory in each downstream node for receiving multicast data units;
  • a circuit for providing information indicating fullness of the downstream node memory, the circuit providing such fullness information to the branching node in accordance with a point-to-point flow control technique; and
  • a circuit in each branching node for processing the fullness information in accordance with a data unit forwarding technique selectable from a plurality of data unit forwarding techniques, wherein the selected data unit forwarding technique prevents any downstream node from lagging behind any other downstream node by more than K data units.
  • 23. The apparatus of claim 22 wherein the data units are grouped into frames, and wherein any portion of a frame remaining to be transmitted is discarded when a downstream node lags behind another downstream node by K data units while the frame is being transmitted.
CROSS-REFERENCE TO RELATED APPLICATIONS

A claim of priority is made to U.S. Provisional Patent Application No. 60/009,919 entitled A RELIABLE AND FLEXIBLE MULTICAST MECHANISM FOR ABR SERVICE IN ATM NETWORKS, filed Jan. 12, 1996.
