Load sharing across flows

Information

  • Patent Grant
  • 6111877
  • Patent Number
    6,111,877
  • Date Filed
    Wednesday, December 31, 1997
  • Date Issued
    Tuesday, August 29, 2000
Abstract
The invention provides a method and system for sharing packet traffic load among a plurality of possible paths. Each packet is associated with a flow, and a hash value is determined for each flow, so as to distribute the sequence of packets into a set of hash buckets. The hash value has a relatively large number of bits, but is divided by the number of possible paths so as to achieve a relatively small modulus value; the modulus value is used to index into a relatively small table associating one selected path with each entry. The modulus value is determined by a relatively small amount of circuitry, simultaneously for a plurality of moduli, and one such modulus value is selected in response to the number of possible paths.
Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to network routing.
2. Related Art
In routing packets in a network, a router sometimes has a choice of more than one path to a selected destination. When there is more than one path, there is a possibility that the router can distribute packet traffic among the paths, so as to reduce the aggregate packet traffic load on any one individual path. This concept is known in the art of network routing as "load sharing."
One problem that has arisen in the art is that sharing packet traffic among more than one such path can result in out-of-order arrival of packets at the destination device (or at an intermediate device on both paths to the destination device). Out-of-order arrival of packets is generally undesirable, as some protocols rely on packets arriving in the order they were sent.
Accordingly, it would be desirable to share packet traffic load among more than one such path, while maintaining the order in which the packets were sent in all cases where order matters. The invention provides load-sharing that is preferably performed on a per-flow basis, but possibly on a per-packet basis. A "flow" is a sequence of packets transmitted between a selected source and a selected destination, generally representing a single session using a known protocol. Each packet in a flow is expected to have identical routing and access control characteristics.
Flows are further described in detail in the following patent applications:
U.S. application Ser. No. 08/581,134, titled "Method For Traffic Management, Traffic Prioritization, Access Control, and Packet Forwarding in a Datagram Computer Network", filed Dec. 29, 1995, in the name of inventors David R. Cheriton and Andreas V. Bechtolsheim, assigned to Cisco Technology, Inc., attorney docket number CIS-019;
U.S. application Ser. No. 08/655,429, titled "Network Flow Switching and Flow Data Export", filed May 28, 1996, in the name of inventors Darren Kerr and Barry Bruins, and assigned to Cisco Technology, Inc., attorney docket number CIS-016; and
U.S. application Ser. No. 08/771,438, titled "Network Flow Switching and Flow Data Export", filed Dec. 20, 1996, in the name of inventors Darren Kerr and Barry Bruins, assigned to Cisco Technology, Inc., attorney docket number CIS-017.
PCT International Application PCT/US 96/20205, titled "Method For Traffic Management, Traffic Prioritization, Access Control, and Packet Forwarding in a Datagram Computer Network", filed Dec. 18, 1996, in the name of inventors David R. Cheriton and Andreas V. Bechtolsheim, and assigned to Cisco Technology, Inc., attorney docket number CIS-019 PCT; and
U.S. application Ser. No. 08/655,429, Express Mail Mailing No. EM053698725US, titled "Network Flow Switching and Flow Data Export", filed Jul. 2, 1997, in the name of inventors Darren Kerr and Barry Bruins, assigned to Cisco Technology, Inc.
These patent applications are collectively referred to herein as the "Netflow Switching Disclosures." Each of these applications is hereby incorporated by reference as if fully set forth herein.
However, one problem with sharing packet traffic load among more than one such path, whether on a per-packet basis or on a per-flow basis, is that the number of packets or the number of flows may not be evenly divisible by the number of such paths. In fact, with the number of packets or the number of flows continually changing, it would be difficult at best to maintain an even distribution of packets or flows into the number of such paths.
One response to this problem is to provide a hash function, to pseudo-randomly assign each packet or each flow to a hash value, and to share the packet traffic load among the paths in response to the hash value (such as by associating each hash table entry with a selected path). While this technique achieves the purpose of sharing the packet traffic load among more than one path to the destination, it has the drawback that packet traffic load is typically not evenly divided, particularly when the number of such paths is not a power of two.
For example, if there are three bits of hash value, thus providing eight possible hash values in all, but there are only five paths to the destination (or the weighted sum of desirable path loads is a multiple of five), the first five hash values would be evenly distributed among the paths, but the remaining three hash values would be unevenly distributed to three of the five possible paths.
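A minimal sketch (not from the patent text) of the imbalance described above: when eight 3-bit hash values are shared among five paths by simple truncation, three of the paths carry twice the load of the other two.

```python
# Illustrative only: eight hash values distributed over five paths.
from collections import Counter

NUM_PATHS = 5
bucket_counts = Counter(hash_value % NUM_PATHS for hash_value in range(8))
print(bucket_counts)
# Counter({0: 2, 1: 2, 2: 2, 3: 1, 4: 1}) -- paths 0-2 each carry 25%
# of the buckets instead of the ideal 20%.
```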
One response to this problem is to select a hash value with more bits, and thus with more possible values, so as to more evenly distribute packets or flows among the possible paths. While this method achieves the purpose of evenly distributing packet traffic load, it has the drawback of requiring a relatively large amount of memory for the associated hash table, an amount of memory which is relatively larger as the amount of desired load imbalance is reduced.
Accordingly, it would be advantageous to provide a method and system in which packet traffic can be relatively evenly divided among a plurality of possible paths, without requiring a relatively large amount of memory. This advantage is achieved in an embodiment of the invention which provides a hash value with a relatively large number of bits, but which provides for processing that hash value using the number of possible paths so as to associate that hash value with a selected path using a table having a relatively small number of entries. The processing can be performed rapidly in hardware using a relatively small amount of circuitry.
SUMMARY OF THE INVENTION
The invention provides a method and system for sharing packet traffic load among a plurality of possible paths. Each packet is associated with a flow, and a hash value is determined for each flow, so as to distribute the sequence of packets into a set of hash buckets. The hash value has a relatively large number of bits, but is divided by the number of possible paths so as to achieve a relatively small modulus value; the modulus value is used to index into a relatively small table associating one selected path with each entry.
In a preferred embodiment, the modulus value is determined by a relatively small amount of circuitry, simultaneously for a plurality of moduli, and one such modulus value is selected in response to the number of possible paths.





BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a block diagram of a system for sharing packet traffic load among a plurality of possible paths.
FIG. 2A shows a block diagram of a first distribution function for sharing packet traffic load.
FIG. 2B shows a block diagram of a computing element for the first distribution function.
FIG. 3A shows a block diagram of a second distribution function for sharing packet traffic load.
FIG. 3B shows a block diagram of a computing element for the second distribution function.
FIG. 4 shows a block diagram of a computing element for the modulus part of the first or second distribution function.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
In the following description, a preferred embodiment of the invention is described with regard to preferred process steps and data structures. Those skilled in the art would recognize after perusal of this application that embodiments of the invention can be implemented using circuits adapted to particular process steps and data structures described herein, and that implementation of the process steps and data structures described herein would not require undue experimentation or further invention.
Load-Sharing System Elements
FIG. 1 shows a block diagram of a system for sharing packet traffic load among a plurality of possible paths.
A system 100 for sharing packet traffic load includes a packet routing information source 110, a distribution function generator 120, a load-sharing table 130, and a set of output routing queues 140.
The packet routing information source 110 provides a set of routing information for an associated packet, to cause packets to be distributed for load-sharing in response to that routing information about the packet.
In a preferred embodiment, the routing information is responsive to a flow to which the associated packet belongs. Determining the flow to which a packet belongs is further described in the Netflow Switching Disclosures, hereby incorporated by reference. One problem with load-sharing is that some load-shared routes are relatively quicker or relatively slower than others, with the possible result that packets may arrive at their destinations out of the order in which they arrived at the router. Providing load-sharing responsive to the flow to which the packet belongs has the advantage that there is no negative consequence for packets to arrive out of order, because packet order is preserved within each flow.
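A minimal sketch, not the patent's hash function, of how a per-flow distribution value might be derived from the usual 5-tuple (the field names and the CRC-32 choice are illustrative assumptions); every packet of a flow maps to the same value and therefore to the same path.

```python
import zlib

def flow_hash_12bit(src_ip: str, dst_ip: str,
                    src_port: int, dst_port: int, protocol: int) -> int:
    # Combine the flow-identifying fields and keep 12 pseudo-random bits.
    key = f"{src_ip},{dst_ip},{src_port},{dst_port},{protocol}".encode()
    return zlib.crc32(key) & 0xFFF

print(flow_hash_12bit("10.0.0.1", "10.0.0.2", 1234, 80, 6))
```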
The distribution function generator 120 is coupled to the information source 110, and provides an index 121 into the load-sharing table 130, responsive to the information from the information source 110.
Table 1-1 shows a load-sharing error function, responsive to a number of paths to be load-shared and a number of entries in a pseudo-random distribution function.
TABLE 1-1
Error Function for Load Sharing Using Pseudo-Random Distribution Function
(* = Less Than 0.05%)

Table                          Number of Paths for Load-Sharing
Entries     3     4     5     6     7     8     9    10    11    12    13    14    15    16
4        16.7     0
8         8.3     0  15.0  16.7  10.7     0
16        4.2     0   5.0   8.3   8.9     0   9.7  15.0  17.0  16.7  14.4  10.7   5.8     0
32        2.1     0   3.8   4.2   5.4     0   6.9   5.0   2.8   8.3  10.1   8.9   5.4     0
64        1.0     0   1.2   2.1   1.3     0   1.4   3.8   2.6   4.2   1.4   5.4   4.6     0
128        .5     0    .9   1.0   1.1     0   1.2   1.2   2.0   2.1   1.3   1.3   2.9     0
256        .3     0    .3    .5    .7     0    .9    .9    .9   1.0   1.1   1.1    .4     0
512        .1     0    .2    .3    .2     0    .2    .3    .5    .5    .6    .7    .3     0
1024       .1     0    .1    .1    .1     0    .2    .2    .1    .3    .2    .2    .3     0
2048        *     0    .1    .1    .1     0    .1    .1    .1    .1    .2    .1    .2     0
4096        *     0     *     *     *     0     *    .1    .1    .1     *    .1     *     0
8192        *     0     *     *     *     0     *     *     *     *     *     *     *     0
16384       *     0     *     *     *     0     *     *     *     *     *     *     *     0
32768       *     0     *     *     *     0     *     *     *     *     *     *     *     0
65536       *     0     *     *     *     0     *     *     *     *     *     *     *     0
Table 1-1 cross-indexes the number of entries in the load-sharing table 130 against the number of output routing queues 140.
Because the number of output routing queues 140 does not exceed the number of entries in the load-sharing table 130, some entries in the upper right of table 1-1 are blank.
Numeric entries in table 1-1 show the fraction of traffic that is sent to the "wrong" output routing queue 140. For example, in the case where there are eight entries in the load-sharing table 130 and five output routing queues 140, each of the first three output routing queues 140 receives 25% (2/8), rather than 20% (1/5), of outgoing traffic. Each such output routing queue 140 is therefore 5% overused, for a total of 15%. This value is shown as the error function in table 1-1.
Table 1-1 shows that only about 4096 (2^12) entries in the load-sharing table 130 are needed to reduce the error function to 0.1% or less for all numbers of output routing queues 140. Accordingly, in a preferred embodiment, the distribution function generator 120 provides about 12 bits of pseudo-random output.
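A sketch of the error function tabulated in Table 1-1 (not code from the patent): with H table entries and N paths, H mod N paths receive ceil(H/N) entries, each carrying ceil(H/N)/H of the traffic instead of the ideal 1/N; the error is the total overuse.

```python
import math

def load_share_error(table_entries: int, num_paths: int) -> float:
    # Number of paths that receive one extra table entry.
    overloaded_paths = table_entries % num_paths
    # Excess traffic fraction carried by each overloaded path.
    excess_per_path = (math.ceil(table_entries / num_paths) / table_entries
                       - 1 / num_paths)
    return overloaded_paths * excess_per_path

print(round(100 * load_share_error(8, 5), 1))      # 15.0, the example above
print(round(100 * load_share_error(4096, 10), 2))  # about 0.06, under 0.1%
```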
In a preferred embodiment, the distribution function generator 120 includes a hash function that provides 12 bits of pseudo-random output.
Because there are no more than about 16 output routing queues 140, the index 121 need be no more than about four bits. Accordingly, in a preferred embodiment, the distribution function generator 120 includes a modulus element, responsive to the hash function, that provides three or four bits of output as the index 121.
The load-sharing table 130 is coupled to the index 121, and provides a pointer 131 to one of the output routing queues 140, responsive to the index 121.
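A minimal end-to-end sketch of the FIG. 1 elements (the table contents and sizes here are illustrative assumptions, not the patent's): the 12-bit distribution value is reduced modulo the table size, and the resulting index 121 selects a table entry that points at an output routing queue 140.

```python
NUM_QUEUES = 5
load_sharing_table = list(range(NUM_QUEUES))  # entry i points at queue i

def select_queue(distribution_value_12bit: int) -> int:
    index = distribution_value_12bit % len(load_sharing_table)
    return load_sharing_table[index]

print(select_queue(0xABC))  # 0xABC = 2748 -> index 3 -> queue 3
```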
Four-Bit Index Values
FIG. 2A shows a block diagram of a first distribution function generator 120 for sharing packet traffic load. FIG. 2B shows a block diagram of a computing element for the first distribution function generator 120.
In a first preferred embodiment, the distribution function generator 120 includes a hash function 210 that provides a 12-bit hash function output value 211. The hash function output value includes three 4-bit bytes 212, which are coupled to a plurality of remainder elements 220 as shown in FIG. 2A.
At a first stage of the distribution function generator 120, a most significant byte 212 and a second-most significant byte 212 of the output value 211 are coupled to eight input bits of a first remainder element 220. A size value 213 is also coupled as a selector input to the first remainder element 220, for selecting the divisor for which the remainder is calculated.
At a second stage of the distribution function generator 120, an output byte 212 from the first remainder element 220 and a least significant byte 212 of the output value 211 are coupled to eight input bits of a second remainder element 220. The size value 213 is also coupled as the divisor selector input to the second remainder element 220.
The index 121 is output from the second remainder element 220.
The remainder element 220 includes an input port 221, a plurality of remainder circuits 222, and a multiplexer 223.
The input port 221 is coupled to the 8-bit input for the remainder element 220.
The plurality of remainder circuits 222 includes one remainder circuit 222 for each possible divisor. In this first preferred embodiment where the hash function output value includes three 4-bit bytes 212, there are eight possible divisors from nine to 16. Divisors less than nine are emulated by doubling the divisor until it falls within the range nine to 16. Each remainder circuit 222 computes and outputs a remainder after division by its particular divisor.
The multiplexer 223 selects one of the outputs from the plurality of remainder circuits 222, responsive to the size value 213 input to the remainder element 220, and outputs its selection as the index 121.
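A sketch of the staged remainder computation of FIGS. 2A and 2B (and, with 3-bit chunks, FIGS. 3A and 3B); this is an arithmetic model, not the synthesized circuits. The hardware feeds the top two chunks into its first stage; processing one chunk per step, as below, is arithmetically equivalent. Divisors below the supported range are emulated by doubling, as described above.

```python
def staged_modulus(hash_value_12bit: int, divisor: int, chunk_bits: int = 4) -> int:
    smallest_supported = 2 ** chunk_bits // 2 + 1   # 9 for 4-bit chunks, 5 for 3-bit
    effective = divisor
    while effective < smallest_supported:
        effective *= 2                              # emulate small divisors by doubling
    remainder = 0
    # Consume the 12-bit value one chunk at a time, most significant first.
    for shift in range(12 - chunk_bits, -1, -chunk_bits):
        chunk = (hash_value_12bit >> shift) & (2 ** chunk_bits - 1)
        remainder = ((remainder << chunk_bits) | chunk) % effective
    return remainder % divisor                      # recover the original divisor's remainder

assert staged_modulus(0xABC, 13) == 0xABC % 13
assert staged_modulus(0xABC, 6, chunk_bits=3) == 0xABC % 6
```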
Table 2-1 shows a set of measured size and speed values for synthesized logic for computing the modulus function for 4-bit index values.
These values were obtained by synthesizing logic for each remainder circuit 222 using the "G10P Cell-Based ASIC" product, available from LSI Logic of Milpitas, Calif.
TABLE 2-1
Size and Speed for Synthesized Modulus Function Logic

Function    Time in Nanoseconds    Number of Gates
mod 9       2.42                   126
mod 10      2.27                    73
mod 11      2.44                   159
mod 12      1.04                    45
mod 13      2.50                   191
mod 14      2.28                    92
mod 15      1.42                    82
mod 16       .16                     5
As shown in table 2-1, the time in nanoseconds and the number of gates for each remainder circuit 222 is quite small.
Three-Bit Index Values
FIG. 3A shows a block diagram of a second distribution function for sharing packet traffic load. FIG. 3B shows a block diagram of a computing element for the second distribution function.
In a second preferred embodiment, the distribution function generator 120 includes a hash function 310 that provides a 12-bit hash function output value 311. The hash function output value includes four 3-bit bytes 312, which are coupled to a plurality of remainder elements 320 as shown in FIG. 3A.
At a first stage of the distribution function generator 120, a most significant byte 312 and a second-most significant byte 312 of the output value 311 are coupled to six input bits of a first remainder element 320. A size value 313 is also coupled as a divisor input to the first remainder element 320.
At a second stage of the distribution function generator 120, an output byte 312 from the first remainder element 320 and a next-most significant byte 312 of the output value 311 are coupled to six input bits of a second remainder element 320. The size value 313 is also coupled as the divisor input to the second remainder element 320.
At a third stage of the distribution function generator 120, an output byte 312 from the second remainder element 320 and a least significant byte 312 of the output value 311 are coupled to six input bits of a third remainder element 320. The size value 313 is also coupled as the divisor input to the third remainder element 320.
The index 121 is output from the third remainder element 320.
Similar to the remainder element 220, the remainder element 320 includes an input port 321, a plurality of remainder circuits 322, and a multiplexer 323.
Similar to the input port 221, the input port 321 is coupled to the 6-bit input for the remainder element 320.
Similar to the plurality of remainder circuits 222, the plurality of remainder circuits 322 includes one remainder circuit 322 for each possible divisor. In this second preferred embodiment where the hash function output value includes four 3-bit bytes 312, there are four possible divisors from five to eight. Divisors less than five are emulated by doubling the divisor until it falls within the range five to eight. Each remainder circuit 322 computes and outputs a remainder after division by its particular divisor.
Similar to the multiplexer 223, the multiplexer 323 selects one of the outputs from the plurality of remainder circuits 322, responsive to the size value 313 input to the remainder element 320, and outputs its selection as the index 121.
Table 3-1 shows a set of measured size and speed values for synthesized logic for computing the modulus function for 3-bit index values.
Similar to table 2-1, these values were obtained by synthesizing logic for each remainder circuit 322 using the "G10P Cell-Based ASIC" product, available from LSI Logic of Milpitas, Calif.
TABLE 3-1
Size and Speed for Synthesized Modulus Function Logic

Function    Time in Nanoseconds    Number of Gates
mod 5       1.99                    57
mod 6       1.52                    31
mod 7       1.10                    50
mod 8        .16                     4
As shown in table 3-1, the time in nanoseconds and the number of gates for each remainder circuit 322 is quite small.
Software Implementation
In a software implementation, in place of each remainder element 222 or remainder element 322, a processor performs a lookup into a modulus table having the modulus values resulting from the appropriate division. For example, to compute the modulus value for the remainder element 322 for division by six, the modulus table would have the values 0, 1, 2, 3, 4, and 5, repeated as many times as necessary to completely fill the table.
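A minimal sketch of the software alternative just described (sizes and names are illustrative assumptions): the remainder circuit for divide-by-six is replaced by a table filled with the values 0 through 5 repeated, indexed directly by the stage input.

```python
DIVISOR = 6
STAGE_INPUT_BITS = 6   # each stage of FIG. 3A consumes a 6-bit input

# 0, 1, 2, 3, 4, 5, 0, 1, ... repeated until the table is full.
modulus_table = [i % DIVISOR for i in range(2 ** STAGE_INPUT_BITS)]

def modulus_lookup(stage_input: int) -> int:
    return modulus_table[stage_input]   # one indexed lookup per stage

assert modulus_lookup(45) == 45 % 6
```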
Non-Equal-Cost Paths
When different data paths have unequal associated costs, some data paths can be associated with more than one entry in the load-sharing table 130. Each entry in the load-sharing table 130 can therefore be assigned an equivalent amount of load. For example, if three output data paths are OC-12 links, while one output data path is an OC-48 link, the OC-48 data path can be assigned four entries in the load-sharing table 130 because it has four times the capacity of the OC-12 data paths. Therefore, in this example, there would be seven entries in the load-sharing table 130 for just four different output data paths.
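A sketch of the unequal-cost case just described (path names are illustrative): each output data path receives a number of load-sharing table entries proportional to its capacity, so the OC-48 link gets four entries for every one entry of an OC-12 link.

```python
paths_and_weights = [("oc12-a", 1), ("oc12-b", 1), ("oc12-c", 1), ("oc48-d", 4)]

# Repeat each path in the table according to its weight.
load_sharing_table = [path for path, weight in paths_and_weights
                      for _ in range(weight)]

print(len(load_sharing_table))  # 7 entries for 4 output data paths
print(load_sharing_table)
# ['oc12-a', 'oc12-b', 'oc12-c', 'oc48-d', 'oc48-d', 'oc48-d', 'oc48-d']
```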
Modulus Element Using Free-Running Counter
FIG. 4 shows a block diagram of an alternative embodiment of a system for sharing packet traffic load among a plurality of possible paths.
A system 400 includes a packet routing information source 110, a distribution function generator 120, a load-sharing table 130, and a set of output routing queues 140. The distribution function generator 120 includes a hash function element 421, a free-running counter 422, a flow/packet multiplexer 423, and a modulus function element 424.
The flow/packet multiplexer 423 is coupled to a flow/packet select input 425 for selecting whether load-sharing is performed per-flow or per-packet. One of two operations is performed:
If the flow/packet select input 425 indicates load-sharing is performed per-flow, the flow/packet multiplexer 423 selects the output of the hash function element 421, and the modulus function element 424 distributes packets to the load-sharing table 130, and ultimately to the output routing queues 140, responsive to what flow the packet is associated with. Thus, all packets in the same flow are distributed to the same output routing queue 140.
If the flow/packet select input 425 indicates load-sharing is performed per-packet, the flow/packet multiplexer 423 selects the output of the free-running counter 422, and the modulus function element 424 distributes packets to the load-sharing table 130, and ultimately to the output routing queues 140, responsive to the raw order in which packets arrive. Thus, packets are effectively distributed uniformly in a round-robin manner among the possible output routing queues 140.
In a preferred embodiment, the free-running counter 422 produces a 12-bit unsigned integer output, and recycles back to zero when the maximum value is reached.
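A sketch of the FIG. 4 selection between per-flow and per-packet operation (the class and attribute names are illustrative assumptions): per-flow mode passes the flow hash through, so a flow always lands on one queue; per-packet mode uses the free-running 12-bit counter, spreading successive packets round-robin.

```python
class DistributionFunctionGenerator:
    def __init__(self, per_flow: bool = True):
        self.per_flow = per_flow
        self.counter = 0                             # free-running 12-bit counter

    def distribution_value(self, flow_hash_12bit: int) -> int:
        if self.per_flow:
            return flow_hash_12bit                   # same value for every packet of a flow
        value = self.counter
        self.counter = (self.counter + 1) & 0xFFF    # recycle to zero after 4095
        return value
```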
Alternative Embodiments
Although preferred embodiments are disclosed herein, many variations are possible which remain within the concept, scope, and spirit of the invention, and these variations would become clear to those skilled in the art after perusal of this application.
Claims
  • 1. A method for distributing a sequence of packets among a number N of data paths, said method including steps for
  • for each packet in said sequence, associating a distribution value therewith, said distribution value having a number of possible values well in excess of said number N;
  • for each said distribution value, determining a modulus of said distribution value with regard to said number N; and
  • sharing packet traffic load among a plurality of outgoing data paths in response to said modulus.
  • 2. A method as in claim 1, including steps for partitioning said sequence of packets into a plurality of flows; wherein said steps for associating include steps for associating a single distribution value for substantially all packets in one of said flows.
  • 3. A method as in claim 1, wherein said distribution value for each said packet is responsive to an address for a sender for said packet and an address for a destination for said packet.
  • 4. A method as in claim 1, wherein said distribution value for each said packet is responsive to a port for a sender for said packet and a port for a destination for said packet.
  • 5. A method as in claim 1, wherein said distribution value for each said packet is responsive to a protocol type for said packet.
  • 6. A method as in claim 1, wherein said distribution value includes a hash function value.
  • 7. A method as in claim 1, wherein said distribution value includes a pseudo-random value.
  • 8. A method as in claim 1, wherein said distribution value includes at least twice as many bits as needed to enumerate said number N.
  • 9. A method as in claim 1, wherein said steps for determining a modulus include steps for
  • determining a first modulus of a first portion of said distribution value;
  • determining a second modulus of a combination of said first modulus and a second portion of said distribution value.
  • 10. A method as in claim 1, wherein said steps for determining a modulus include steps for
  • determining a first modulus of said distribution value with regard to a first divisor;
  • determining a second modulus of said distribution value with regard to a second divisor; and
  • selecting between said first modulus and said second modulus in response to said number N.
  • 11. A method as in claim 1, wherein said steps for determining a modulus include steps for
  • simultaneously determining a plurality of modulus values for said distribution value; and
  • selecting among said plurality of modulus values in response to said number N.
  • 12. A method as in claim 1, wherein said steps for sharing packet traffic load include steps for indexing into a table in response to said modulus.
  • 13. A method as in claim 1, wherein said steps for sharing packet traffic load include steps for indexing into a table in response to said modulus, said table having fewer entries than twice said number N.
  • 14. A system for distributing a sequence of packets among a number N of data paths, said system including
  • a distribution value element coupled to each packet in said sequence, an output of said distribution value element having a number of possible values well in excess of said number N;
  • a modulus element coupled to said distribution value element; and
  • a load-sharing element responsive to an output of said modulus element.
  • 15. A system as in claim 14, wherein
  • said sequence of packets forms a plurality of flows; and
  • said distribution value element is operative to assign a single distribution value for substantially all packets in one of said flows.
  • 16. A system as in claim 14, wherein said distribution value element is responsive to an address for a sender for each said packet and an address for a destination for each said packet.
  • 17. A system as in claim 14, wherein said distribution value element is responsive to a port for a sender for each said packet and a port for a destination for each said packet.
  • 18. A system as in claim 14, wherein said distribution value element is responsive to a protocol type for each said packet.
  • 19. A system as in claim 14, wherein said distribution value element includes a hash function.
  • 20. A system as in claim 14, wherein said distribution value element includes a uniform distribution element.
  • 21. A system as in claim 14, wherein an output of said distribution value element includes at least twice as many bits as needed to enumerate said number N.
  • 22. A system as in claim 14, wherein said modulus element includes
  • a first modulus element coupled to a first portion of said distribution value;
  • a second modulus element coupled to an output of said first modulus element and to a second portion of said distribution value.
  • 23. A system as in claim 14, wherein said modulus element includes
  • a first modulus element coupled to said distribution value and to a first divisor;
  • a second modulus element coupled to said distribution value and to a second divisor; and
  • a selector coupled to said first modulus element and to said second modulus element.
  • 24. A system as in claim 14, wherein said modulus element includes
  • a plurality of modulus elements each coupled to said distribution value; and
  • a selector coupled to said plurality of modulus elements and to said number N.
  • 25. A system as in claim 14, wherein said load-sharing element includes an indexed table.
  • 26. A system as in claim 14, wherein said load-sharing element includes an indexed table, said table having fewer entries than twice said number N.
US Referenced Citations (194)
Number Name Date Kind
RE33900 Howson Apr 1992
4131767 Weinstein Dec 1978
4161719 Parikh et al. Jul 1979
4316284 Howson Feb 1982
4397020 Howson Aug 1983
4419728 Larson Dec 1983
4424565 Larson Jan 1984
4437087 Petr Mar 1984
4438511 Baran Mar 1984
4439763 Limb Mar 1984
4445213 Baugh et al. Apr 1984
4446555 Devault et al. May 1984
4456957 Schieltz Jun 1984
4464658 Thelen Aug 1984
4499576 Fraser Feb 1985
4506358 Montgomery Mar 1985
4507760 Fraser Mar 1985
4532626 Flores et al. Jul 1985
4644532 George et al. Feb 1987
4646287 Larson et al. Feb 1987
4677423 Benvenuto et al. Jun 1987
4679189 Olson et al. Jul 1987
4679227 Hughes-Hartogs Jul 1987
4713806 Oberlander et al. Dec 1987
4723267 Jones et al. Feb 1988
4731816 Hughes-Hartogs Mar 1988
4750136 Arpin et al. Jun 1988
4757495 Decker et al. Jul 1988
4763191 Gordon et al. Aug 1988
4769810 Eckberg, Jr. et al. Sep 1988
4769811 Eckberg, Jr. et al. Sep 1988
4771425 Baran et al. Sep 1988
4819228 Baran et al. Apr 1989
4827411 Arrowood et al. May 1989
4833706 Hughes-Hartogs May 1989
4835737 Herrig et al. May 1989
4879551 Georgiou et al. Nov 1989
4893306 Chao et al. Jan 1990
4903261 Baran et al. Feb 1990
4905233 Cain et al. Feb 1990
4922486 Lidinsky et al. May 1990
4933937 Konishi Jun 1990
4960310 Cushing Oct 1990
4962497 Ferenc et al. Oct 1990
4962532 Kasirai et al. Oct 1990
4965767 Kinoshita et al. Oct 1990
4965772 Daniel et al. Oct 1990
4970678 Sladowski et al. Nov 1990
4979118 Kheradpir Dec 1990
4980897 Decker et al. Dec 1990
4991169 Davis et al. Feb 1991
5003595 Collins et al. Mar 1991
5014265 Hahne et al. May 1991
5020058 Holden et al. May 1991
5033076 Jones et al. Jul 1991
5034919 Sassai et al. Jul 1991
5054034 Hughes-Hartogs Oct 1991
5059925 Weisbloom Oct 1991
5072449 Enns et al. Dec 1991
5088032 Bosack Feb 1992
5095480 Fenner Mar 1992
5115431 Williams et al. May 1992
5128945 Enns et al. Jul 1992
5136580 Videlock et al. Aug 1992
5166930 Braff et al. Nov 1992
5199049 Wilson Mar 1993
5206886 Bingham Apr 1993
5208811 Kashio et al. May 1993
5212686 Joy et al. May 1993
5224099 Corbalis et al. Jun 1993
5226120 Brown et al. Jul 1993
5228062 Bingham Jul 1993
5229994 Balzano et al. Jul 1993
5237564 Lespagnol et al. Aug 1993
5241682 Bryant et al. Aug 1993
5243342 Kattemalalavadi et al. Sep 1993
5243596 Port et al. Sep 1993
5247516 Bernstein et al. Sep 1993
5249178 Kurano et al. Sep 1993
5253251 Aramaki Oct 1993
5255291 Holden et al. Oct 1993
5260933 Rouse Nov 1993
5260978 Fleischer et al. Nov 1993
5268592 Bellamy et al. Dec 1993
5268900 Hluchyj et al. Dec 1993
5271004 Proctor et al. Dec 1993
5274631 Bhardwaj Dec 1993
5274635 Rahman et al. Dec 1993
5274643 Fisk Dec 1993
5280470 Buhrke et al. Jan 1994
5280480 Pitt et al. Jan 1994
5280500 Mazzola et al. Jan 1994
5283783 Nguyen et al. Feb 1994
5287103 Kasprzyk et al. Feb 1994
5287453 Roberts Feb 1994
5291482 McHarg et al. Mar 1994
5305311 Lyles Apr 1994
5307343 Bostica et al. Apr 1994
5309437 Perlman et al. May 1994
5311509 Heddes et al. May 1994
5313454 Bustini et al. May 1994
5313582 Hendel et al. May 1994
5317562 Nardin et al. May 1994
5319644 Liang Jun 1994
5327421 Hiller et al. Jul 1994
5331637 Francis et al. Jul 1994
5345445 Hiller et al. Sep 1994
5345446 Hiller et al. Sep 1994
5359592 Corbalis et al. Oct 1994
5361250 Nguyen et al. Nov 1994
5361256 Doeringer et al. Nov 1994
5361259 Hunt et al. Nov 1994
5365524 Hiller et al. Nov 1994
5367517 Cidon et al. Nov 1994
5371852 Attanasio et al. Dec 1994
5386567 Lien et al. Jan 1995
5390170 Sawant et al. Feb 1995
5390175 Hiller et al. Feb 1995
5394394 Crowther et al. Feb 1995
5394402 Ross Feb 1995
5400325 Chatwani et al. Mar 1995
5408469 Opher et al. Apr 1995
5416842 Aziz May 1995
5422880 Heitkamp et al. Jun 1995
5422882 Hiller et al. Jun 1995
5423002 Hart Jun 1995
5426636 Hiller et al. Jun 1995
5428607 Hiller et al. Jun 1995
5430715 Corbalis et al. Jul 1995
5430729 Rahnema Jul 1995
5442457 Najafi Aug 1995
5442630 Gagliardi et al. Aug 1995
5452297 Hiller et al. Sep 1995
5473599 Li et al. Dec 1995
5473607 Hausman et al. Dec 1995
5477541 White et al. Dec 1995
5485455 Dobbins et al. Jan 1996
5490140 Abensour et al. Feb 1996
5490258 Fenner Feb 1996
5491687 Christensen et al. Feb 1996
5491804 Heath et al. Feb 1996
5497368 Reijnierse et al. Mar 1996
5504747 Sweasey Apr 1996
5509006 Wilford et al. Apr 1996
5517494 Green May 1996
5519704 Farinacci et al. May 1996
5519858 Walton et al. May 1996
5526489 Nilakantan et al. Jun 1996
5530963 Moore et al. Jun 1996
5535195 Lee Jul 1996
5539734 Burwell et al. Jul 1996
5541911 Nilakantan et al. Jul 1996
5546370 Ishikawa Aug 1996
5555244 Gupta et al. Sep 1996
5561669 Lenney et al. Oct 1996
5583862 Callon Dec 1996
5592470 Rudrapatna et al. Jan 1997
5598581 Daines et al. Jan 1997
5600798 Cherukuri et al. Feb 1997
5602770 Ohira Feb 1997
5604868 Komine et al. Feb 1997
5608726 Virgile Mar 1997
5617417 Sathe et al. Apr 1997
5617421 Chin et al. Apr 1997
5630125 Zellweger May 1997
5631908 Saxe May 1997
5632021 Jennings et al. May 1997
5634010 Ciscon et al. May 1997
5638359 Peltola et al. Jun 1997
5644718 Belove et al. Jul 1997
5659684 Giovannoni et al. Aug 1997
5666353 Klausmeier et al. Sep 1997
5673265 Gupta et al. Sep 1997
5678006 Valizadeh et al. Oct 1997
5680116 Hashimoto et al. Oct 1997
5684797 Aznar et al. Nov 1997
5687324 Green et al. Nov 1997
5689506 Chiussi et al. Nov 1997
5694390 Yamato et al. Dec 1997
5724351 Chao et al. Mar 1998
5740097 Satche Apr 1998
5748186 Raman May 1998
5748617 McLain, Jr. May 1998
5754547 Nakazawa May 1998
5802054 Bellenger Sep 1998
5835710 Nagami et al. Nov 1998
5841874 Kempke et al. Nov 1998
5854903 Morrison et al. Dec 1998
5856981 Voelker Jan 1999
5892924 Lyon et al. Apr 1999
5898686 Virgile Apr 1999
5903559 Acharya et al. May 1999
5914953 Krause et al. Jun 1999
6011780 Vaman et al. Jan 2000
Foreign Referenced Citations (7)
Number Date Country
0 384 758 A2 Aug 1990 EPX
0 431 751 A1 Jun 1991 EPX
0 567 217 A2 Oct 1993 EPX
WO9307692 Apr 1993 WOX
WO9307569 Apr 1993 WOX
WO9401828 Jan 1994 WOX
WO9520850 Aug 1995 WOX
Non-Patent Literature Citations (12)
Entry
William Stallings, Data and Computer Communications, pp. 329-333, Prentice Hall, Upper Saddle River, New Jersey 07458.
Allen, M., "Novell IPX Over Various WAN Media (IPXW AN)," Network Working Group, RFC 1551, Dec. 1993, pp. 1-22.
Becker, D., "3c589.c: A 3c589 EtherLink3 ethernet driver for linux," becker@CESDIS.gsfc.nasa.gov, May 3, 1994, pp. 1-13.
Chowdhury, et al., "Alternative Bandwidth Allocation Algorithms for Packet Video in ATM Networks," INFOCOM 1992, pp. 1061-1068.
Doeringer, W., "Routing on Longest-Matching Prefixes," IEEE/ACM Transactions in Networking, vol. 4, No. 1, Feb. 1996, pp. 86-97.
Esaki, et al., "Datagram Delivery in an ATM-Internet," 2334b IEICE Transactions on Communications, Mar. 1994, No. 3, Tokyo, Japan.
IBM Corporation, "Method and Apparatus for the Statistical Multiplexing of Voice, Data and Image Signals," IBM Technical Disclosure Bulletin, No. 6, Nov. 1992, pp. 409-411.
Pei, et al., "Putting Routing Tables in Silicon," IEEE Network Magazine, Jan. 1992, pp. 42-50.
Perkins, D., "Requirements for an Internet Standard Point-to-Point Protocol," Network Working Group, RFC 1547, Dec. 1993, pp. 1-19.
Simpson, W., "The Point-to-Point Protocol (PPP)," Network Working Group, RFC 1548, Dec. 1993, pp. 1-53.
Tsuchiya, P.F., "A Search Algorithm for Table Entries with Non-Contiguous Wildcarding," Abstract, Bellcore.
Zhang, et al., "Rate-Controlled Static-Priority Queueing," INFOCOM 1993, pp. 227-236.