Load sharing across flows

Information

  • Patent Number
    6,603,765
  • Date Filed
    Friday, July 21, 2000
  • Date Issued
    Tuesday, August 5, 2003
Abstract
The invention provides a system and method for sharing packet traffic load among a plurality of possible paths. Each packet is associated with a flow, and a hash value is determined for each flow, so as to distribute the sequence of packets into a set of hash buckets. The hash value has a relatively large number of bits, but is divided by the number of possible paths so as to achieve a relatively small modulus value; the modulus value is used to index into a relatively small table associating one selected path with each entry. The modulus value is determined by a relatively small amount of circuitry, simultaneously for a plurality of moduli, and one such modulus value is selected in response to the number of possible paths.
Description




BACKGROUND OF THE INVENTION




1. Field of the Invention




This invention relates to network routing.




2. Related Art




In routing packets in a network, a router sometimes has a choice of more than one path to a selected destination. When there is more than one path, there is a possibility that the router can distribute packet traffic among the paths, so as to reduce the aggregate packet traffic load on any one individual path. This concept is known in the art of network routing as “load sharing.”




One problem that has arisen in the art is that sharing packet traffic among more than one such path can result in out-of-order arrival of packets at the destination device (or at an intermediate device on both paths to the destination device). Out-of-order arrival of packets is generally undesirable, as some protocols rely on packets arriving in the order they were sent.




Accordingly, it would be desirable to share packet traffic load among more than one such path, while maintaining the order in which the packets were sent in all cases where order matters. The invention provides load-sharing that is preferably performed on a per-flow basis, but possibly on a per-packet basis. A “flow” is a sequence of packets transmitted between a selected source and a selected destination, generally representing a single session using a known protocol. Each packet in a flow is expected to have identical routing and access control characteristics.




Flows are further described in detail in the following patent applications:




U.S. Application Ser. No. 08/581,134, titled “Method For Traffic Management, Traffic Prioritization, Access Control, and Packet Forwarding in a Datagram Computer Network”, filed Dec. 29, 1995, in the name of inventors David R. Cheriton and Andreas V. Bechtolsheim, assigned to Cisco Technology, Inc.;




U.S. Application Ser. No. 08/655,429, titled “Network Flow Switching and Flow Data Export”, filed May 28, 1996, in the name of inventors Darren Kerr and Barry Bruins, and assigned to Cisco Technology, Inc.;




U.S. Application Ser. No. 08/771,438, titled “Network Flow Switching and Flow Data Export”, filed Dec. 20, 1996, in the name of inventors Darren Kerr and Barry Bruins, assigned to Cisco Technology, Inc.;




PCT International Application PCT/US 96/20205, titled “Method For Traffic Management, Traffic Prioritization, Access Control, and Packet Forwarding in a Datagram Computer Network”, filed Dec. 18, 1996, in the name of inventors David R. Cheriton and Andreas V. Bechtolsheim, and assigned to Cisco Technology, Inc.; and




Ser. No. 08/0655,429 Express Mail Mailing No. EM053698725US, titled “Network Flow Switching and Flow Data Export”, filed Jul. 2, 1997, in the name of inventors Darren Kerr and Barry Bruins, assigned to Cisco Technology, Inc.




These patent applications are collectively referred to herein as the “Netflow Switching Disclosures.” Each of these applications is hereby incorporated by reference as if fully set forth herein.




However, one problem with sharing packet traffic load among more than one such path, whether on a per-packet basis or on a per-flow basis, is that the number of packets or the number of flows may not be evenly divisible by the number of such paths. In fact, with the number of packets or the number of flows continually changing, it would be difficult at best to maintain an even distribution of packets or flows into the number of such paths.




One response to this problem is to provide a hash function, to pseudo-randomly assign each packet or each flow to a hash value, and to share the packet traffic load among the paths in response to the hash value (such as by associating each hash table entry with a selected path). While this technique achieves the purpose of sharing the packet traffic load among more than one path to the destination, it has the drawback that packet traffic load is typically not evenly divided, particularly when the number of such paths is not a power of two.




For example, if there are three bits of hash value, thus providing eight possible hash values in all, but there are only five paths to the destination (or the weighted sum of desirable path loads is a multiple of five), the first five hash values would be evenly distributed among the paths, but the remaining three hash values would be unevenly distributed to three of the five possible paths.




One response to this problem is to select a hash value with more bits, and thus with more possible values, so as to more evenly distribute packets or flows among the possible paths. While this method achieves the purpose of evenly distributing packet traffic load, it has the drawback of requiring a relatively large amount of memory for the associated hash table, an amount of memory which is relatively larger as the amount of desired load imbalance is reduced.




Accordingly, it would be advantageous to provide a method and system in which packet traffic can be relatively evenly divided among a plurality of possible paths, without requiring a relatively large amount of memory. This advantage is achieved in an embodiment of the invention which provides a hash value with a relatively large number of bits, but which provides for processing that hash value using the number of possible paths so as to associate that hash value with a selected path using a table having a relatively small number of entries. The processing can be performed rapidly in hardware using a relatively small amount of circuitry.
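As a concrete illustration of this approach, the following C sketch (hypothetical names; the 12-bit hash width and five paths are chosen only for illustration) reduces a wide pseudo-random hash value modulo the number of table entries and uses the result to index a small table naming the selected path:

    #include <stdio.h>
    #include <stdint.h>

    #define NUM_PATHS 5   /* number of possible paths to the destination */

    /* Map a wide pseudo-random hash value onto one of a small number of
     * paths: the modulus folds the 12-bit hash into a small index, and the
     * index selects an entry of a small table naming the chosen path.     */
    static int select_path(uint32_t hash12, const int *path_table, int entries)
    {
        return path_table[hash12 % entries];
    }

    int main(void)
    {
        int path_table[NUM_PATHS] = { 0, 1, 2, 3, 4 };   /* one path per entry */
        uint32_t sample_hashes[] = { 0x000, 0x3A7, 0x7FF, 0xC01, 0xFFF };

        for (int i = 0; i < 5; i++)
            printf("hash 0x%03x -> path %d\n",
                   (unsigned)sample_hashes[i],
                   select_path(sample_hashes[i], path_table, NUM_PATHS));
        return 0;
    }

In the preferred embodiments described below, the modulus is computed by small dedicated circuits rather than by a general divider.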




SUMMARY OF THE INVENTION




The invention provides a method and system for sharing packet traffic load among a plurality of possible paths. Each packet is associated with a flow, and a hash value is determined for each flow, so as to distribute the sequence of packets into a set of hash buckets. The hash value has a relatively large number of bits, but is divided by the number of possible paths so as to achieve a relatively small modulus value; the modulus value is used to index into a relatively small table associating one selected path with each entry.




In a preferred embodiment, the modulus value is determined by a relatively small amount of circuitry, simultaneously for a plurality of moduli, and one such modulus value is selected in response to the number of possible paths.











BRIEF DESCRIPTION OF THE DRAWINGS





FIG. 1 shows a block diagram of a system for sharing packet traffic load among a plurality of possible paths.





FIG. 2A shows a block diagram of a first distribution function for sharing packet traffic load.

FIG. 2B shows a block diagram of a computing element for the first distribution function.





FIG. 3A shows a block diagram of a second distribution function for sharing packet traffic load.

FIG. 3B shows a block diagram of a computing element for the second distribution function.





FIG. 4 shows a block diagram of a computing element for the modulus part of the first or second distribution function.











DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT




In the following description, a preferred embodiment of the invention is described with regard to preferred process steps and data structures. Those skilled in the art would recognize after perusal of this application that embodiments of the invention can be implemented using circuits adapted to particular process steps and data structures described herein, and that implementation of the process steps and data structures described herein would not require undue experimentation or further invention.




Load-Sharing System Elements





FIG. 1 shows a block diagram of a system for sharing packet traffic load among a plurality of possible paths.




A system 100 for sharing packet traffic load includes a packet routing information source 110, a distribution function generator 120, a load-sharing table 130, and a set of output routing queues 140.




The packet routing information source 110 provides a set of routing information for an associated packet, to cause packets to be distributed for load-sharing in response to that routing information about the packet.




In a preferred embodiment, the routing information is responsive to a flow to which the associated packet belongs. Determining the flow to which a packet belongs is further described in the Netflow Switching Disclosures, hereby incorporated by reference. One problem with load-sharing is that some load-shared routes are relatively quicker or relatively slower than others, with the possible result that packets may arrive at their destinations out of the order in which they arrived at the router. Providing load-sharing responsive to the flow to which the packet belongs has the advantage that packet order is preserved within each flow, so any out-of-order arrival across different flows has no negative consequence.
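The exact fields that define a flow are given in the Netflow Switching Disclosures; as a hedged sketch, assuming the familiar IP 5-tuple identifies a flow, a per-flow hash might look like the following (names are illustrative only):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical flow identifier.  The fields that actually define a flow
     * are specified in the Netflow Switching Disclosures; this struct merely
     * assumes the familiar IP 5-tuple for illustration.                     */
    struct flow_key {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  protocol;
    };

    /* Fold the flow key into 12 pseudo-random bits.  Every packet of a flow
     * carries the same key, so every packet of the flow hashes to the same
     * value and is therefore routed over the same path.                     */
    static uint32_t flow_hash12(const struct flow_key *k)
    {
        uint32_t h = k->src_ip ^ (k->dst_ip * 2654435761u);
        h ^= ((uint32_t)k->src_port << 16) | k->dst_port;
        h ^= k->protocol;
        h *= 2654435761u;            /* simple multiplicative mixing step */
        return (h >> 20) & 0xFFF;    /* 12 bits, matching the hash width used later */
    }

    int main(void)
    {
        struct flow_key k = { 0x0A000001, 0x0A000002, 1024, 80, 6 /* TCP */ };
        printf("flow hashes to bucket 0x%03x\n", (unsigned)flow_hash12(&k));
        return 0;
    }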




The distribution function generator 120 is coupled to the information source 110, and provides an index 121 into the load-sharing table 130, responsive to the information from the information source 110.




Table 1-1 shows a load-sharing error function, responsive to a number of paths to be load-shared and a number of entries in a pseudo-random distribution function.












TABLE 1-1

Error Function for Load Sharing Using Pseudo-Random Distribution Function

Number of
Entries in
Load-Sharing               Number of Paths for Load-Sharing
Table         3     4     5     6     7     8     9    10    11    12    13    14    15    16

    4      16.7     0
    8       8.3     0  15.0  16.7  10.7     0
   16       4.2     0   5.0   8.3   8.9     0   9.7  15.0  17.0  16.7  14.4  10.7   5.8     0
   32       2.1     0   3.8   4.2   5.4     0   6.9   5.0   2.8   8.3  10.1   8.9   5.4     0
   64       1.0     0   1.2   2.1   1.3     0   1.4   3.8   2.6   4.2   1.4   5.4   4.6     0
  128        .5     0    .9   1.0   1.1     0   1.2   1.2   2.0   2.1   1.3   1.3   2.9     0
  256        .3     0    .3    .5    .7     0    .9    .9    .9   1.0   1.1   1.1    .4     0
  512        .1     0    .2    .3    .2     0    .2    .3    .5    .5    .6    .7    .3     0
 1024        .1     0    .1    .1    .1     0    .2    .2    .1    .3    .2    .2    .3     0
 2048         *     0    .1    .1    .1     0    .1    .1    .1    .1    .2    .1    .2     0
 4096         *     0     *     *     *     0     *    .1    .1    .1     *    .1     *     0
 8192         *     0     *     *     *     0     *     *     *     *     *     *     *     0
16384         *     0     *     *     *     0     *     *     *     *     *     *     *     0
32768         *     0     *     *     *     0     *     *     *     *     *     *     *     0
65536         *     0     *     *     *     0     *     *     *     *     *     *     *     0

(* = Less Than 0.05%)













Table 1-1 cross-indexes the number of entries in the load-sharing table 130 against the number of output routing queues 140.




Because the number of output routing queues 140 does not exceed the number of entries in the load-sharing table 130, some entries in the upper right of table 1-1 are blank.




Numeric entries in table 1-1 show the fraction of traffic that is sent to the “wrong” output routing queue 140. For example, in the case where there are eight entries in the load-sharing table 130 and five output routing queues 140, each of the first three output routing queues 140 receives 25% (2/8), rather than 20% (1/5), of outgoing traffic. Each such output routing queue 140 is therefore 5% overused, for a total of 15%. This value is shown as the error function in table 1-1.
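The tabulated error can be reproduced with a short calculation, assuming (as Table 1-1 does) that the hash spreads traffic evenly over the table entries. The following C sketch, with illustrative names, sums the excess share carried by the paths that receive one extra table entry:

    #include <stdio.h>

    /* Overuse ("error") for a table of 'entries' slots shared by 'paths'
     * paths, assuming traffic is spread evenly over the table entries: the
     * paths holding one extra entry each carry a little more than their
     * ideal 1/paths share, and the sum of that excess is the tabulated
     * error.                                                              */
    static double load_share_error(int entries, int paths)
    {
        int extra = entries % paths;       /* paths holding one extra entry  */
        int base  = entries / paths;       /* entries per path, rounded down */
        double overuse_per_path =
            (double)(base + 1) / entries - 1.0 / paths;
        return 100.0 * extra * overuse_per_path;   /* percent, as in Table 1-1 */
    }

    int main(void)
    {
        /* Reproduces, e.g., 15.0 for 8 entries and 5 paths, and well under
         * 0.05 for 4096 entries and 7 paths (shown as "*" in the table).   */
        printf("8 entries, 5 paths:    %.1f%%\n", load_share_error(8, 5));
        printf("4096 entries, 7 paths: %.2f%%\n", load_share_error(4096, 7));
        return 0;
    }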




Table 1-1 shows that only about 4096 (2^12) entries in the load-sharing table 130 are needed to reduce the error function to 0.1% or less for every number of output routing queues 140. Accordingly, in a preferred embodiment, the distribution function generator 120 provides about 12 bits of pseudo-random output.




In a preferred embodiment, the distribution function generator 120 includes a hash function that provides 12 bits of pseudo-random output.




Because there are no more than about 16 output routing queues 140, the index 121 need be no more than about four bits. Accordingly, in a preferred embodiment, the distribution function generator 120 includes a modulus element, responsive to the hash function, that provides three or four bits of output as the index 121.




The load-sharing table 130 is coupled to the index 121, and provides a pointer 131 to one of the output routing queues 140, responsive to the index 121.




Four-Bit Index Values





FIG. 2A shows a block diagram of a first distribution function generator 120 for sharing packet traffic load.

FIG. 2B shows a block diagram of a computing element for the first distribution function generator 120.




In a first preferred embodiment, the distribution function generator 120 includes a hash function 210 that provides a 12-bit hash function output value 211. The hash function output value includes three 4-bit bytes 212, which are coupled to a plurality of remainder elements 220 as shown in FIG. 2A.




At a first stage of the distribution function generator 120, a most significant byte 212 and a second-most significant byte 212 of the output value 211 are coupled to eight input bits of a first remainder element 220. A size value 213 is also coupled as a selector input to the first remainder element 220, for selecting the divisor for which the remainder is calculated.




At a second stage of the distribution function generator 120, an output byte 212 from the first remainder element 220 and a least significant byte 212 of the output value 211 are coupled to eight input bits of a second remainder element 220. The size value 213 is also coupled as the divisor selector input to the second remainder element 220.




The index 121 is output from the second remainder element 220.




The remainder element 220 includes an input port 221, a plurality of remainder circuits 222, and a multiplexer 223.




The input port 221 is coupled to the 8-bit input for the remainder element 220.




The plurality of remainder circuits 222 includes one remainder circuit 222 for each possible divisor. In this first preferred embodiment where the hash function output value includes three 4-bit bytes 212, there are eight possible divisors from nine to 16. Divisors less than nine are emulated by doubling the divisor until it falls within the range nine to 16. Each remainder circuit 222 computes and outputs a remainder after division by its particular divisor.




The multiplexer 223 selects one of the outputs from the plurality of remainder circuits 222, responsive to the size value 213 input to the remainder element 220, and outputs its selection as the index 121.
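A software model of this cascade may clarify why two small stages suffice. The sketch below (assumed names, not the patent's circuitry) reduces a 12-bit value one 4-bit byte at a time, relying on the identity (a*16 + b) mod d = ((a mod d)*16 + b) mod d, so that each stage only ever divides an 8-bit quantity; it checks the staged result against a direct modulus for every supported divisor:

    #include <assert.h>

    /* Two-stage remainder reduction of a 12-bit hash value, mirroring the
     * cascade of remainder elements 220 in FIG. 2A.  In hardware, each
     * "% divisor" below is one of the small per-divisor remainder circuits
     * 222 selected by the size value 213.                                  */
    static unsigned index_from_hash(unsigned hash12, unsigned divisor /* 9..16 */)
    {
        unsigned b2 = (hash12 >> 8) & 0xF;   /* most significant 4-bit byte    */
        unsigned b1 = (hash12 >> 4) & 0xF;   /* second-most significant byte   */
        unsigned b0 = hash12 & 0xF;          /* least significant byte         */

        unsigned r1 = ((b2 << 4) | b1) % divisor;   /* first remainder element  */
        return ((r1 << 4) | b0) % divisor;          /* second remainder element */
    }

    int main(void)
    {
        /* The cascade agrees with a direct 12-bit modulus for every divisor
         * in the supported range (smaller divisors are first doubled into
         * the range nine to 16, as the text describes).                     */
        for (unsigned d = 9; d <= 16; d++)
            for (unsigned h = 0; h < 4096; h++)
                assert(index_from_hash(h, d) == h % d);
        return 0;
    }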




Table 2-1 shows a set of measured size and speed values for synthesized logic for computing the modulus function for 4-bit index values.




These values were obtained by synthesizing logic for each remainder element 222 using the “G10P Cell-Based ASIC” product, available from LSI Logic of Milpitas, Calif.












TABLE 2-1

Size and Speed for Synthesized Modulus Function Logic

Function    Time in Nanoseconds    Number of Gates

mod 9              2.42                 126
mod 10             2.27                  73
mod 11             2.44                 159
mod 12             1.04                  45
mod 13             2.50                 191
mod 14             2.28                  92
mod 15             1.42                  82
mod 16              .16                   5














As shown in table 2-1, the time in nanoseconds and the number of gates for each remainder circuit 222 are quite small.




Three-Bit Index Values





FIG. 3A shows a block diagram of a second distribution function for sharing packet traffic load.

FIG. 3B shows a block diagram of a computing element for the second distribution function.




In a second preferred embodiment, the distribution function generator 120 includes a hash function 310 that provides a 12-bit hash function output value 311. The hash function output value includes four 3-bit bytes 312, which are coupled to a plurality of remainder elements 320 as shown in FIG. 3A.




At a first stage of the distribution function generator 120, a most significant byte 312 and a second-most significant byte 312 of the output value 311 are coupled to six input bits of a first remainder element 320. A size value 313 is also coupled as a divisor input to the first remainder element 320.




At a second stage of the distribution function generator 120, an output byte 312 from the first remainder element 320 and a next-most significant byte 312 of the output value 311 are coupled to six input bits of a second remainder element 320. The size value 313 is also coupled as the divisor input to the second remainder element 320.




At a third stage of the distribution function generator 120, an output byte 312 from the second remainder element 320 and a least significant byte 312 of the output value 311 are coupled to six input bits of a third remainder element 320. The size value 313 is also coupled as the divisor input to the third remainder element 320.




The index 121 is output from the third remainder element 320.




Similar to the remainder element 220, the remainder element 320 includes an input port 321, a plurality of remainder circuits 322, and a multiplexer 323.




Similar to the input port 221, the input port 321 is coupled to the 6-bit input for the remainder element 320.




Similar to the plurality of remainder circuits 222, the plurality of remainder circuits 322 includes one remainder circuit 322 for each possible divisor. In this second preferred embodiment where the hash function output value includes four 3-bit bytes 312, there are four possible divisors from five to eight. Divisors less than five are emulated by doubling the divisor until it falls within the range five to eight. Each remainder circuit 322 computes and outputs a remainder after division by its particular divisor.




Similar to the multiplexer 223, the multiplexer 323 selects one of the outputs from the plurality of remainder circuits 322, responsive to the size value 313 input to the remainder element 320, and outputs its selection as the index 121.




Table 3-1 shows a set of measured size and speed values for synthesized logic for computing the modulus function for 3-bit index values.




Similar to table 2-1, these values were obtained by synthesizing logic for each remainder element 322 using the “G10P Cell-Based ASIC” product, available from LSI Logic of Milpitas, Calif.












TABLE 3-1

Size and Speed for Synthesized Modulus Function Logic

Function    Time in Nanoseconds    Number of Gates

mod 5              1.99                  57
mod 6              1.52                  31
mod 7              1.10                  50
mod 8               .16                   4














As shown in table 3-1, the time in nanoseconds and the number of gates for each remainder circuit 322 are quite small.




Software Implementation




In a software implementation, in place of each remainder element 222 or remainder element 322, a processor performs a lookup into a modulus table having the modulus values resulting from the appropriate division. For example, to compute the modulus value for the remainder element 322 for division by six, the modulus table would have the values 0, 1, 2, 3, 4, and 5, repeated as many times as necessary to completely fill the table.
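A minimal sketch of that table-driven modulus, with assumed names and a 64-entry table sized for the 6-bit case, is shown below; the table is built once (using an ordinary divide at setup time) and each per-packet modulus then costs a single memory lookup:

    #include <assert.h>
    #include <stdint.h>

    #define TABLE_SIZE 64   /* covers the 6-bit input of a remainder element */

    /* Software stand-in for a remainder circuit: a lookup table whose entries
     * are 0, 1, ..., divisor-1 repeated until the table is full, exactly as
     * the text describes for division by six.                               */
    static void build_mod_table(uint8_t table[TABLE_SIZE], unsigned divisor)
    {
        for (unsigned i = 0; i < TABLE_SIZE; i++)
            table[i] = (uint8_t)(i % divisor);
    }

    int main(void)
    {
        uint8_t mod6[TABLE_SIZE];
        build_mod_table(mod6, 6);
        for (unsigned x = 0; x < TABLE_SIZE; x++)
            assert(mod6[x] == x % 6);   /* one lookup replaces a divide */
        return 0;
    }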




Non-Equal-Cost Paths




When different data paths have unequal associated costs, some data paths can be associated with more than one entry in the load-sharing table 130. Each entry in the load-sharing table 130 can therefore be assigned an equivalent amount of load. For example, if three output data paths are OC-12 links, while one output data path is an OC-48 link, the OC-48 data path can be assigned four entries in the load-sharing table 130 because it has four times the capacity of the OC-12 data paths. Therefore, in this example, there would be seven entries in the load-sharing table 130 for just four different output data paths.
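The example above can be expressed as a small table-building loop; the following sketch (hypothetical weights and names) gives each path one table entry per unit of capacity:

    #include <stdio.h>

    /* Three OC-12 links (weight 1) and one OC-48 link (weight 4) yield a
     * seven-entry load-sharing table for four output data paths.          */
    #define NUM_PATHS 4

    int main(void)
    {
        int weight[NUM_PATHS] = { 1, 1, 1, 4 };   /* OC-12, OC-12, OC-12, OC-48 */
        int table[16];
        int entries = 0;

        for (int path = 0; path < NUM_PATHS; path++)
            for (int w = 0; w < weight[path]; w++)
                table[entries++] = path;

        /* A 12-bit hash would now be reduced modulo 'entries' (seven here),
         * so the OC-48 path receives four sevenths of the flows.            */
        printf("load-sharing table has %d entries:", entries);
        for (int i = 0; i < entries; i++)
            printf(" %d", table[i]);
        printf("\n");
        return 0;
    }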




Modulus Element Using Free-Running Counter





FIG. 4 shows a block diagram of an alternative embodiment of a system for sharing packet traffic load among a plurality of possible paths.




A system 400 includes a packet routing information source 110, a distribution function generator 120, a load-sharing table 130, and a set of output routing queues 140. The distribution function generator 120 includes a hash function element 421, a free-running counter 422, a flow/packet multiplexer 423, and a modulus function element 424.




The flow/packet multiplexer 423 is coupled to a flow/packet select input 425 for selecting whether load-sharing is performed per-flow or per-packet. One of two operations is performed:




If the flow/packet select input 425 indicates load-sharing is performed per-flow, the flow/packet multiplexer 423 selects the output of the hash function element 421, and the modulus function element 424 distributes packets to the load-sharing table 130, and ultimately to the output routing queues 140, responsive to the flow with which the packet is associated. Thus, all packets in the same flow are distributed to the same output routing queue 140.




If the flow/packet select input 425 indicates load-sharing is performed per-packet, the flow/packet multiplexer 423 selects the output of the free-running counter 422, and the modulus function element 424 distributes packets to the load-sharing table 130, and ultimately to the output routing queues 140, responsive to the raw order in which packets arrive. Thus, packets are effectively distributed uniformly in a round-robin manner among the possible output routing queues 140.




In a preferred embodiment, the free-running counter 422 produces a 12-bit unsigned integer output, and recycles back to zero when the maximum value is reached.
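A rough software model of this selection, with assumed names (the counter is advanced once per packet here, whereas the hardware counter runs freely), might look like this:

    #include <stdio.h>

    enum share_mode { PER_FLOW, PER_PACKET };    /* flow/packet select input 425 */

    #define TABLE_ENTRIES 7

    static unsigned counter12;                   /* free-running counter 422 (12-bit) */

    /* Sketch of the FIG. 4 selection: per-flow sharing feeds the flow hash to
     * the modulus function element 424, per-packet sharing feeds the counter,
     * which yields a round-robin spread over the load-sharing table.          */
    static unsigned select_table_index(enum share_mode mode, unsigned flow_hash12)
    {
        unsigned value;

        if (mode == PER_FLOW) {
            value = flow_hash12;                  /* same flow -> same table entry */
        } else {
            value = counter12;                    /* raw packet arrival order      */
            counter12 = (counter12 + 1) & 0xFFF;  /* 12 bits: wraps back to zero   */
        }
        return value % TABLE_ENTRIES;             /* modulus function element 424  */
    }

    int main(void)
    {
        /* Ten packets of one flow: per-flow stays on one table entry, while
         * per-packet walks round-robin through the table entries.           */
        for (int i = 0; i < 10; i++)
            printf("per-flow %u, per-packet %u\n",
                   select_table_index(PER_FLOW, 0xABC),
                   select_table_index(PER_PACKET, 0xABC));
        return 0;
    }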




ALTERNATIVE EMBODIMENTS




Although preferred embodiments are disclosed herein, many variations are possible which remain within the concept, scope, and spirit of the invention, and these variations would become clear to those skilled in the art after perusal of this application.



Claims
  • 1. A method for distributing respective pluralities of packets belonging to respective flows among a number N of outgoing data paths, said method comprising the following steps:for each packet, associating a respective distribution value therewith, said respective distribution value being based, at least in part, upon a respective hash value generated from packet network layer information; determining a modulus value of the distribution value, the distribution value having a first plurality of bits and the modulus value having a second plurality of bits, the first plurality of bits being greater in number of bits than the second plurality of bits, so that the modulus value has a maximum value comparable to N; and sharing packet traffic load among the N outgoing data paths in response to the modulus value.
  • 2. A method as in claim 1, wherein said steps for associating include associating a single distribution value for substantially all packets in a respective one of said flows.
  • 3. A method as in claim 1, wherein said distribution value for each said packet is based upon, at least in part, a respective packet source address and a respective packet destination address.
  • 4. A method as in claim 1, wherein said distribution value for each said packet is based upon, at least in part, a respective packet source port and a respective packet destination port.
  • 5. A method as in claim 1, wherein said distribution value for each said packet is based upon, at least in part, a respective packet protocol type.
  • 6. A system for distributing respective pluralities of packets belonging to respective flows among a number N of outgoing data paths, said system comprising:a distribution value generator for associating with each packet a respective distribution value, the value being generated based, at least in part, upon a respective hash value generated from packet network layer information; determining a modulus value of the distribution value, the distribution value having a first plurality of bits and the modulus value having a second plurality of bits, the first plurality of bits being greater in number of bits than the second plurality of bits, so that the modulus value has a maximum value comparable to N; and a load-sharing element that shares packet traffic load among the outgoing data paths in response to the modulus value.
  • 7. A system as in claim 6, wherein:said distribution value generator is operative to assign a single distribution value for substantially all packets in a respective one of said flows.
  • 8. A system as in claim 6, wherein said distribution value is based upon, at least in part, a respective packet source address and a respective packet destination address.
  • 9. A system as in claim 6, wherein said distribution value is based, at least in part, upon a respective packet source port and a respective packet destination port.
  • 10. A system as in claim 6, wherein said distribution value is based upon, at least in part, a respective packet protocol type.
  • 11. A system for distributing respective pluralities of packets belonging to respective flows among a number N of outgoing data paths, the system comprising:means for generating for each packet a respective distribution value, the value being generated based, at least in part, upon a respective hash value generated from packet network layer information; means for determining a modulus value of the distribution value, the distribution value having a first plurality of bits and the modulus value having a second plurality of bits, the first plurality of bits being greater in number of bits than the second plurality of bits, so that the modulus value has a maximum value comparable to N; and means for sharing packet traffic load among the paths in response to the modulus value.
  • 12. A system as in claim 11, wherein the means for generating includes means for associating a single distribution value for substantially all packets in a respective one of said flows.
  • 13. A system as in claim 11, wherein the distribution value is based upon, at least in part, packet source and destination addresses.
  • 14. A system as in claim 11, wherein the distribution value is based upon, at least in part, packet source and destination ports.
  • 15. A system as in claim 11, wherein the distribution value is based upon, at least in part, packet protocol information.
  • 16. Computer-readable memory comprising computer-executable program instructions that when executed distribute respective pluralities of packets belonging to respective flows among a number N of outgoing data paths, the instructions when executed also causing:generating for each packet a respective distribution value, the value being generated based, at least in part, upon a respective hash value generated from packet network layer information; determining a modulus value of the distribution value, the distribution value having a first plurality of bits and the modulus value having a second plurality of bits, the first plurality of bits being greater in number of bits than the second plurality of bits, so that the modulus value has a maximum value comparable to N; and sharing of packet traffic load among the paths in response to the modulus value.
  • 17. Memory as in claim 16, wherein the generation of the respective distribution value includes associating a single distribution value with substantially all packets in a respective one of said flows.
  • 18. Memory as in claim 16, wherein the distribution value is based upon, at least in part, packet source and destination addresses.
  • 19. Memory as in claim 16, wherein the distribution value is based upon, at least in part, packet source and destination ports.
  • 20. Memory as in claim 16, wherein the distribution value is based upon, at least in part, packet protocol information.
  • 21. A network device for distributing respective pluralities of packets belonging to respective flows among a number N of outgoing data paths, comprising a network interface and a processor configured to perform the steps of:generating for each packet a respective distribution value, the value being generated based, at least in part, upon a respective hash value generated from packet network layer information; determining a modulus value of the distribution value, the distribution value having a first plurality of bits and the modulus value having a second plurality of bits, the first plurality of bits being greater in number of bits than the second plurality of bits, so that the modulus value has a maximum value comparable to N; and sharing packet traffic load among the paths in response to the modulus value.
  • 22. A device as in claim 21 wherein the step of generating includes associating a single distribution value for substantially all packets in a respective one of said flows.
  • 23. A device as in claim 21, wherein the distribution value is based upon, at least in part, packet source and destination addresses.
  • 24. A device as in claim 21, wherein the distribution value is based upon, at least in part, packet source and destination ports.
  • 25. A device as in claim 21, wherein the distribution value is based upon, at least in part, packet protocol information.
  • 26. A method for distributing packets belonging to different flows among a number N of outgoing data paths, comprising:for each packet determining a hash value generated from packet network layer information; determining a modulus value of the hash value, the hash value having a first plurality of bits and the modulus value having a second plurality of bits, the first plurality of bits being greater in number of bits than the second plurality of bits, so that the modulus value has a maximum value comparable to N; and sharing packet traffic load among the N outgoing data paths in response to the modulus value.
  • 27. The method as in claim 26, further comprising:determining the modulus value by dividing the hash value by a divisor to obtain a remainder, and using the remainder as the modulus value.
  • 28. The method as in claim 27 further comprising:using as the divisor the number of outgoing data paths.
  • 29. The method as in claim 27 further comprising:using as a divisor a number which yields a desired range for the remainder, the range being comparable to the number of outgoing data paths.
  • 30. The method as in claim 26 further comprising:indexing into a load sharing table by the modulus.
  • 31. A system for distributing packets belonging to different flows among a number N of outgoing data paths, comprising:a hash value generator for associating with each packet a hash value generated from packet network layer information; a modulus element to determine a modulus value of the hash value, the hash value having a first plurality of bits and the modulus value having a second plurality of bits, the first plurality of bits being greater in number of bits than the second plurality of bits, so that the modulus value has a maximum value comparable to N; and a load-sharing element that shares packet traffic load among the outgoing data paths in response to the modulus value.
  • 32. The system as in claim 31, further comprising:a division circuit to determine the modulus value by dividing the hash value by a divisor to obtain a remainder, and using the remainder as the modulus value.
  • 33. The system as in claim 31 further comprising:the number of outgoing data paths used as the divisor.
  • 34. The system as in claim 31 further comprising:a number used as the divisor to yield a desired range for the remainder, the range being comparable to the number of outgoing data paths.
  • 35. The system as in claim 31 further comprising:a load sharing table indexed by the modulus.
  • 36. A system for distributing packets belonging to different flows among a number N of outgoing data paths, comprising:means for determining for each packet a hash value generated from packet network layer information; means for determining a modulus value of the hash value, the hash value having a first plurality of bits and the modulus value having a second plurality of bits, the first plurality of bits being greater in number of bits than the second plurality of bits, so that the modulus value has a maximum value comparable to N; and means for sharing packet traffic load among the N outgoing data paths in response to the modulus value.
  • 37. The system as in claim 36, further comprising:means for determining the modulus value by dividing the hash value by a divisor to obtain a remainder, and using the remainder as the modulus value.
  • 38. The system as in claim 36, further comprising:means for using as the divisor the number of outgoing data paths.
  • 39. The system as in claim 36 further comprising:means for using as a divisor a number which yields a desired range for the remainder, the range being comparable to the number of outgoing data paths.
  • 40. The system as in claim 37 further comprising:means for indexing into a load sharing table by the modulus.
  • 41. A method for distributing packets belonging to different flows among a number N of outgoing data paths, comprising:for each packet determining a hash value generated from packet network layer information; determining a modulus value of the hash value by dividing the hash value by a divisor to obtain a remainder, and using the remainder as the modulus value; indexing into a load sharing table by the modulus value; and sharing packet traffic load among the N outgoing data paths in response to an entry in the load sharing table indexed by the modulus value.
  • 42. A system for distributing packets belonging to different flows among a number N of outgoing data paths, comprising:a hash value generator for associating with each packet a hash value generated from packet network layer information; a modulus element to determine a modulus value of the hash value by dividing the hash value by a divisor to obtain a remainder, and using the remainder as the modulus value; a load sharing table indexed by the modulus value; a load-sharing element that shares packet traffic load among the outgoing data paths in response to an entry in the load sharing table indexed by the modulus value.
  • 43. A system for distributing packets belonging to different flows among a number N of outgoing data paths, comprising:means for determining a hash value for each packet, the hash value generated from a packet network layer information; means for determining a modulus value of the hash value by dividing the hash value by a divisor to obtain a remainder, and using the remainder as the modulus value; means for indexing into a load sharing table by the modulus value; and means for sharing packet traffic load among the N outgoing data paths in response to an entry in the load sharing table indexed by the modulus value.
  • 44. A computer readable media, comprising:the computer readable media having instructions for execution on a processor for the practice of the method of claim 1 or claim 26 or claim 41.
  • 45. Electromagnetic signals propagating on a computer network, comprising:the electromagnetic signals carrying information containing instructions for execution on a processor for the practice of the method of claim 1 or claim 26 or claim 41.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 09/002,210 filed Dec. 31, 1997 now U.S. Pat. No. 6,111,877, entitled “LOAD SHARING ACROSS FLOWS.” The entirety of said co-pending application is hereby incorporated herein by reference. The subject matter of the subject application is also related to that of co-pending U.S. patent application Ser. No. 09/053,237 filed Apr. 1, 1998, entitled “ROUTE/SERVICE PROCESSOR SCALABILITY VIA FLOW-BASED DISTRIBUTION OF TRAFFIC.”

US Referenced Citations (181)
Number Name Date Kind
4131767 Weinstein Dec 1978 A
4161719 Parikh et al. Jul 1979 A
4316284 Howson Feb 1982 A
4397020 Howson Aug 1983 A
4419728 Larson Dec 1983 A
4424565 Larson Jan 1984 A
4437087 Petr Mar 1984 A
4438511 Baran Mar 1984 A
4439763 Limb Mar 1984 A
4445213 Baugh et al. Apr 1984 A
4446555 Devault et al. May 1984 A
4456957 Schieltz Jun 1984 A
4464658 Thelen Aug 1984 A
4499576 Fraser Feb 1985 A
4506358 Montgomery Mar 1985 A
4507760 Fraser Mar 1985 A
4532626 Flores et al. Jul 1985 A
4644532 George et al. Feb 1987 A
4646287 Larson et al. Feb 1987 A
4677423 Benvenuto et al. Jun 1987 A
4679189 Olson et al. Jul 1987 A
4679227 Hughes-Hartogs Jul 1987 A
4723267 Jones et al. Feb 1988 A
4731816 Hughes-Hartogs Mar 1988 A
4750136 Arpin et al. Jun 1988 A
4757495 Decker et al. Jul 1988 A
4763191 Gordon et al. Aug 1988 A
4769810 Eckberg, Jr. et al. Sep 1988 A
4769811 Eckberg, Jr. et al. Sep 1988 A
4771425 Baran et al. Sep 1988 A
4819228 Baran et al. Apr 1989 A
4827411 Arrowood et al. May 1989 A
4833706 Hughes-Hartogs May 1989 A
4835737 Herrig et al. May 1989 A
4879551 Georgiou et al. Nov 1989 A
4893306 Chao et al. Jan 1990 A
4903261 Baran et al. Feb 1990 A
4922486 Lidinsky et al. May 1990 A
4933937 Konishi Jun 1990 A
4960310 Cushing Oct 1990 A
4962497 Ferenc et al. Oct 1990 A
4962532 Kasiraj et al. Oct 1990 A
4965772 Daniel et al. Oct 1990 A
4970678 Sladowski et al. Nov 1990 A
4980897 Decker et al. Dec 1990 A
4991169 Davis et al. Feb 1991 A
5003595 Collins et al. Mar 1991 A
5014265 Hahne et al. May 1991 A
5020058 Holden et al. May 1991 A
5033076 Jones et al. Jul 1991 A
5054034 Hughes-Hartogs Oct 1991 A
5059925 Weisbloom Oct 1991 A
5072449 Enns et al. Dec 1991 A
5088032 Bosack Feb 1992 A
5095480 Fenner Mar 1992 A
RE33900 Howson Apr 1992 E
5115431 Williams et al. May 1992 A
5128945 Enns et al. Jul 1992 A
5136580 Videlock et al. Aug 1992 A
5166930 Braff et al. Nov 1992 A
5199049 Wilson Mar 1993 A
5205866 Bingham Apr 1993 A
5208811 Kashio et al. May 1993 A
5213686 Joy et al. May 1993 A
5224099 Corbalis et al. Jun 1993 A
5226120 Brown et al. Jul 1993 A
5228062 Bingham Jul 1993 A
5229994 Balzano et al. Jul 1993 A
5237564 Lespagnol et al. Aug 1993 A
5241682 Bryant et al. Aug 1993 A
5243342 Kattemalalavadi et al. Sep 1993 A
5243596 Port et al. Sep 1993 A
5247516 Bernstein et al. Sep 1993 A
5249178 Kurano et al. Sep 1993 A
5253251 Aramaki Oct 1993 A
5255291 Holden et al. Oct 1993 A
5260933 Rouse Nov 1993 A
5260978 Fleischer et al. Nov 1993 A
5268592 Bellamy et al. Dec 1993 A
5268900 Hluchyj et al. Dec 1993 A
5271004 Proctor et al. Dec 1993 A
5274631 Bhardwaj Dec 1993 A
5274635 Rahman et al. Dec 1993 A
5274643 Fisk Dec 1993 A
5280470 Buhrke et al. Jan 1994 A
5280480 Pitt et al. Jan 1994 A
5280500 Mazzola et al. Jan 1994 A
5283783 Nguyen et al. Feb 1994 A
5287103 Kasprzyk et al. Feb 1994 A
5287453 Roberts Feb 1994 A
5291482 McHarg et al. Mar 1994 A
5305311 Lyles Apr 1994 A
5307343 Bostica et al. Apr 1994 A
5311509 Heddes et al. May 1994 A
5313454 Bustini et al. May 1994 A
5313582 Hendel et al. May 1994 A
5317562 Nardin et al. May 1994 A
5319644 Liang Jun 1994 A
5327421 Hiller et al. Jul 1994 A
5331637 Francis et al. Jul 1994 A
5345445 Hiller et al. Sep 1994 A
5345446 Hiller et al. Sep 1994 A
5359593 Corbalis et al. Oct 1994 A
5361250 Nguyen et al. Nov 1994 A
5361256 Doeringer et al. Nov 1994 A
5361259 Hunt et al. Nov 1994 A
5365524 Hiller et al. Nov 1994 A
5367517 Cidon et al. Nov 1994 A
5371852 Attanasio et al. Dec 1994 A
5386967 Lien et al. Jan 1995 A
5390170 Sawant et al. Feb 1995 A
5390175 Hiller et al. Feb 1995 A
5394394 Crowther et al. Feb 1995 A
5394402 Ross Feb 1995 A
5400325 Chatwani et al. Mar 1995 A
5408469 Opher et al. Apr 1995 A
5414704 Spinney May 1995 A
5416842 Aziz May 1995 A
5422880 Heitkamp et al. Jun 1995 A
5422882 Hiller et al. Jun 1995 A
5423002 Hart Jun 1995 A
5426636 Hiller et al. Jun 1995 A
5428607 Hiller et al. Jun 1995 A
5430715 Corbalis et al. Jul 1995 A
5442457 Najafi Aug 1995 A
5442630 Gagliardi et al. Aug 1995 A
5452297 Hiller et al. Sep 1995 A
5473599 Li et al. Dec 1995 A
5473607 Hausman et al. Dec 1995 A
5477541 White et al. Dec 1995 A
5485455 Dobbins et al. Jan 1996 A
5490140 Abensour et al. Feb 1996 A
5490256 Fenner Feb 1996 A
5491687 Christensen et al. Feb 1996 A
5491804 Heath et al. Feb 1996 A
5497368 Reijnierse et al. Mar 1996 A
5504747 Sweazey Apr 1996 A
5509006 Wilford et al. Apr 1996 A
5517494 Green May 1996 A
5519704 Farinacci et al. May 1996 A
5526489 Nilakantan et al. Jun 1996 A
5530963 Moore et al. Jun 1996 A
5535195 Lee Jul 1996 A
5539734 Burwell et al. Jul 1996 A
5555244 Gupta et al. Sep 1996 A
5561669 Lenney et al. Oct 1996 A
5583862 Callon Dec 1996 A
5592470 Rudrapatna et al. Jan 1997 A
5598581 Daines et al. Jan 1997 A
5600798 Cherukuri et al. Feb 1997 A
5604868 Komine et al. Feb 1997 A
5608726 Virgile Mar 1997 A
5614718 Belove et al. Mar 1997 A
5617417 Sathe et al. Apr 1997 A
5617421 Chin et al. Apr 1997 A
5630125 Zellweger May 1997 A
5631908 Saxe May 1997 A
5632021 Jennings et al. May 1997 A
5633858 Chang et al. May 1997 A
5634010 Ciscon et al. May 1997 A
5638359 Peltola et al. Jun 1997 A
5659684 Giovannoni et al. Aug 1997 A
5666353 Klausmeier et al. Sep 1997 A
5673265 Gupta et al. Sep 1997 A
5678006 Valizadeh et al. Oct 1997 A
5684797 Aznar et al. Nov 1997 A
5687324 Green et al. Nov 1997 A
5689506 Chiussi et al. Nov 1997 A
5694390 Yamato et al. Dec 1997 A
5708659 Rostoker et al. Jan 1998 A
5724351 Chao et al. Mar 1998 A
5748186 Raman May 1998 A
5748617 McLain, Jr. May 1998 A
5754547 Nakazawa May 1998 A
5757795 Schnell May 1998 A
5835710 Nagami et al. Nov 1998 A
5852607 Chin Dec 1998 A
5854903 Morrison et al. Dec 1998 A
5898686 Virgile Apr 1999 A
6084877 Egbert et al. Jul 2000 A
6292483 Kerstein Sep 2001 B1
Foreign Referenced Citations (7)
Number Date Country
0 384 758 Aug 1990 EP
0 431 751 Jun 1991 EP
0 567 217 Oct 1993 EP
WO9307569 Apr 1993 WO
WO9307692 Apr 1993 WO
WO9401828 Jan 1994 WO
WO9520850 Aug 1995 WO
Non-Patent Literature Citations (11)
Entry
Allen, M., “Novell IPX Over Various WAN Media (IPXWAN).” Network Working Group, RFC 1551, Dec. 1993, pp. 1-22.
Becker, D., “3c589.c: A 3c589 EtherLink3 ethernet driver for linux.” becker@CESDIS.gsfc.nasa.gov, May 3, 1994, pp. 1-13.
Chowdhury, et al., “Alternative Bandwidth Allocation Algorithms for Packet Video in ATM Networks,” INFOCOM 1992, pp. 1061-1068.
Doeringer, W., “Routing on Longest-Matching Prefixes.” IEEE/ACM Transactions in Networking, vol. 4, No. 1, Feb. 1996, pp. 86-97.
Esaki, et al., “Datagram Delivery in an ATM-Internet.” 2334b IEICE Transactions on Communications, Mar. 1994, No. 3, Tokyo, Japan.
IBM Corporation, “Method and Apparatus for the Statistical Multiplexing of Voice, Data and Image Signals.” IBM Technical Disclosure Bulletin, No. 6, Nov. 1992, pp. 409-411.
Pei, et al., “Putting Routing Tables in Silicon,” IEEE Network Magazine, Jan. 1992, pp. 42-50.
Perkins, D., “Requirements for an Internet Standard Point-to-Point Protocol,” Network Working Group, RFC 1547, Dec. 1993, pp. 1-19.
Simpson, W., “The Point-to-Point Protocol (PPP),” Network Working Group, RFC 1548, Dec. 1993, pp. 1-53.
Tsuchiya, P.F., “A Search Algorithm for Table Entries with Non-Contiguous Wildcarding,” Abstract, Bellcore.
Zhang, et al., “Rate-Controlled Static-Priority Queueing,” INFOCOM 1993, pp. 227-236.
Continuations (1)
Number Date Country
Parent 09/002210 Dec 1997 US
Child 09/621415 US