Routing table lookup implemented using M-trie having nodes duplicated in multiple memory banks

Information

  • Patent Grant
  • Patent Number
    6,308,219
  • Date Filed
    Friday, July 31, 1998
  • Date Issued
    Tuesday, October 23, 2001
  • Inventors
  • Original Assignees
  • Examiners
    • Maung; Zarni
    • Caldwell; Andrew
  • Agents
    • Skjerven Morrill MacPherson LLP
    • Campbell, III; Sam G.
Abstract
The invention provides a method and system for rapid access to one or more M-tries for responding to header information. The M-tries are stored in a plurality of memory banks, which are accessed in parallel to provide relatively greater access throughput. Parts of the M-tries that are (or are expected to be) frequently referenced are stored in multiple banks of the memory, to provide concurrent access to those parts of the M-tries for parallel lookup of multiple routes. Regions of the multiple banks of the memory can be dynamically reallocated to provide improved access throughput to those multiple banks. The invention can be applied to routing decisions in response to destination addresses, to combinations of destination and source addresses (either for unicast or multicast routing), to access control decisions, to quality of service decisions, to accounting, and to other administrative processing in response to header information.
Description




BACKGROUND OF THE INVENTION




1. Field of the Invention




This invention relates to routing table lookup.




2. Related Art




In a computer network, routing devices receive messages at one of a set of input interfaces, and forward them on to one of a set of output interfaces. It is advantageous for such routing devices to operate as quickly as possible, to keep up with the rate of incoming messages. In a packet routing network, each packet includes a header, including information used for routing the packet to an output interface for forwarding to a destination device (or to another routing device for further forwarding). Header information used for routing can include a destination address, a source address, and other information such as a destination device port, a source device port, a protocol, packet length, and a priority for the packet. Header information used by routing devices for other administrative tasks can include information about access control, accounting, quality of service, and other purposes.




One problem in the known art is that there can be a relatively large number of possible destinations, and therefore a correspondingly large number of possible output interfaces (herein called “routes”), one of which is to be associated with the incoming packet. It is advantageous for the routing devices to match the associated output interface with the incoming packet as quickly as possible. Due to the nature of routing in computer networks, it is also advantageous for the routing devices to match the associated output interface with the longest possible sub-string of the header information (such as the destination address or a combination of the destination address and the source address).




One method in the known art is to use a branching tree, which differentiates among possible routes in response to each individual bit of the header information. A variant of this method is to generate an M-way branching tree (herein called an “M-trie”), which has up to M possible branches at each node. An M-trie differentiates among possible routes in response to groups of bits in the header information.
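As an informal illustration (not taken from the patent text), the C fragment below shows the only arithmetic an M-trie needs per level: extract one group of bits from the header field and use it as a branch index. The 32-bit destination address and the 4-bit group size are assumptions chosen for the example; a plain branching tree is the special case of a 1-bit group.

```c
#include <stdint.h>

/* Hypothetical 4-bit group size, giving M = 16 branches per node. */
#define GROUP_BITS 4
#define BRANCHES   (1u << GROUP_BITS)

/* Return the branch index for the given level: the level-th group of
 * GROUP_BITS bits of the destination address, most significant first. */
static unsigned branch_index(uint32_t dst_addr, unsigned level)
{
    unsigned shift = 32 - (level + 1) * GROUP_BITS;
    return (dst_addr >> shift) & (BRANCHES - 1);
}
```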




One problem in the known art is that using M-tries generates frequent references to memory to access the nodes of the M-trie. The access speed of the memory thus provides a limitation on the speed of the routing device. Moreover, some of the nodes of the M-trie near its root are relatively more frequently referenced than other nodes. The access speed of the memory for repeated references to the same location thus provides a second limitation on the speed of the routing device.




Accordingly, it would be desirable to provide a method and system for rapid access to one or more M-tries for responding to header information. This advantage is achieved in an embodiment of the invention in which the M-tries are stored in a plurality of memory banks, some of which duplicate parts of the M-tries that are frequently referenced.




SUMMARY OF THE INVENTION




The invention provides a method and system for rapid access to one or more M-tries for responding to header information. The M-tries are stored in a plurality of memory banks, which are accessed in parallel to provide relatively greater access throughput. Parts of the M-tries that are (or are expected to be) frequently referenced are stored in multiple banks of the memory, to provide concurrent access to those parts of the M-tries for parallel lookup of multiple routes.




In a preferred embodiment, regions of the multiple banks of the memory can be dynamically reallocated to provide improved access throughput to those multiple banks. The invention can be applied to routing decisions in response to destination addresses, to combinations of destination and source addresses (either for unicast or multicast routing), to access control decisions, to quality of service decisions, to accounting, and to other administrative processing in response to header information.











BRIEF DESCRIPTION OF THE DRAWINGS





FIG. 1 shows a block diagram of an improved system for routing table lookup.

FIG. 2 shows a memory data structure in an improved method for routing table lookup.

FIG. 3 shows a timing diagram for use of a receive or transmit memory.

FIG. 4 shows a flowchart for recording and using a routing table.

FIG. 5 shows possible contents of an M-trie.











DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT




In the following description, a preferred embodiment of the invention is described with regard to preferred process steps and data structures. Those skilled in the art would recognize after perusal of this application that embodiments of the invention can be implemented using circuits adapted to particular process steps and data structures described herein, and that implementation of the process steps and data structures described herein would not require undue experimentation or further invention.




System Elements





FIG. 1 shows a block diagram of an improved system for routing table lookup.

A system 100 for routing table lookup includes one or more routers 110.




The router 110 is coupled to a set of physical interfaces 111 for receiving and for transmitting packets 112 from non-router devices. The router 110 is also coupled to one or more fabric interfaces 113 for receiving and for transmitting packets 112 to the network fabric.




Each router 110 includes a set of device interface elements 120 (PLIM), a receive element 130 (Rx), a lookup table 140, a set of memory controllers 150, memory 160, one or more fabric interface elements 170, and a transmit element 180.




The device interface elements 120 are coupled to the physical interfaces 111, and are disposed for receiving and for transmitting packets 112.




The receive element 130 (Rx) is coupled to the device interface elements 120 and to the lookup table 140. The receive element 130 operates in conjunction with the lookup table 140 to perform receive operations on received packets 112. These receive operations can include determining packet 112 integrity, isolating routing information from a set of packet headers 113 of the packets 112, and other functions.
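As a sketch of the "isolating routing information" step, the fragment below copies the source and destination addresses out of a received header into a lookup key. The IPv4 field offsets and the struct name are assumptions for the example; the patent does not fix a particular header format.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical lookup key isolated from a packet header 113. */
struct lookup_key {
    uint32_t dst_addr;   /* destination address, network byte order */
    uint32_t src_addr;   /* source address, network byte order */
};

static struct lookup_key isolate_routing_info(const uint8_t *header)
{
    struct lookup_key key;
    /* Assuming an IPv4 header: source address at byte offset 12,
     * destination address at byte offset 16. */
    memcpy(&key.src_addr, header + 12, sizeof key.src_addr);
    memcpy(&key.dst_addr, header + 16, sizeof key.dst_addr);
    return key;
}
```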




A receive memory controller 150 is coupled to the receive element 130 and to a receive memory 160. The receive memory controller 150 operates in conjunction with the receive memory 160 to determine routing treatments for received packets 112. These routing treatments can include one or more of the following:

selection of one or more output interfaces to which to forward received packets 112;




Selection can be responsive to the destination device, to the source and destination device, or to network flows as described in one or more of the following patent applications.




U.S. application Ser. No. 08/581,134, now U.S. Pat. No. 6,091,725, titled “Method For Traffic Management, Traffic Prioritization, Access Control, and Packet Forwarding in a Datagram Computer Network”, filed Dec. 29, 1995, in the name of inventors David R. Cheriton and Andreas V. Bechtolsheim, assigned to Cisco Technology, Inc.;




U.S. application Ser. No. 08/655,429, titled “Network Flow Switching and Flow Data Export”, filed May 28, 1996, in the name of inventors Darren Kerr and Barry Bruins, and assigned to Cisco Technology, Inc.; and




U.S. application Ser. No. 08/771,438, titled “Network Flow Switching and Flow Data Export”, filed Dec. 20, 1996, in the name of inventors Darren Kerr and Barry Bruins, assigned to Cisco Technology, Inc.,




PCT International Application PCT/US 96/20205, titled “Method For Traffic Management, Traffic Prioritization, Access Control, and Packet Forwarding in a Datagram Computer Network”, filed Dec. 18, 1996, in the name of inventors David R. Cheriton and Andreas V. Bechtolsheim, and assigned to Cisco Technology, Inc.; and




U.S. application Ser. No. 08/886,900, Express Mail Mailing No. EM053698725US, titled “Network Flow Switching and Flow Data Export”, filed Jul. 2, 1997, in the name of inventors Darren Kerr and Barry Bruins, assigned to Cisco Technology, Inc.




Each of these applications is hereby incorporated by reference as if fully set forth herein. These applications are collectively referred to herein as the “Netflow Routing Disclosures.”




Selection can also be responsive to unicast routing, multicast routing, or a combination thereof.




determination of ACL (access control list) treatment for received packets 112;

determination of QoS (quality of service) treatment for received packets 112;

determination of one or more accounting records or treatments for received packets 112; and

determination of other administrative treatment for received packets 112.




The receive memory 160 includes routing treatment information, disposed in a memory data structure responsive to information in the packet 112 and its packet header. The memory data structure is further described with regard to FIG. 2.




The fabric interface elements 170 couple the router 100 to communication links to other routers 110 in the network fabric.




A transmit memory controller 150 is coupled to the fabric interface elements 170 and to a transmit memory 160. The transmit memory controller 150 operates in conjunction with the transmit memory 160 to determine routing treatments for packets 112 for transmission. These routing treatments can be similar to the routing treatments for received packets 112 and can include one or more of the same treatments.




The transmit memory 160 includes routing treatment information, disposed in a memory data structure similar to that of the receive memory 160.




The transmit memory controller 150 is coupled to the transmit element 180 (Tx). The transmit element 180 operates to perform transmit operations on packets 112 for transmission. These transmit operations can include rewriting packet headers, altering routing information in the packet headers, and other functions.




Memory Data Structure





FIG. 2 shows a memory data structure in an improved method for routing table lookup.




In a preferred embodiment, the memory data structure is like that described for M-Tries in one or more of the Netflow Routing Disclosures.




The receive memory 160 includes at least one tree structure 200 (sometimes known as a “trie” structure), as described for M-Tries in one or more of the Netflow Routing Disclosures. Each tree structure 200 includes a set of nodes 210, one of which is designated as a root node 210, and a set of leaves 220. The root node 210 and each other node 210 include a set of entries 211, each of which points either to a sub-node 210 or to an associated leaf 220.
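A minimal C sketch of this structure follows, with names chosen to mirror the reference numerals. The explicit tag on each entry is an assumption made for readability (a hardware table might instead encode it in the pointer itself); the leaf type is filled in after the list below.

```c
#include <stddef.h>

struct leaf;   /* leaf 220: routing treatment; a possible layout follows the list below */

/* Entry 211: points either to a sub-node 210 or to an associated leaf 220. */
struct entry {
    enum { ENTRY_EMPTY, ENTRY_NODE, ENTRY_LEAF } kind;
    union {
        struct node *sub_node;   /* deeper node 210 */
        struct leaf *leaf;       /* associated leaf 220 */
    } u;
};

/* Node 210 (root or otherwise): M entries, indexed by one group of header bits. */
struct node {
    size_t       num_entries;   /* M, set by the branching width */
    struct entry entries[];     /* entries 211 */
};
```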




Each leaf 220 includes a set of information for a routing treatment to be used for packets 112. As noted herein, the routing treatment can include one or more of the following:




one or more output interfaces to which to forward packets 112;

ACL treatment for packets 112;

QoS treatment for packets 112;

accounting treatments for packets 112; and

other administrative treatment for packets 112.
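A possible leaf layout matching the list above; the field names and widths are assumptions for the sketch, since the text only names the kinds of treatment a leaf can carry.

```c
#include <stdint.h>

/* Leaf 220: routing treatment for matching packets 112 (assumed layout). */
struct leaf {
    uint32_t output_if_mask;   /* one or more output interfaces */
    uint16_t acl_id;           /* ACL treatment */
    uint8_t  qos_class;        /* QoS treatment */
    uint32_t accounting_id;    /* accounting record or treatment */
    uint32_t admin_flags;      /* other administrative treatment */
};
```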




In a preferred embodiment in which each node 210 provides 16 bits of branching width, each node 210 includes sufficient entries 211 to use 64K bytes of the memory 160. At least one region 220 of the memory 160 is about 64K bytes and is used for the root node 210. In alternative embodiments, each node 210 may provide a different amount of branching width, such as 4 bits, 8 bits, 12 bits, or a variable width responsive to the nature of the packet traffic.
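Tying the pieces together, the sketch below walks a 32-bit destination address through nodes of 16-bit branching width, reusing the hypothetical node, entry, and leaf declarations above. The fixed two-level depth is a consequence of the assumed 32-bit address; the narrower branching widths mentioned above would simply change BRANCH_BITS.

```c
#include <stdint.h>
#include <stddef.h>

#define BRANCH_BITS  16                    /* 16 bits of branching width */
#define NODE_ENTRIES (1u << BRANCH_BITS)   /* 64K entries 211 per node */

/* Look up a destination address, consuming BRANCH_BITS per node 210. */
static const struct leaf *mtrie_lookup(const struct node *root, uint32_t dst_addr)
{
    const struct node *n = root;
    unsigned shift = 32;

    while (n != NULL && shift >= BRANCH_BITS) {
        shift -= BRANCH_BITS;
        const struct entry *e = &n->entries[(dst_addr >> shift) & (NODE_ENTRIES - 1)];
        if (e->kind == ENTRY_LEAF)
            return e->u.leaf;      /* routing treatment found */
        if (e->kind == ENTRY_EMPTY)
            return NULL;           /* no matching route */
        n = e->u.sub_node;         /* descend one more 16-bit group */
    }
    return NULL;
}
```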




Parallel Memory Operation





FIG. 3 shows a timing diagram for use of a receive or transmit memory.




The receive memory 160 includes an SDRAM (synchronous dynamic random access memory) having a plurality of memory banks 300. As known in the art of computer memories, each memory bank 300 can be accessed separately and in parallel using a memory activate signal 310 and a memory read signal 320. In response to the memory activate signal 310 and the memory read signal 320, the memory 160 provides a data output signal 330.




In a preferred embodiment, the receive memory 160 includes four memory banks 300 (bank 0, bank 1, bank 2, and bank 3). The receive memory controller 150 provides a periodic sequence of four memory activate signals 310 (A0, A1, A2, and A3) and four memory read signals 320 (R0, R1, R2, and R3), and receives a periodic sequence of four data output signals 330 (D0, D1, D2, and D3). Thus, the four memory banks 300 are effectively operated in parallel to provide quadruple the throughput of a single memory bank 300.




In a preferred embodiment, each memory bank 300 of the receive memory 160 operates at a cycle time of about 80 nanoseconds. The memory activate signal 310 A0 is presented to memory bank 300 bank 0 at an offset of about 0 nanoseconds into the cycle. The memory read signal 320 R0 is presented to memory bank 300 bank 0 at an offset of about 30 nanoseconds into the cycle. The data output signal 330 D0 is presented from memory bank 300 bank 0 at an offset of about 60 nanoseconds into the cycle, and lasts about 20 nanoseconds to read out about 32 bits of data. The memory activate signal 310, memory read signal 320, and data output signal 330 occur at offsets in the cycle that are similarly related.
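The exact phase relationship between the four banks is not spelled out above. The small model below assumes the banks are started a quarter cycle (20 nanoseconds) apart, which is one way the activate/read/data sequences could overlap so that a 32-bit read completes every 20 nanoseconds, consistent with the quadrupled throughput described earlier; the stagger and the program structure are assumptions for illustration.

```c
#include <stdio.h>

#define NUM_BANKS        4
#define CYCLE_NS        80   /* per-bank cycle time, from the text above */
#define ACTIVATE_OFFSET  0   /* A presented at ~0 ns into the bank's cycle */
#define READ_OFFSET     30   /* R presented at ~30 ns */
#define DATA_OFFSET     60   /* D presented at ~60 ns, lasting ~20 ns */

int main(void)
{
    /* Assumed quarter-cycle stagger between bank 0 .. bank 3. */
    for (int bank = 0; bank < NUM_BANKS; bank++) {
        int start = bank * (CYCLE_NS / NUM_BANKS);
        printf("bank %d: A%d at %2d ns, R%d at %2d ns, D%d at %2d-%2d ns\n",
               bank, bank, start + ACTIVATE_OFFSET,
               bank, start + READ_OFFSET,
               bank, start + DATA_OFFSET, start + DATA_OFFSET + 20);
    }
    return 0;
}
```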




Distributed Storage of the M-tries




The various nodes 210 of the tree structure 200 are recorded in the memory 160 in the various memory banks 300. The receive memory controller 150 allocates the nodes 210 (and associated sub-trees depending from those nodes 210) of the tree structure 200 so that concurrent access in referencing those nodes 210 can be optimized. This optimization can include the following:




(1) the root node 210 is recorded in multiple banks 300;

(2) other frequently referenced nodes 210 are stored in multiple banks 300; and

(3) nodes 210 are dynamically reallocated to new regions of the multiple banks 300.




In the first optimization, because the root node 210 is so frequently referenced, it is recorded in each memory bank 300.




In the second optimization, those nodes 210 that are frequently referenced are copied to multiple memory banks 300. The memory controller 150 can determine whether particular nodes 210 are sufficiently frequently referenced by maintaining a reference or frequency count at the node 210 or in a separate table. Those nodes 210 that are referenced often can be copied to two of the four memory banks 300, while those nodes 210 that are referenced even more often can be copied to three or four of the four memory banks 300. Similarly, those nodes 210 that have been copied to multiple memory banks 300 but are no longer frequently referenced are deleted from one or more memory banks 300, restricting them to fewer memory banks, down to a single memory bank 300.
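A software sketch of this bookkeeping follows. The reference-count thresholds, the interval-based reset, and the copy/delete hooks are all assumptions; the text only distinguishes nodes referenced "often" from those referenced "even more often".

```c
#include <stdint.h>

#define NUM_BANKS 4

/* Per-node bookkeeping kept at the node 210 or in a separate table. */
struct node_meta {
    uint32_t ref_count;   /* references seen in the current interval */
    uint8_t  bank_mask;   /* bit i set => a copy lives in bank i */
};

/* Assumed thresholds mapping reference frequency to a copy count. */
static unsigned copies_wanted(uint32_t ref_count)
{
    if (ref_count > 100000) return 4;
    if (ref_count > 10000)  return 3;
    if (ref_count > 1000)   return 2;
    return 1;
}

/* Widen or narrow the set of banks 300 holding the node; copy_to_bank
 * and delete_from_bank stand in for the controller 150 operations. */
static void adjust_copies(struct node_meta *m,
                          void (*copy_to_bank)(int bank),
                          void (*delete_from_bank)(int bank))
{
    unsigned have = 0, want = copies_wanted(m->ref_count);
    for (int bank = 0; bank < NUM_BANKS; bank++)
        if (m->bank_mask & (1u << bank))
            have++;

    for (int bank = 0; bank < NUM_BANKS && have < want; bank++)
        if (!(m->bank_mask & (1u << bank))) {
            copy_to_bank(bank);
            m->bank_mask |= (1u << bank);
            have++;
        }
    for (int bank = 0; bank < NUM_BANKS && have > want; bank++)
        if (m->bank_mask & (1u << bank)) {
            delete_from_bank(bank);
            m->bank_mask &= ~(1u << bank);
            have--;
        }
    m->ref_count = 0;   /* begin a new measurement interval */
}
```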




In the third optimization, the receive memory controller 150 determines whether particular memory banks 300 have recorded nodes 210 that are collectively referenced relatively infrequently or relatively frequently. If a first memory bank 300 has a collection of nodes 210 that are much more frequently referenced than those of a second memory bank 300, concurrent use of the memory 160 can be reduced by frequent attempts at multiple access to the same contested memory bank 300. Accordingly, in these cases, the receive memory controller 150 reallocates at least some nodes 210 from the first to the second memory bank 300. Similarly, if the frequency of references to sets of recorded nodes 210 changes, the receive memory controller 150 re-reallocates at least some nodes 210 from the first to the second memory bank 300, or vice versa.
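A sketch of a bank-level rebalancing pass under the same assumptions; the 2x imbalance trigger and the move_one_node hook are invented for the example.

```c
#include <stdint.h>

#define NUM_BANKS 4

/* Migrate nodes 210 away from a contested bank 300 when one bank
 * collects far more references than another. */
static void rebalance_banks(const uint64_t bank_refs[NUM_BANKS],
                            void (*move_one_node)(int from_bank, int to_bank))
{
    int hot = 0, cool = 0;
    for (int bank = 1; bank < NUM_BANKS; bank++) {
        if (bank_refs[bank] > bank_refs[hot])  hot = bank;
        if (bank_refs[bank] < bank_refs[cool]) cool = bank;
    }
    /* Reallocate only when the imbalance is large enough to contest the
     * hot bank; otherwise leave the current placement alone. */
    if (bank_refs[hot] > 2 * bank_refs[cool])
        move_one_node(hot, cool);
}
```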




Recording and Using a Routing Table





FIG. 4 shows a flowchart for recording and using a routing table. Step S401 shows recording an M-trie in memory banks. At least one node 210 is recorded in at least two memory banks.




Steps S402 and/or S403 can follow step S401. In step S402, at least two memory banks are accessed in parallel. One node recorded in plural memory banks can be accessed concurrently. In step S403, nodes are dynamically reallocated. Nodes can be reallocated, for example, responsive to access frequency, changes in access frequency, throughput, and other factors.




M-trie Contents





FIG. 5 shows possible contents of an M-trie. The M-trie 400 can include access control information, accounting information, information for responding to packets, quality of service information, multicast information, routing decision information (responsive to source and/or destination addresses), and/or unicast information. Other information can be included.




Alternative Embodiments




Although preferred embodiments are disclosed herein, many variations are possible which remain within the concept, scope, and spirit of the invention, and these variations would become clear to those skilled in the art after perusal of this application.




For example, as noted herein, the invention has wide applicability to a variety of operations possibly performed by routers.



Claims
  • 1. A system comprising: a plurality of memory banks, said plurality of memory banks collectively recording at least one M-trie responsive to header information in a set of packets; and means for dynamically reallocating said nodes within said plurality of banks; wherein at least one node of said M-trie is recorded in at least two of said banks.
  • 2. A system as in claim 1, wherein said plurality of memory banks are disposed for concurrent access.
  • 3. A system as in claim 1, comprising means for dynamically reallocating said nodes within said plurality of banks responsive to changes in relative frequency of access to said nodes.
  • 4. A system as in claim 1, comprising means for dynamically reallocating said nodes within said plurality of banks responsive to improving access throughput for references to said nodes.
  • 5. A system as in claim 1, comprising means for dynamically reallocating said nodes within said plurality of banks responsive to relative frequency of access to said nodes.
  • 6. A system as in claim 1, wherein said at least one node is a node expected to be referenced frequently.
  • 7. A system as in claim 1, wherein said at least one node is a root node for one of said at least one M-tries.
  • 8. A system as in claim 1, wherein said at least one node is disposed for a plurality of concurrent accesses.
  • 9. A system as in claim 1, wherein said one M-trie comprises access control information.
  • 10. A system as in claim 1, wherein said one M-trie comprises accounting information.
  • 11. A system as in claim 1, wherein said one M-trie comprises information for responding to packets.
  • 12. A system as in claim 1, wherein said one M-trie comprises quality of service information.
  • 13. A system as in claim 1, wherein said one M-trie comprises multicast routing information.
  • 14. A system as in claim 1, wherein said one M-trie comprises information for routing decisions.
  • 15. A system as in claim 1, wherein said one M-trie comprises information for routing decisions responsive to destination addresses and to source addresses.
  • 16. A system as in claim 1, wherein said one M-trie comprises information for routing decisions responsive to source addresses.
  • 17. A system as in claim 1, wherein said one M-trie comprises unicast routing information.
  • 18. A method comprising the step of: recording, in a plurality of memory banks, at least one M-trie responsive to header information in a set of packets; and dynamically reallocating said nodes within said plurality of banks; wherein at least one node of said M-trie is recorded in at least two of said banks.
  • 19. A method as in claim 18, further comprising the step of accessing said plurality of memory banks in parallel.
  • 20. A method as in claim 18, further comprising the step of dynamically reallocating said nodes within said plurality of banks, said step of allocating being responsive to changes in relative frequency of access to said nodes.
  • 21. A method as in claim 18, further comprising the step of dynamically reallocating said nodes within said plurality of banks, said step of reallocating being responsive to improving access throughput for references to said nodes.
  • 22. A method as in claim 18, further comprising the step of dynamically reallocating said nodes within said plurality of banks, said step of reallocating being responsive to relative frequency of access to said nodes.
  • 23. A method as in claim 18, wherein said at least one node is a node expected to be referenced frequently.
  • 24. A method as in claim 18, wherein said at least one node is a root node for one of said at least one M-tries.
  • 25. A method as in claim 18, wherein said at least one node is disposed for a plurality of concurrent accesses.
  • 26. A method as in claim 18, wherein said one M-trie comprises access control information.
  • 27. A method as in claim 18, wherein said one M-trie comprises accounting information.
  • 28. A method as in claim 18, wherein said one M-trie comprises information for responding to packets.
  • 29. A method as in claim 18, wherein said one M-trie comprises quality of service information.
  • 30. A method as in claim 18, wherein said one M-trie comprises multicast routing information.
  • 31. A method as in claim 18, wherein said one M-trie comprises information for routing decisions.
  • 32. A method as in claim 18, wherein said one M-trie comprises information for routing decisions responsive to destination addresses and to source addresses.
  • 33. A method as in claim 18, wherein said one M-trie comprises information for routing decisions responsive to source addresses.
  • 34. A method as in claim 18, wherein said one M-trie comprises unicast routing information.
US Referenced Citations (210)
Number Name Date Kind
RE. 33900 Howson Apr 1992
4131767 Weinstein Dec 1978
4161719 Parikh et al. Jul 1979
4316284 Howson Feb 1982
4397020 Howson Aug 1983
4419728 Larson Dec 1983
4424565 Larson Jan 1984
4437087 Petr Mar 1984
4438511 Baran Mar 1984
4439763 Limb Mar 1984
4445213 Baugh et al. Apr 1984
4446555 Devault et al. May 1984
4456957 Schieltz Jun 1984
4464658 Thelen Aug 1984
4499576 Fraser Feb 1985
4506358 Montgomery Mar 1985
4507760 Fraser Mar 1985
4532626 Flores et al. Jul 1985
4644532 George et al. Feb 1987
4646287 Larson et al. Feb 1987
4677423 Benvenuto et al. Jun 1987
4679189 Olson et al. Jul 1987
4679227 Hughes-Hartogs Jul 1987
4723267 Jones et al. Feb 1988
4731816 Hughes-Hartogs Mar 1988
4750136 Arpin et al. Jun 1988
4757495 Decker et al. Jul 1988
4763191 Gordon et al. Aug 1988
4769810 Eckberg, Jr. et al. Sep 1988
4769811 Eckberg, Jr. et al. Sep 1988
4771425 Baran, et al. Sep 1988
4819228 Baran, et al. Apr 1989
4827411 Arrowood, et al. May 1989
4833706 Hughes-Hartogs May 1989
4835737 Herrig et al. May 1989
4879551 Georgiou et al. Nov 1989
4893306 Chao et al. Jan 1990
4903261 Baran et al. Feb 1990
4922486 Lidinsky et al. May 1990
4933937 Konishi Jun 1990
4960310 Cushing Oct 1990
4962497 Ferenc et al. Oct 1990
4962532 Kasirai et al. Oct 1990
4965767 Kinoshita et al. Oct 1990
4965772 Daniel et al. Oct 1990
4970678 Sladowski et al. Nov 1990
4979118 Kheradpir Dec 1990
4980897 Decker et al. Dec 1990
4991169 Davis et al. Feb 1991
5003595 Collins et al. Mar 1991
5014265 Hahne et al. May 1991
5020058 Holden et al. May 1991
5033076 Jones et al. Jul 1991
5034919 Sasai et al. Jul 1991
5054034 Hughes-Hartogs Oct 1991
5059925 Weisbloom Oct 1991
5072449 Enns et al. Dec 1991
5088032 Bosack Feb 1992
5095480 Fenner Mar 1992
5115431 Williams et al. May 1992
5128945 Enns et al. Jul 1992
5136580 Videlock et al. Aug 1992
5166930 Braff et al. Nov 1992
5199049 Wilson Mar 1993
5206886 Bingham Apr 1993
5208811 Kashio et al. May 1993
5212686 Joy et al. May 1993
5224099 Corbalis et al. Jun 1993
5226120 Brown et al. Jul 1993
5228062 Bingham Jul 1993
5229994 Balzano et al. Jul 1993
5237564 Lespagnol et al. Aug 1993
5241682 Bryant et al. Aug 1993
5243342 Kattemalalavadi et al. Sep 1993
5243596 Port et al. Sep 1993
5247516 Bernstein et al. Sep 1993
5249178 Kurano et al. Sep 1993
5253251 Aramaki Oct 1993
5255291 Holden et al. Oct 1993
5260933 Rouse Nov 1993
5260978 Fleischer et al. Nov 1993
5268592 Bellamy et al. Dec 1993
5268900 Hluchyj et al. Dec 1993
5271004 Proctor et al. Dec 1993
5274631 Bhardwaj Dec 1993
5274635 Rahman et al. Dec 1993
5274643 Fisk Dec 1993
5280470 Buhrke et al. Jan 1994
5280480 Pitt et al. Jan 1994
5280500 Mazzola et al. Jan 1994
5283783 Nguyen et al. Feb 1994
5287103 Kasprzyk et al. Feb 1994
5287453 Roberts Feb 1994
5291482 McHarg et al. Mar 1994
5305311 Lyles Apr 1994
5307343 Bostica et al. Apr 1994
5309437 Perlman et al. May 1994
5311509 Heddes et al. May 1994
5313454 Bustini et al. May 1994
5313582 Hendel et al. May 1994
5317562 Nardin et al. May 1994
5319644 Liang Jun 1994
5327421 Hiller et al. Jul 1994
5331637 Francis et al. Jul 1994
5345445 Hiller et al. Sep 1994
5345446 Hiller et al. Sep 1994
5359592 Corbalis et al. Oct 1994
5361250 Nguyen et al. Nov 1994
5361256 Doeringer et al. Nov 1994
5361259 Hunt et al. Nov 1994
5365524 Hiller et al. Nov 1994
5367517 Cidon et al. Nov 1994
5371852 Attanasio et al. Dec 1994
5386413 McAuley et al. Jan 1995
5386567 Lien et al. Jan 1995
5390170 Sawant et al. Feb 1995
5390175 Hiller et al. Feb 1995
5394394 Crowther et al. Feb 1995
5394402 Ross Feb 1995
5400325 Chatwani et al. Mar 1995
5408469 Opher et al. Apr 1995
5416842 Aziz May 1995
5422880 Heitkamp et al. Jun 1995
5422882 Hiller et al. Jun 1995
5423002 Hart Jun 1995
5426636 Hiller et al. Jun 1995
5428607 Hiller et al. Jun 1995
5430715 Corbalis et al. Jul 1995
5430729 Rahnema Jul 1995
5442457 Najafi Aug 1995
5442630 Gagliardi et al. Aug 1995
5452297 Hiller et al. Sep 1995
5473599 Li et al. Dec 1995
5473607 Hausman et al. Dec 1995
5477541 White et al. Dec 1995
5485455 Dobbins et al. Jan 1996
5490140 Abensour et al. Feb 1996
5490258 Fenner Feb 1996
5491687 Christensen et al. Feb 1996
5491804 Heath et al. Feb 1996
5497368 Reijnierse et al. Mar 1996
5497478 Murata Mar 1996
5504747 Sweasey Apr 1996
5509006 Wilford et al. Apr 1996
5517494 Green May 1996
5519704 Farinacci et al. May 1996
5519858 Walton et al. May 1996
5526489 Nilakantan et al. Jun 1996
5530963 Moore et al. Jun 1996
5535195 Lee Jul 1996
5539734 Burwell et al. Jul 1996
5541911 Nilakantan et al. Jul 1996
5546370 Ishikawa Aug 1996
5546390 Stone Aug 1996
5555244 Gupta et al. Sep 1996
5561669 Lenney et al. Oct 1996
5583862 Callon Dec 1996
5592470 Rudrapatna et al. Jan 1997
5598581 Daines et al. Jan 1997
5600798 Cherukuri et al. Feb 1997
5602770 Ohira Feb 1997
5604868 Komine et al. Feb 1997
5608726 Virgile Mar 1997
5617417 Sathe et al. Apr 1997
5617421 Chin et al. Apr 1997
5630125 Zellweger May 1997
5631908 Saxe May 1997
5632021 Jennings et al. May 1997
5634010 Ciscon et al. May 1997
5638359 Peltola et al. Jun 1997
5644718 Belove et al. Jul 1997
5659684 Giovannoni et al. Aug 1997
5666353 Klausmeier et al. Sep 1997
5673265 Gupta et al. Sep 1997
5678006 Valizadeh et al. Oct 1997
5680116 Hashimoto et al. Oct 1997
5684797 Aznar et al. Nov 1997
5687324 Green et al. Nov 1997
5689506 Chiussi et al. Nov 1997
5694390 Yamato et al. Dec 1997
5724351 Chao et al. Mar 1998
5740097 Satoh Apr 1998
5748186 Raman May 1998
5748617 McLain, Jr. May 1998
5754547 Nakazawa May 1998
5781772 Wilkinson, III et al. Jul 1998
5787430 Doeringer et al. Jul 1998
5802054 Bellenger Sep 1998
5835710 Nagami et al. Nov 1998
5841874 Kempke et al. Nov 1998
5842224 Fenner Nov 1998
5854903 Morrison et al. Dec 1998
5856981 Voelker Jan 1999
5892924 Lyon et al. Apr 1999
5898686 Virgile Apr 1999
5903559 Acharya et al. May 1999
5909440 Ferguson et al. Jun 1999
5917821 Gobuyan et al. Jun 1999
5991817 Rowett et al. Nov 1999
6011795 Varghese et al. Jan 2000
6014659 Wilkinson, III et al. Jan 2000
6018524 Turner et al. Jan 2000
6023733 Periasamy et al. Feb 2000
6052683 Irwin Apr 2000
6061712 Tzeng May 2000
6067574 Tzeng May 2000
6115716 Tikkanen et al. Sep 2000
6141738 Munter et al. Oct 2000
6192051 Lipman et al. Feb 2001
6212184 Venkatachary et al. Apr 2001
Foreign Referenced Citations (7)
Number Date Country
0 384 758 A2 Aug 1990 EP
0 431 751 A1 Jun 1991 EP
0 567 217 A2 Oct 1993 EP
WO9307692 Apr 1993 WO
WO9307569 Apr 1993 WO
WO9401828 Jan 1994 WO
WO9520850 Aug 1995 WO
Non-Patent Literature Citations (15)
Entry
Pei et al., “VLSI Implementation of Routing Tables: tries and CAMs,” Proc. of INFOCOM '91, IEEE, pp. 515-524, Apr. 1991.*
M. Allen, “Novell IPX Over Various Wan Media (IPXWAN),” Novell, Inc. Dec. 1993, pp. 1-20.
Donald Becker, “3c589.c EtherLink3 Ethernet Driver for Linux,” 1994.
Shyamal Chowdhury and Kazem Sohraby, “Alternative Bandwidth Allocation Algorithms For Packet Video In ATM Networks,” Infocom 1992, IEEE, pp. 1061-1068.
W. Doeringer, G. Karjoth and M. Nassehi, “Routing on Longest-Matching Prefixes,” IEEE / ACM Transactions on Networking, vol. 4, No. 1, Feb. 1996, pp. 86-97.
Hiroshi Esaki, Yoshiyuki Tsuda, Takeshi Saito, and Shigeyasu Natsubori, “Datagram Delivery In An ATM-Internet,” IEICE Trans. Communication, vol. E77-B, No. 3, Mar. 1994.
IBM© Technical Disclosure Bulletin, “Method and Apparatus For The Statistical Multiplexing of Voice, Data, and Image Signals,” vol. 35, No. 6, IBM Corporation, Nov. 1992, pp. 409-411.
Tong-Bi Pei and Charles Zukowski, “Putting Routing Tables In Silicon,” IEEE Network, Jan. 1992, vol. 6, No. 1, pp. 42-50.
D. Perkins, “Requirements For An Internet Standard Point-to-Point Protocol,” Carnegie Mellon University, Dec. 1993.
W. Simpson, “The Point-to-Point Protocol (PPP),” Daydreamer, Dec. 1993, pp. 1-47.
Paul F. Tsuchiya, “A Search Algorithm For Table Entries With Non-Contiguous Wildcarding,” pp. 1-10, (unpublished paper 1992).
H. Zhang and D. Ferrari, “Rate-Controlled Static-Priority Queuing,” Computer Science Division, University of Berkeley, IEEE Network, Mar. 1993, pp. 227-235.
William Stallings Ph.D., “Data and Computer Communications,” Macmillan Publishing Company and Collier Macmillan Publishers 1985, pp. 328-334.
William Stallings, Ph.D., “Data and Computer Communications,” second edition, Macmillan Publishing Company, New York, and Collier Macmillan Publishers, London, pp. 328-334.
William Stallings, Ph.D., “Data and Computer Communications,” fourth edition, Macmillan Publishing Company, New York, Maxwell Macmillan Canada, Toronto, and Maxwell Macmillan International, pp. 328-334.