Method and apparatus for reordering packet data units in storage queues for reading and writing memory

Information

  • Patent Grant
  • Patent Number
    6,757,791
  • Date Filed
    Tuesday, March 30, 1999
  • Date Issued
    Tuesday, June 29, 2004
Abstract
A method and system for reordering data units that are to be written to, or read from, selected locations in a memory are described herein. The data units are reordered so that an order of accessing memory is optimal for speed of reading or writing memory, not necessarily an order in which data units were received or requested. Packets that are received at input interfaces are divided into cells, with cells being allocated to independent memory banks. Many such memory banks are kept busy concurrently, so cells (and thus the packets) are read into the memory as rapidly as possible. The system may include an input queue for receiving data units in a first sequence and a set of storage queues coupled to the input queue for receiving data units from the input queue. The data units may be written from the storage queues to the memory in an order other than the first sequence. The system may also include a disassembly element for generating data units from a packet and a reassembling element for reassembling a packet from the data units.
Description




BACKGROUND OF THE INVENTION




1. Field of the Invention




This invention relates to reordering data units for reading and writing memory, such as is used for packet buffering in a packet router.




In a computer network, routing devices receive messages at one of a set of input interfaces, and forward them on to one of a set of output interfaces. It is advantageous for such routing devices to operate as quickly as possible so as to keep up with the rate of incoming messages. As they are received at an input interface, packets are read from the input interface into a memory, a decision is made by the router regarding to which output interface the packet is to be sent, and the packet is read from the memory to the output interface.




One problem in the known art is that packets are often of differing lengths, so that storing the packet in memory can use multiple cells of that memory. This complicates the decision of where in the memory to record the packet, and, depending on the nature of the memory, can slow the operations of reading packets into memory or reading packets from memory.




This problem in the known art is exacerbated by the relative speed at which memory can read and write. As memory becomes ever faster, the desirability of exploiting the full speed of that memory becomes ever greater. This problem is particularly acute when the memory itself has a plurality of memory banks capable of operating concurrently. Moreover, this problem is exacerbated when memory transfers use a relatively wide data transfer width; transfers that require just one or a few bytes more than the maximum transfer width waste relatively more memory read and write bandwidth as the data transfer width becomes relatively larger.




Accordingly, it would be advantageous to provide a packet buffer memory that uses as much of the speed of the memory as possible, particularly when that memory has banks which are capable of operating concurrently. This advantage is achieved in an embodiment of the invention in which packets are divided into cells, cells are allocated to memory banks capable of operating concurrently, and packets are reconstructed from the cells that were recorded in memory. Writing cells into the memory and reading cells from the memory need not occur in the same order in which those cells are received.




SUMMARY OF THE INVENTION




The invention is directed to a method and system for reordering data units that are to be written to, or read from, selected locations in a memory. The data units are re-ordered so that an order of accessing memory (or portions thereof) is optimal for speed of reading or writing memory, not necessarily an order in which data units were received or requested.




The invention is applicable to a packet memory, and a method for operating that packet memory, so as to use as much memory speed as possible. Packets that are received at input interfaces are divided into cells, with the cells being allocated to independent memory banks. Many such memory banks are kept busy concurrently, so the cells (and thus the packets) are read into the memory as rapidly as possible. A set of first-in-first-out (FIFO) queues includes one queue for each such memory bank, and is disposed in a sequence of rows (herein called “stripes”) so as to have one queue element for each time slot to write to the memory. The FIFO queues can include cells in each stripe from more than one complete packet, so as to reduce the number of memory operations for multiple packets.
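The striping scheme described above can be sketched in software. This is a minimal model, not the patented hardware: the names (`NUM_BANKS`, `enqueue_cells`, `next_stripe`) are illustrative, four banks are assumed, and Python deques stand in for the per-bank FIFO queues.

```python
from collections import deque

NUM_BANKS = 4  # hypothetical number of independent memory banks

# One FIFO queue per memory bank; a "stripe" is one cell drawn
# from each queue, written to all banks in the same time slot.
queues = [deque() for _ in range(NUM_BANKS)]

def enqueue_cells(cells, start_bank=0):
    """Place sequential cells of a packet into sequential bank queues."""
    bank = start_bank
    for cell in cells:
        queues[bank].append(cell)
        bank = (bank + 1) % NUM_BANKS
    return bank  # the next packet continues where this one left off

def next_stripe():
    """Pop at most one cell per bank: one concurrent memory cycle."""
    return [q.popleft() if q else None for q in queues]

# Two packets share a stripe: cells A0..A2 of one packet and B0..B1
# of the next, so the first stripe keeps all four banks busy.
nxt = enqueue_cells(["A0", "A1", "A2"])
enqueue_cells(["B0", "B1"], start_bank=nxt)
print(next_stripe())
```

Note how cells from more than one packet occupy a single stripe, which is how the text reduces the number of memory operations for multiple packets.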




In a preferred embodiment, as packets are received, their packet information is disassembled into cells of a uniform size. The individual cells are mapped to sequential memory addresses, in response to the order in which they appear in packets, and in response to the packet queue(s) the packet is to be written to. When the memory is ready to read cells into the memory, a stripe of cells from those queues is read into the memory.




Similarly, for packets that are to be sent to output interfaces, cells can be located in the independent memory banks and read therefrom, so the cells (and thus the packets) are read out of the memory as rapidly as possible. Cells from the memory can be placed in individual queues for each memory bank. When the memory is ready to read cells out of the memory, one stripe of cells from those queues can be read out of the memory, and packets can be reassembled from those cells.




In a preferred embodiment, each stripe of cells to be read into or read out of the memory is filled, as much as possible, before the next memory cycle, so as to make most efficient use of the parallel capacity of the memory. Similarly, stripes of cells to be read into or read out of the memory are also filled, as much as possible, in advance, so that memory cycles can be performed rapidly without waiting for filling any individual memory bank queue. Either of these might involve advancing cells of one or more packets out of order from the rest of their packet, so as to maintain one or more memory banks busy.











BRIEF DESCRIPTION OF THE DRAWINGS





FIG. 1 shows a block diagram of an improved system for packet buffer memory use.





FIG. 2 shows a block diagram of the memory controllers in an improved system for packet buffer memory use.





FIG. 3 shows a process flow diagram of a method for using an improved system for packet buffer memory use.





FIG. 4 shows a timing diagram that illustrates the timing of activities occurring in an improved system for packet buffer memory use.











DESCRIPTION OF THE PREFERRED EMBODIMENT




In the following description, a preferred embodiment of the invention is described with regard to preferred process steps and data structures. Those skilled in the art would recognize after perusal of this application that embodiments of the invention can be implemented using processors or circuits adapted to particular process steps and data structures described herein, and that implementation of the process steps and data structures described herein would not require undue experimentation or further invention.




System Elements





FIG. 1 shows a block diagram of an improved system for packet buffer memory use.




A system 100 includes a set of line cards 110 and a switching crossbar fabric 120.




Each line card 110 includes a set of input interfaces 111, a set of output interfaces 112, and a set of (i.e., plural) control elements 130.




Each control element 130 is disposed for receiving packets 113 at the input interfaces 111 and for transmitting those packets 113 to the switching fabric 120 for processing by the same or another control element 130.




Each control element 130 includes a receive element 140, a first memory controller 150, a first memory 160, a second memory controller 150, a second memory 160, a fabric interface element 170, a transmit element 180, and control circuits 190.




The receive element 140 includes circuits disposed for receiving a sequence of packets 113 from a set of associated input interfaces 111, and for sending those packets 113 to the first memory controller 150. In a preferred embodiment, the receive element 140 is also disposed for performing relatively simple processing functions on the packets 113, such as computing and checking consistency for packet headers or Internet Protocol (IP) header checksum values.




The first memory controller 150 includes circuits disposed for the following functions:

  • receiving packets 113 from the receive element 140;
  • disassembling packets 113 into sequences of cells 151; and
  • storing (and scheduling for storing) cells 151 in the first memory 160.




Although referred to herein as a single physical memory, the first memory 160 can include more than one SDRAM operating concurrently under the control of the first memory controller 150. In a preferred embodiment, the first memory 160 includes two physical memories, each of which can operate concurrently or in parallel; however, there is no particular requirement for using more than one physical memory for operation of the invention.




Similar to the first memory controller 150, the second memory controller 150 includes circuits disposed for the following functions:

  • retrieving (and scheduling for retrieving) cells 151 from the second memory 160;
  • reassembling packets 113 from sequences of cells 151; and
  • sending packets 113 to the transmit element 180.




Similar to the first memory 160, although referred to herein as a single physical memory, the second memory 160 can include more than one SDRAM operating concurrently under the control of the second memory controller 150. In a preferred embodiment, the second memory 160 includes two physical memories, each of which can operate concurrently or in parallel; however, there is no particular requirement for using more than one physical memory for operation of the invention.




The fabric interface element 170 includes circuits disposed for sending packets 113 to the switching fabric 120, and for receiving packets 113 from the switching fabric 120.




The transmit element 180 includes circuits disposed for sending packets 113 to output interfaces 112.




The control circuits 190 provide for control of the control element 130 in accordance with techniques described herein.




Memory Controllers





FIG. 2 shows a block diagram of the memory controllers in an improved system for packet buffer memory use.




The first memory controller 150 and the second memory controller 150 have similar designs; accordingly, the description below is made with reference to a single memory controller and a single memory. The single memory controller could be either the first memory controller 150 or the second memory controller 150, while the single memory could be either the first memory 160 or the second memory 160, as appropriate.




Write Controller




The memory controller 150 includes a set of input queues 210, each disposed for receiving and storing packets 113.




The memory controller 150 includes, for each input queue 210, a disassembly element 220 disposed for disassembling packets 113 into sequences of cells 151. In a preferred embodiment, each cell 151 is of a uniform length (preferably 64 bytes). Thus, each packet 113 can consist of one or more cells 151.
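The disassembly step can be sketched as follows. This is a minimal illustration, assuming the 64-byte cell length stated above; `disassemble` is an illustrative name, and zero padding merely stands in for the "undetermined data" the text says the last cell may carry.

```python
CELL_SIZE = 64  # preferred uniform cell length per the text

def disassemble(packet: bytes):
    """Split a packet into fixed-size cells; the last cell is padded
    out to the uniform length when the packet is not an exact multiple."""
    cells = []
    for off in range(0, len(packet), CELL_SIZE):
        cell = packet[off:off + CELL_SIZE]
        if len(cell) < CELL_SIZE:
            # padding stands in for the residual "undetermined data"
            cell = cell.ljust(CELL_SIZE, b"\x00")
        cells.append(cell)
    return cells

cells = disassemble(b"x" * 150)   # a 150-byte packet
print(len(cells), len(cells[-1])) # three cells, each 64 bytes
```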




The memory controller 150 includes a memory queuing element 230, disposed for queuing cells 151 for storage in the memory 160.




The memory 160 (or, if there is more than one physical memory, each physical memory) includes a plurality of memory banks 161, each of which includes a segment of memory that is addressable by the memory 160 and separately usable by the memory 160.




For example, SDRAM memories having a plurality of memory banks are known in the art. One property of known memories having a plurality of banks is that the time during which the entire memory is busy (herein called “busy time”) in storing an element in one bank is less than the time during which the individual bank is busy (herein called “latency time”).




The memory queuing element 230 uses the difference between busy time and latency time for the memory 160 to access separate memory banks 161 of the memory 160. The memory queuing element 230 arranges the cells 151 so they can be stored into separate memory banks 161 in order, so that it can store cells 151 into separate memory banks 161 faster than if those cells 151 were stored into separate memory banks 161 at random.




The memory queuing element 230 includes a plurality of storage queues 231, one for each memory bank 161. Each storage queue 231 includes a set of cell locations 232, each of which holds a single cell 151, a storage queue head pointer 233, and a storage queue tail pointer 234.
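A storage queue of this shape, a fixed set of cell locations managed by head and tail pointers, can be modeled as a ring buffer. This is a sketch of the data structure only, with an assumed capacity; the class and method names are illustrative, not from the patent.

```python
class StorageQueue:
    """Fixed cell locations with head and tail pointers, mirroring
    items 232, 233, and 234 (a software sketch, not the hardware)."""

    def __init__(self, capacity=8):
        self.cells = [None] * capacity  # cell locations (item 232)
        self.head = 0                   # next cell to store to memory (233)
        self.tail = 0                   # next free location (234)
        self.count = 0

    def push(self, cell):
        assert self.count < len(self.cells), "queue full"
        self.cells[self.tail] = cell
        self.tail = (self.tail + 1) % len(self.cells)
        self.count += 1

    def pop(self):
        assert self.count > 0, "queue empty"
        cell = self.cells[self.head]
        self.head = (self.head + 1) % len(self.cells)
        self.count -= 1
        return cell

q = StorageQueue()
q.push("cell-0")
q.push("cell-1")
print(q.pop())  # FIFO order: cell-0 comes out first
```

Within one queue the order is strictly first-in-first-out, matching the preferred embodiment in which the dequeuing element does not reorder cells inside any single storage queue.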




In sequence, at a speed consistent with the busy time of the memory 160, the memory queuing element 230 commands memory banks 161 to store single cells 151 from the cell location 232 denoted by the storage queue head pointer 233.




Memory banks 161 are therefore used to store single cells 151 at a speed consistent with the busy time, rather than the latency time, of the memory 160. Where there are two or more physical memories, the memory queuing element 230 commands those two or more physical memories to operate concurrently or in parallel, so that storage bandwidth into each physical memory can be as fully utilized as possible.




The memory controller 150 includes a dequeuing element 240, disposed for dequeuing cells 151 from the storage queues 231 for storage in the memory 160.




The dequeuing element 240 stores one cell 151 in sequence from one cell location 232 at the head of each storage queue 231 into its designated location in the memory 160. The dequeuing element 240 updates, for each storage queue 231, the storage queue head pointer 233 and the storage queue tail pointer 234.




In a preferred embodiment, the dequeuing element 240 stores the cells 151 in the memory 160 in the sequence in which they were entered into each storage queue 231 (that is, the dequeuing element 240 does not reorder cells 151 within any of the storage queues 231). In alternative embodiments, the dequeuing element 240 may reorder cells 151 within the storage queues 231 to achieve greater speed at writing to the memory 160.




Read Memory Controller




The memory controller 150 includes a memory reading element 250, disposed for reading cells 151 from the memory 160 for transmission.




The following description is similar to the operation of the memory queuing element 230.




The memory reading element 250 may identify packets 113 that are to be sent to output interfaces in response to their location in the memory 160 (that is, in selected areas of the memory 160 designated for associated output interfaces).




The memory reading element 250 may read the cells 151 for those packets 113 in sequence from their locations in the memory 160 into a set of reassembly queues 251, similar to the storage queues 231. The memory reading element 250 may reassemble the packets 113 from the cells 151 located in the reassembly queues 251, similar to the disassembly of the packets 113 into cells 151 for placement in the storage queues 231.
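The reassembly direction can be sketched the same way: cells land in per-bank reassembly queues, and packets are rebuilt by reading across those queues in sequence. The function name and the explicit `cell_counts` bookkeeping are illustrative assumptions; the patent does not specify how the controller tracks packet boundaries.

```python
from collections import deque

def reassemble(reassembly_queues, cell_counts, start_bank=0):
    """Rebuild packets by draining per-bank reassembly queues in
    sequence; cell_counts says how many cells each packet holds
    (bookkeeping the real controller would track somehow)."""
    queues = [deque(q) for q in reassembly_queues]
    bank = start_bank
    packets = []
    for count in cell_counts:
        cells = []
        for _ in range(count):
            cells.append(queues[bank].popleft())
            bank = (bank + 1) % len(queues)
        packets.append(b"".join(cells))
    return packets

# Cells striped across two banks: packet A = A0+A1+A2, packet B = B0.
queues = [[b"A0", b"A2"], [b"A1", b"B0"]]
print(reassemble(queues, [3, 1]))  # [b'A0A1A2', b'B0']
```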




Once packets 113 are reassembled, they are sent to a set of output queues 252, each of which is associated with a selected output interface 112. From each selected output interface 112, packets 113 are sent to the associated fabric interface element 170 or transmit element 180.




Timing Diagram





FIG. 4 shows a timing diagram that illustrates the timing of activities occurring in an improved system for packet buffer memory use.




A timing diagram 400 includes an X axis 401 showing progressive time, and a Y axis 402 showing individual memory banks 161 in the memory 160.




A first trace 410 shows a busy time 411 during which the entire memory 160 is busy for writing to a first memory bank 161, and a latency time 412 during which the first memory bank 161 is busy but the rest of the memory 160 is not necessarily busy.




Similarly, a second trace 420 shows a similar busy time 421 and latency time 422 for writing to a second memory bank 161.




Similarly, a third trace 430 shows a similar busy time 431 and latency time 432 for writing to a third memory bank 161.




Similarly, a fourth trace 440 shows a similar busy time 441 and latency time 442 for writing to a fourth memory bank 161.




The timing diagram 400 shows that operation of the memory 160 proceeds at an effective rate equal to L/B times the ordinary storage speed of the memory 160, where L = latency time and B = busy time.
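The L/B speedup is easy to work through with concrete numbers. The figures below are purely illustrative; the patent gives no timings, and the rule of thumb that roughly L/B banks are needed to sustain the rate is an assumption of this sketch.

```python
# Illustrative timings only; the patent specifies no concrete values.
busy_time = 10      # ns the whole memory is tied up per bank write (B)
latency_time = 40   # ns the individual bank stays busy (L)

# Issuing one write per busy interval to rotating banks sustains
# L/B times the single-bank rate, provided roughly L/B banks exist
# so each bank's latency has elapsed before it is addressed again.
speedup = latency_time / busy_time
banks_needed = latency_time // busy_time
print(speedup, banks_needed)  # 4.0 4
```

With these numbers, four banks accept one cell every 10 ns in rotation, for four times the throughput of writing repeatedly to a single bank.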




Writing Stripes




The memory queuing element 230 arranges cells 151 in the storage queues 231 so that an entire set of cell locations 232, one for each memory bank 161 (herein called a “stripe”), can be stored into the memory 160 concurrently.




The memory queuing element 230 arranges sequential cells 151 from packets 113 in sequential storage queues 231, so that when those sequential cells 151 are stored into the memory 160, they are stored into sequential locations therein. However, those sequential cells 151 are written into the memory 160 in stripe order, thus not necessarily in the order in which they arrived in packets 113.
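The difference between arrival order and stripe order can be shown with a small sketch. The scenario and the `stripe_write_order` name are illustrative: one packet's cells happen to map to the same bank, so a later packet's cell is advanced ahead of them to keep the other bank busy.

```python
from collections import deque

def stripe_write_order(per_bank_queues):
    """Yield cells in the order they reach memory: stripe by stripe,
    at most one cell per bank each memory cycle."""
    queues = [deque(q) for q in per_bank_queues]
    while any(queues):
        for q in queues:          # one memory cycle across all banks
            if q:
                yield q.popleft()

# Packet A's two cells both map to bank 0; packet B's cell maps to
# bank 1, so B0 is advanced ahead of A1 to keep bank 1 busy.
order = list(stripe_write_order([["A0", "A1"], ["B0"]]))
print(order)  # ['A0', 'B0', 'A1']
```

Cell B0 reaches memory before A1 even though it arrived later, which is exactly the out-of-order advancement the text describes.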




Although sequential cells 151 are written into the memory 160 in stripe order, they are not necessarily written into sequential locations in the memory 160. Thus, a single stripe can include cells 151 to be written into different areas of the memory 160, in particular in a preferred embodiment where those different areas of the memory 160 are associated with different packet queues.




Method of Operation





FIG. 3 shows a process flow diagram of a method for using an improved system for packet buffer memory use.




A method 300 is performed by the system 100, including the plural line cards 110, each having plural control elements 130, and the switching (crossbar) fabric 120. Each control element 130 includes the receive element 140, the first memory controller 150, the first memory 160, the second memory controller 150, the second memory 160, the fabric interface element 170, the transmit element 180, and the control circuits 190.




Receiving Packets




At a flow point 310, the system 100 is ready to receive a packet 113.




At a step 311, the receive element 140 receives the packet 113 and stores the packet 113 in an input queue 210.




At a step 312, the disassembly element 220 disassembles the packet 113 into one or more cells 151. Each cell 151 has a uniform length, preferably 64 bytes. If the packet 113 is not an exact multiple of the cell length, the last cell 151 in that packet 113 can contain undetermined data which had been stored in the cell 151 at an earlier time.




At a step 313, the memory queuing element 230 places the sequence of cells 151 for the packet 113 in a related sequence of storage queues 231. As each cell 151 is placed in its related storage queue 231, the memory queuing element 230 updates the tail pointer for that storage queue 231.




Storing Cells into Memory




At a flow point 320, the system 100 is ready to store cells 151 in the memory 160. The method 300 performs those steps following this flow point in parallel with those steps following other flow points.




At a step 321, the dequeuing element 240 writes cells 151 in sequence from the head of each storage queue 231 to its designated location in the memory 160.




At a step 322, the dequeuing element 240 updates, for each storage queue 231, the storage queue head pointer 233 and the storage queue tail pointer 234.




Reading Cells from Memory




At a flow point 330, the system 100 is ready to read cells 151 from the memory 160. The method 300 performs those steps following this flow point in parallel with those steps following other flow points.




The following description is similar to operation following the flow point 320.




At a step 331, the memory reading element 250 identifies, in response to their location in the memory 160, packets 113 that are to be sent to output interfaces 112.




At a step 332, the memory reading element 250 reads the cells 151 for those packets 113 in sequence from their locations in the memory 160 into the reassembly queues 251.




At a step 333, the memory reading element 250 reassembles packets 113 from those cells 151.




Transmitting Packets




At a flow point 340, the system 100 is ready to transmit packets 113. The method 300 performs those steps following this flow point in parallel with those steps following other flow points.




At a step 341, reassembled packets 113 are sent to a set of output queues 252, each of which is associated with a selected output interface 112.




At a step 342, from each selected output interface 112, packets 113 are sent to the associated fabric interface element 170 (for the first memory controller 150) or to the transmit element 180 (for the second memory controller 150).




Generality of the Invention




The invention has substantial generality of application to various fields in which data is reordered for writing into or reading from a storage device. These fields include one or more of, or a combination of, any of the following (or any related fields):




Routing packets, frames or cells in a datagram routing network, in a virtual circuit network, or in another type of network. This application could include packet routing systems, frame relay systems, asynchronous transfer mode (ATM) systems, satellite uplink and downlink systems, voice and data systems, and related systems.




Queueing packets, frames or cells in a scheme in which different queues correspond to differing quality of service or allocation of bandwidth for transmission. This application could include broadcast, narrowcast, multicast, simulcast, or other systems in which information is sent to multiple recipients concurrently.




Operating in parallel using multiple memory banks or related storage devices, or using memory banks or related storage devices that have the capability to perform true parallel operation.




Reordering data for processing by multiple components, including either hardware processor components or software components. This application could include reordering of data that is not necessarily fixed length, such as the cells used in the preferred embodiment.




Alternative Embodiments




Although preferred embodiments are disclosed herein, many variations are possible which remain within the concept, scope, and spirit of the invention, and these variations would become clear to those skilled in the art after perusal of this application.



Claims
  • 1. A method including steps for receiving data units for writing to a memory, said data units being received in a first sequence; storing said data units in a set of storage queues, wherein storing includes advancing a number of said data units out of said first sequence; and writing data units from said set of storage queues to a plurality of regions of said memory, in an order other than the order of said first sequence.
  • 2. A method as in claim 1, wherein writing said data units in an order other than the order of said first sequence enables concurrent access to a plurality of independent memory banks comprising said memory.
  • 3. A method as in claim 1, wherein the size of said data units is uniform.
  • 4. A method as in claim 1, wherein said memory has a busy time for each of said regions; said memory has a latency time for each of said regions, said latency time being greater than said busy time for each region; whereby the writing of data units occurs with less latency than writing said data units in said first sequence.
  • 5. A method as in claim 1, wherein said plurality of regions are located in memory elements capable of operating in parallel.
  • 6. A method as in claim 1, wherein said step for storing is performed in an order of said first sequence.
  • 7. A method as in claim 1, wherein the writing of data units occurs in an order that is optimal for writing said data units into said memory with respect to at least one of the following parameters: (1) latency and (2) speed.
  • 8. A method as in claim 1, further including steps for reading data units from said memory in a sequence; storing said sequence of data units in a set of reassembly queues; and reordering said data units as they are output from said reassembly queues to a set of output queues.
  • 9. A method as in claim 1, wherein said plurality of regions are located in a plurality of independent memory banks comprising said memory.
  • 10. A method as in claim 9, wherein said plurality of independent memory banks operate concurrently.
  • 11. A method as in claim 1, wherein said first sequence of data units comprises a sequence in which said data units are present in a packet, said method further including steps for receiving said packet; and disassembling said packet into said data units.
  • 12. A method as in claim 11, further including steps for reading data units from said memory; and reassembling said packet from said data units read from said memory.
  • 13. A memory controller apparatus coupled to a memory, said apparatus including an input queue capable of receiving data units in a first sequence; a set of storage queues coupled to said input queue and receiving said data units from said input queue such that adjacent data units in said sequence are stored in differing storage queues, wherein a number of said data units are advanced out of said first sequence; and a dequeuing element coupled to said storage queues, wherein said dequeuing element controls the writing of said data units from said storage queues to said memory in an order other than said first sequence.
  • 14. Apparatus as in claim 13, wherein the size of said data units is uniform.
  • 15. Apparatus as in claim 13, wherein said dequeuing element is disposed for reordering said data units stored in said storage queues so as to maintain busy a plurality of memory banks that operate independently in said memory.
  • 16. Apparatus as in claim 13, wherein said dequeuing element operates to write said data units in an order that is optimal for writing into said memory with respect to at least of one of the following parameters: (1) latency and (2) speed.
  • 17. Apparatus as in claim 13, wherein said memory has a busy time for each of said regions; said memory has a latency time for each of said regions, said latency time being greater than said busy time for each said region; and said dequeuing element operates to write to said memory with less latency than writing said data units in said first sequence.
  • 18. Apparatus as in claim 13, wherein said memory includes a plurality of memory elements capable of operating in parallel.
  • 19. Apparatus as in claim 13, wherein said first sequence of data units comprises a sequence in which said data units are present in a packet, and wherein said apparatus further includes a disassembly element capable of generating one or more of said data units from said packet.
  • 20. Apparatus as in claim 19, wherein said memory controller is disposed forreading said data units from said memory; and reassembling said packet from said data units read from said memory.
  • 21. Apparatus as in claim 13, wherein said memory includes a plurality of independent memory banks.
  • 22. Apparatus as in claim 21, wherein a plurality of said memory banks operate concurrently.
  • 23. A method of rapidly storing packets in a memory of a packet-switching router, comprising the computer-implemented steps of: receiving a plurality of packets in a set of input queues for writing to a memory; disassembling said packets into data units in a first sequence; storing said data units in a set of storage queues, wherein said step of storing includes advancing a number of said data units out of said first sequence; dequeuing said data units from said storage queues; and writing said data units from said set of storage queues to a plurality of regions of said memory in an order other than the order of said first sequence.
  • 24. A method as recited in claim 23, further comprising the steps of:reading data units from said memory in a sequence; storing said sequence of data units in a set of reassembly queues; and reordering said data units as they are output from said reassembly queues to a set of output queues.
  • 25. A memory controller apparatus coupled to a memory, said apparatus including a set of input queues capable of receiving packets; a disassembly element coupled to said set of input queues, said disassembly element disassembling said packets into data units in a first sequence; a set of storage queues coupled to said disassembly element and receiving said data units from said disassembly element such that adjacent data units in said first sequence are stored in differing storage queues, wherein a number of data units are advanced out of said first sequence; and a dequeuing element coupled to said storage queues, wherein said dequeuing element controls the writing of said data units from said storage queues to said memory in an order other than said first sequence, said dequeuing element further dequeuing said data units from said storage queues.
  • 26. Apparatus as in claim 25, wherein said apparatus further includes a reading element capable of reading data units from a memory in a first sequence; a set of reassembly queues coupled to said reading element and capable of storing said data units; and a reassembly element coupled to said reassembly queues, said reassembly element reordering and outputting said data units to a set of output queues.
5253251 Aramaki Oct 1993 A
5255291 Holden et al. Oct 1993 A
5260933 Rouse Nov 1993 A
5260978 Fleischer et al. Nov 1993 A
5268592 Bellamy et al. Dec 1993 A
5268900 Hluchyj et al. Dec 1993 A
5271004 Proctor et al. Dec 1993 A
5274631 Bhardwaj Dec 1993 A
5274635 Rahman et al. Dec 1993 A
5274643 Fisk Dec 1993 A
5280470 Buhrke et al. Jan 1994 A
5280480 Pitt et al. Jan 1994 A
5280500 Mazzola et al. Jan 1994 A
5283783 Nguyen et al. Feb 1994 A
5287103 Kasprzyk et al. Feb 1994 A
5287453 Roberts Feb 1994 A
5291482 McHarg et al. Mar 1994 A
5305311 Lyles Apr 1994 A
5307343 Bostica et al. Apr 1994 A
5309437 Perlman et al. May 1994 A
5311509 Heddes et al. May 1994 A
5313454 Bustini et al. May 1994 A
5313582 Hendel et al. May 1994 A
5317562 Nardin et al. May 1994 A
5319644 Liang Jun 1994 A
5327421 Hiller et al. Jul 1994 A
5331637 Francis et al. Jul 1994 A
5339311 Turner Aug 1994 A
5345445 Hiller et al. Sep 1994 A
5345446 Hiller et al. Sep 1994 A
5359592 Corbalis et al. Oct 1994 A
5361250 Nguyen et al. Nov 1994 A
5361256 Doeringer et al. Nov 1994 A
5361259 Hunt et al. Nov 1994 A
5365524 Hiller et al. Nov 1994 A
5367517 Cidon et al. Nov 1994 A
5371852 Attanasio et al. Dec 1994 A
5386567 Lien et al. Jan 1995 A
5390170 Sawant et al. Feb 1995 A
5390175 Hiller et al. Feb 1995 A
5394394 Crowther et al. Feb 1995 A
5394402 Ross Feb 1995 A
5400325 Chatwani et al. Mar 1995 A
5408469 Opher et al. Apr 1995 A
5416842 Aziz May 1995 A
5422880 Heitkamp et al. Jun 1995 A
5422882 Hiller et al. Jun 1995 A
5423002 Hart Jun 1995 A
5426636 Hiller et al. Jun 1995 A
5428607 Hiller et al. Jun 1995 A
5430715 Corbalis et al. Jul 1995 A
5430729 Rahnema Jul 1995 A
5442457 Najafi Aug 1995 A
5442630 Gagliardi et al. Aug 1995 A
5452297 Hiller et al. Sep 1995 A
5473599 Li et al. Dec 1995 A
5473607 Hausman et al. Dec 1995 A
5477541 White et al. Dec 1995 A
5483523 Nederlof Jan 1996 A
5485455 Dobbins et al. Jan 1996 A
5490140 Abensour et al. Feb 1996 A
5491687 Christensen et al. Feb 1996 A
5491804 Heath et al. Feb 1996 A
5497368 Reijnierse et al. Mar 1996 A
5504747 Sweazey Apr 1996 A
5509006 Wilford et al. Apr 1996 A
5517494 Green May 1996 A
5519700 Punj May 1996 A
5519704 Farinacci et al. May 1996 A
5519858 Walton et al. May 1996 A
5526489 Nilakantan et al. Jun 1996 A
5530963 Moore et al. Jun 1996 A
5535195 Lee Jul 1996 A
5539734 Burwell et al. Jul 1996 A
5541911 Nilakantan et al. Jul 1996 A
5546370 Ishikawa Aug 1996 A
5555244 Gupta et al. Sep 1996 A
5561669 Lenney et al. Oct 1996 A
5583862 Callon Dec 1996 A
5592470 Rudrapatna et al. Jan 1997 A
5598581 Daines et al. Jan 1997 A
5600798 Cherukuri et al. Feb 1997 A
5604868 Komine et al. Feb 1997 A
5608726 Virgile Mar 1997 A
5617417 Sathe et al. Apr 1997 A
5617421 Chin et al. Apr 1997 A
5629927 Waclawsky et al. May 1997 A
5630125 Zellweger May 1997 A
5631908 Saxe May 1997 A
5632021 Jennings et al. May 1997 A
5633865 Short May 1997 A
5634010 Ciscon et al. May 1997 A
5638359 Peltola et al. Jun 1997 A
5640399 Rostoker et al. Jun 1997 A
5644718 Belove et al. Jul 1997 A
5659684 Giovannoni et al. Aug 1997 A
5666353 Klausmeier et al. Sep 1997 A
5673265 Gupta et al. Sep 1997 A
5678006 Valizadeh et al. Oct 1997 A
5680116 Hashimoto et al. Oct 1997 A
5684797 Aznar et al. Nov 1997 A
5687324 Green et al. Nov 1997 A
5689506 Chiussi et al. Nov 1997 A
5694390 Yamato et al. Dec 1997 A
5724351 Chao et al. Mar 1998 A
5732041 Joffe Mar 1998 A
5740402 Bratt et al. Apr 1998 A
5748186 Raman May 1998 A
5748617 McLain, Jr. May 1998 A
5754547 Nakazawa May 1998 A
5802054 Bellenger Sep 1998 A
5809415 Rossmann Sep 1998 A
5822772 Chan et al. Oct 1998 A
5835710 Nagami et al. Nov 1998 A
5854903 Morrison et al. Dec 1998 A
5856981 Voelker Jan 1999 A
5859856 Oskouy et al. Jan 1999 A
5862136 Irwin Jan 1999 A
5870382 Tounai et al. Feb 1999 A
5892924 Lyon et al. Apr 1999 A
5898686 Virgile Apr 1999 A
5903559 Acharya et al. May 1999 A
5905725 Sindhu et al. May 1999 A
6038646 Sproull Mar 2000 A
6137807 Rusu et al. Oct 2000 A
6144637 Calvignac et al. Nov 2000 A
6259699 Opalka et al. Jul 2001 B1
6272567 Pal et al. Aug 2001 B1
6487202 Klausmeier et al. Nov 2002 B1
6493347 Sindhu et al. Dec 2002 B2
Foreign Referenced Citations (7)
Number Date Country
0 384 758 Aug 1990 EP
0 431 751 Jun 1991 EP
WO 9307569 Apr 1993 WO
WO 9307692 Apr 1993 WO
0 567 217 Oct 1993 WO
WO 9401828 Jan 1994 WO
WO 9520850 Aug 1995 WO
Non-Patent Literature Citations (12)
Entry
William Stallings, Data and Computer Communications, pp.: 329-333, Prentice Hall, Upper Saddle River, New Jersey 07458.
Allen, M., “Novell IPX Over Various WAN Media (IPX AN),” Network Working Group, RFC 1551, Dec. 1993, pp. 1-22.
Becker, D., “3c589.c: A 3c589 EtherLink3 ethernet driver for linux,” becker@CESDIS.gsfc.nasa.gov, May 3, 1994, pp. 1-13.
Chowdhury, et al., “Alternative Bandwidth Allocation Algorithms for Packet Video in ATM Networks,” INFOCOM 1992, pp. 1061-1068.
Doeringer, W., “Routing on Longest-Matching Prefixes,” IEEE/ACM Transactions in Networking, vol. 4, No. 1, Feb. 1996, pp. 86-97.
Esaki, et al., “Datagram Delivery in an ATM-Internet,” 2334b IEICE Transactions on Communications, Mar. 1994, No. 3, Tokyo, Japan.
IBM Corporation, “Method and Apparatus for the Statistical Multiplexing of Voice, Data and Image Signals,” IBM Technical Disclosure Bulletin, No. 6, Nov. 1992, pp. 409-411.
Pei, et al., “Putting Routing Tables in Silicon,” IEEE Network Magazine, Jan. 1992, pp. 42-50.
Perkins, D., “Requirements for an Internet Standard Point-to-Point Protocol,” Network Working Group, RFC 1547, Dec. 1993, pp. 1-19.
Simpson, W., “The Point-to-Point Protocol (PPP),” Network Working Group, RFC 1548, Dec. 1993, pp. 1-53.
Tsuchiya, P.F., “A Search Algorithm for Table Entries with Non-Contiguous Wildcarding,” Abstract, Bellcore.
Zhang, et al., “Rate-Controlled Static-Priority Queueing,” INFOCOM 1993, pp. 227-236.