Queue arrays in network devices

Information

  • Patent Number
    8,380,923
  • Date Filed
    Monday, November 8, 2010
  • Date Issued
    Tuesday, February 19, 2013
Abstract
A queue descriptor, including a head pointer pointing to the first element in a queue and a tail pointer pointing to the last element in the queue, is stored in memory. In response to a command to perform an enqueue or dequeue operation with respect to the queue, only one of the head pointer or the tail pointer is fetched from the memory to a cache, and the portions of the queue descriptor modified by the operation are returned from the cache to the memory.
Description
BACKGROUND

This invention relates to utilizing queue arrays in network devices.


Some network devices, such as routers and switches, have line speeds faster than 10 Gigabits per second. For maximum efficiency, the network devices' processors should be able to process data packets, including storing them to and retrieving them from memory, at a rate at least equal to the line rate. However, current network devices may lack the necessary bandwidth between their processors and memory to process data packets at the devices' line speeds.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of a network system.



FIG. 2 is a block diagram of a network device.



FIG. 3 shows a queue and queue descriptor.



FIG. 4 is a block diagram of a network processor's cache.



FIG. 5 is a flow chart illustrating an enqueue operation.



FIG. 6 is a flow chart illustrating a dequeue operation.



FIG. 7 is a flow chart illustrating a fetch operation.





DETAILED DESCRIPTION

As shown in FIG. 1, a network system 2 for processing data packets includes sources of data packets 4 coupled to a network device 6 and destinations for data packets 8 coupled to the network device 6. The network device 6 includes a processor 10 with memory data structures configured to receive, store and forward the data packets to a specified destination. The network device 6 can include a network switch, a network router or other network device. The source of data packets 4 can include other network devices connected over a communications path operating at high data packet transfer line speeds. Examples of such communications paths include an optical carrier (OC)-192 line, and a 10-Gigabit line. Likewise, the destination 8 of data packets also can include other network devices as well as a similar network connection.


As shown in FIG. 2, the network device 6 includes memory 14 coupled to the processor 10. The memory 14 stores output queues 18 and their corresponding queue descriptors 20. Upon receiving a data packet from a source 4 (FIG. 1), the processor 10 performs enqueue and dequeue operations to process the packet. An enqueue operation adds to one of the output queues 18 information that arrived in a data packet and was previously stored in memory 14, and updates the queue's corresponding queue descriptor 20. A dequeue operation removes information from one of the output queues 18 and updates the corresponding queue descriptor 20, thereby allowing the network device 6 to transmit the information to an appropriate destination 8.


An example of an output queue 18 and its corresponding queue descriptor is shown in FIG. 3. The output queue 18 includes a linked list of elements 22, each of which contains a pointer 24 to the next element 22 in the output queue 18. A function of the address of each element 22 implicitly maps to the information 26 stored in the memory 14 that the element 22 represents. For example, the first element 22a of output queue 18 shown in FIG. 3 is located at address A. The location in memory of the information 26a that element 22a represents is implicit from the element's address A, illustrated by dashed arrow 27a. Element 22a contains the address B, which serves as a pointer 24 to the next element 22b in the output queue 18, located at address B.


The queue descriptor 20 includes a head pointer 28, a tail pointer 30 and a count 32. The head pointer 28 points to the first element 22 of the output queue 18, and the tail pointer 30 points to the last element 22 of the output queue 18. The count 32 identifies the number (N) of elements 22 in the output queue 18.
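The element and queue-descriptor layouts described above might be modeled in C as follows. This is a minimal sketch: the field widths, the constants `INFO_BASE` and `INFO_STRIDE`, and the particular mapping function are illustrative assumptions, not taken from the patent, which leaves the implicit mapping unspecified.

```c
#include <stdint.h>

/* One queue element 22: it holds only the pointer 24 to the next
   element; the information 26 it represents is not stored here. */
typedef struct element {
    struct element *next;   /* pointer 24 to the next element 22 */
} element_t;

/* Hypothetical implicit mapping (dashed arrow 27 in FIG. 3): the
   location of information 26 is a function of the element's address. */
#define INFO_BASE   0x100000u
#define INFO_STRIDE 64u

static inline uintptr_t info_addr(const element_t *e, const element_t *pool)
{
    /* element index within its pool selects an information slot */
    return INFO_BASE + (uintptr_t)(e - pool) * INFO_STRIDE;
}

/* Queue descriptor 20 as described for FIG. 3. */
typedef struct queue_descriptor {
    element_t *head;   /* head pointer 28: first element 22 */
    element_t *tail;   /* tail pointer 30: last element 22 */
    uint32_t   count;  /* count 32: number N of elements */
} queue_descriptor_t;
```

Because the element carries no payload pointer, an element fits in a single word, which keeps the descriptor and list traffic between cache and memory small.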


Enqueue and dequeue operations for a large number of output queues 18 in memory 14 at high bandwidth line rates can be accomplished by storing some of the queue descriptors 20 in a cache 12 at the processor's 10 memory controller 16 (FIG. 2). Commands to perform enqueue or dequeue operations reference queue descriptors 20 presently stored in the cache 12. When an enqueue or a dequeue operation is required with respect to a queue descriptor 20 that is not presently in the cache 12, the processor 10 issues commands to the memory controller 16 to remove a queue descriptor 20 from the cache 12 to the memory 14 and to fetch a new queue descriptor 20 from memory 14 for storage in the cache 12. In this manner, modifications to a queue descriptor 20 made by enqueue and dequeue operations occur in the cache 12 and are copied to the corresponding queue descriptor 20 in memory 14 upon removal of that queue descriptor 20 from the cache 12.


In order to reduce the read and write operations between the cache 12 and the memory 14, it is possible to fetch and return only those parts of the queue descriptor 20 necessary for the enqueue or dequeue operations.



FIG. 4 illustrates the contents of the cache 12 used to accomplish this function according to one particular implementation. In addition to a number of queue descriptors 20 corresponding to some of the queue descriptors stored in the memory 14, the cache 12 designates a head pointer valid bit 34 and a tail pointer valid bit 36 for each queue descriptor 20 it stores. The valid bits are set when the pointers to which they correspond are modified while stored in the cache 12. The cache 12 also tracks the frequency with which queue descriptors have been used. When a command requires the removal of a queue descriptor, the least-recently-used (“LRU”) queue descriptor 20 is returned to memory 14.
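The per-entry cache state just described might look as follows in software. The queue-identifier tag and the LRU timestamp are assumptions about one plausible bookkeeping scheme; the patent specifies only the valid bits and least-recently-used replacement.

```c
#include <stdbool.h>
#include <stdint.h>

/* One entry of the cache 12 (FIG. 4). */
typedef struct cache_entry {
    uint32_t queue_id;    /* which queue descriptor 20 this entry caches */
    uint32_t head, tail;  /* head pointer 28 and tail pointer 30 */
    uint32_t count;       /* count 32 */
    bool head_valid;      /* head pointer valid bit 34 */
    bool tail_valid;      /* tail pointer valid bit 36 */
    uint64_t last_used;   /* usage tracking for LRU replacement */
} cache_entry_t;

/* Select the least-recently-used entry for return to memory 14. */
static int lru_victim(const cache_entry_t *c, int n)
{
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (c[i].last_used < c[victim].last_used)
            victim = i;
    return victim;
}
```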


As illustrated by FIG. 5, when performing an enqueue operation, the processor 10 checks 40 if a queue descriptor 20 for the particular queue 18 to which the information will be attached is in the cache 12. If it is not, the processor 10 removes 42 the least-recently-used queue descriptor 20 from the cache 12 to make room for the requested queue descriptor. The tail pointer 30 and count 32 of the requested queue descriptor 20 are fetched 44 from memory 14 and stored in the cache 12, and the tail pointer valid bit (Vbit) 36 is set 46. The processor 10 then proceeds with the enqueue operation at block 60.


If (at block 40) the queue descriptor 20 for the particular requested queue 18 is already in the cache 12, the processor 10 checks 48 whether the tail pointer valid bit 36 has been set. If it has not been set, the tail pointer 30 is fetched 50 from memory 14 and stored in the queue descriptor 20 in the cache 12, and the tail pointer valid bit 36 is set 46. The processor 10 then proceeds with the enqueue operation at block 60. If (at block 48) the tail pointer valid bit 36 has been set, the processor proceeds directly to the enqueue operation at block 60.


In block 60, the processor 10 determines whether the output queue 18 is empty by checking whether the count 32 is zero. If the count 32 is zero, the output queue 18 is empty (it has no elements 22 in it). The address of the new element 22, which implicitly maps to the new information 26 already in the memory 14, is written 62 to both the head pointer 28 and the tail pointer 30 in the cache 12 as the new (and only) element 22 in the output queue 18. The count 32 is set 64 to one and the head pointer valid bit is set 66.


If (at block 60) the count 32 is not zero and the output queue 18 is therefore not empty, the processor links 68 the address of a new element 22, representing the new information 26, to the pointer 24 of the last element 22. Thus the pointer 24 of the last element 22 in the queue 18 points to the new element 22. The processor 10 writes 70 the address of this new element 22 to the tail pointer 30 of the queue descriptor 20 in the cache 12 and increments 72 the count by one, and the enqueue operation is then complete.
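The enqueue path of FIG. 5 (blocks 60 through 72), once the descriptor is in the cache, can be sketched as a small software model. This assumes the tail pointer is already valid; the struct layout and field names are illustrative, not the patent's.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct element {
    struct element *next;   /* pointer 24 to the next element 22 */
} element_t;

typedef struct {
    element_t *head;        /* head pointer 28 */
    element_t *tail;        /* tail pointer 30 */
    uint32_t   count;       /* count 32 */
    int        head_valid;  /* valid bit 34 */
    int        tail_valid;  /* valid bit 36 */
} qdesc_t;

static void enqueue(qdesc_t *q, element_t *e)
{
    e->next = NULL;
    if (q->count == 0) {
        /* empty queue: the new element becomes both head and tail
           (block 62); count is set to one (64), head valid bit set (66) */
        q->head = q->tail = e;
        q->count = 1;
        q->head_valid = 1;
    } else {
        /* link the new element after the current last element (68),
           update the tail pointer (70), and increment the count (72) */
        q->tail->next = e;
        q->tail = e;
        q->count++;
    }
    q->tail_valid = 1;   /* the tail pointer was modified in the cache */
}
```

Note that the head pointer is touched only in the empty-queue case, which is why an enqueue on a non-empty queue never needs the head pointer fetched.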



FIG. 6 illustrates a dequeue operation. The processor 10 checks 80 whether the queue descriptor 20 for the particular output queue to be used in the dequeue operation is presently in the cache 12. If it is not, the processor 10 removes 81 a queue descriptor from the cache 12 to make room for the requested queue descriptor 20. The processor 10 then fetches 82 the head pointer 28 and count 32 of the requested queue descriptor 20 from memory 14, stores them in the cache 12 and sets 84 the head pointer valid bit (Vbit). The processor 10 proceeds with the dequeue operation at block 90.


If (at block 80) the queue descriptor 20 for the particular output queue 18 requested is already in the cache 12, the processor checks 86 whether the head pointer valid bit 34 has been set. If it has not been set, the head pointer 28 is fetched 88 and the processor 10 proceeds with the dequeue operation at block 90. If the head pointer valid bit 34 has been set, the processor 10 proceeds directly to the dequeue operation at block 90.


In block 90, the head pointer 28 is read to identify the location in memory 14 of the first element 22 in the output queue 18. The information implicitly mapped by the element's 22 address is to be provided as output. That element 22 is also read to obtain the address of the next element 22 in the output queue 18. The address of the next element 22 is written into the head pointer 28, and the count 32 is decremented.
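The dequeue step of block 90 can be sketched the same way, assuming the head pointer 28 and count 32 are already cached; the layout is again an illustrative assumption.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct element {
    struct element *next;   /* pointer 24 to the next element 22 */
} element_t;

typedef struct {
    element_t *head;        /* head pointer 28 */
    element_t *tail;        /* tail pointer 30 */
    uint32_t   count;       /* count 32 */
    int        head_valid;  /* valid bit 34 */
    int        tail_valid;  /* valid bit 36 */
} qdesc_t;

static element_t *dequeue(qdesc_t *q)
{
    if (q->count == 0)
        return NULL;            /* empty queue: nothing to remove */
    element_t *first = q->head; /* read the head pointer 28 (block 90) */
    q->head = first->next;      /* address of the next element becomes
                                   the new head pointer */
    q->count--;                 /* decrement the count 32 */
    q->head_valid = 1;          /* head pointer modified in the cache */
    return first;               /* its address implies the info 26 */
}
```

Symmetrically to enqueue, the tail pointer is never touched, so a dequeue never needs the tail pointer fetched from memory.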


The head pointer 28 need not be fetched during an enqueue operation, thereby saving read bandwidth between the processor 10 and memory 14. Similarly, a tail pointer 30 need not be fetched from memory 14 during a dequeue operation. When a queue descriptor 20 is removed 42, 81 from the cache 12, the processor 10 checks the valid bits 34, 36. If there were no modifications to the tail pointer 30 (for example, when only dequeue operations were performed on the queue), the tail pointer valid bit 36 remains unset. This indicates that write bandwidth can be saved by writing back to memory 14 only the count 32 and head pointer 28. If there were no modifications to the head pointer 28 (for example, when only enqueue operations to a non-empty output queue 18 were performed), the head pointer valid bit 34 remains unset. This indicates that only the count 32 and tail pointer 30 need to be written back to the queue descriptor 20 in memory 14, thus saving write bandwidth.
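The selective write-back on removal 42, 81 can be sketched as follows: only the count and the pointers whose valid bits are set travel back to memory 14. Address-sized integers stand in for real pointers here, and the memory-image struct and the returned word count are assumptions for illustration.

```c
#include <stdint.h>

typedef struct {
    uint32_t head, tail, count;   /* descriptor image in memory 14 */
} qdesc_mem_t;

typedef struct {
    uint32_t head, tail, count;   /* cached copy of the descriptor 20 */
    int head_valid;               /* valid bit 34 */
    int tail_valid;               /* valid bit 36 */
} qdesc_cache_t;

static int write_back(const qdesc_cache_t *c, qdesc_mem_t *m)
{
    int words = 1;                /* the count 32 is always returned */
    m->count = c->count;
    if (c->head_valid) { m->head = c->head; words++; }
    if (c->tail_valid) { m->tail = c->tail; words++; }
    return words;                 /* words of write bandwidth consumed */
}
```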


In some implementations, when a particular queue descriptor 20 is used in the cache 12 for a second time, a “fetch other” operation is executed before the enqueue or dequeue operation. As shown by FIG. 7, one implementation of the “fetch other” operation causes the processor 10 to determine 94 whether the head pointer valid bit 34 has been set and to fetch 95 the head pointer 28 from memory 14 if it has not. If the head valid bit 34 has been set, the processor 10 checks 96 whether the tail valid bit 36 has been set and, if it has not, fetches 97 the tail pointer 30. At completion of the “fetch other” operation, both the head valid bit 34 and the tail valid bit 36 are set 98.
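The FIG. 7 flow can be sketched directly: on a second use, exactly one of the two pointers is still missing, so at most one memory read is issued. The memory-image struct and field names are illustrative assumptions.

```c
#include <stdint.h>

typedef struct {
    uint32_t head, tail;   /* descriptor pointers in memory 14 */
} qdesc_mem_t;

typedef struct {
    uint32_t head, tail;   /* cached head pointer 28, tail pointer 30 */
    int head_valid;        /* valid bit 34 */
    int tail_valid;        /* valid bit 36 */
} qdesc_cache_t;

static void fetch_other(qdesc_cache_t *q, const qdesc_mem_t *m)
{
    if (!q->head_valid) {
        q->head = m->head;    /* determine 94 head invalid: fetch 95 */
    } else if (!q->tail_valid) {
        q->tail = m->tail;    /* check 96 tail invalid: fetch 97 */
    }
    q->head_valid = 1;        /* both valid bits are set 98 at */
    q->tail_valid = 1;        /* completion of the operation   */
}
```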


Both pointers are needed only if the second enqueue or dequeue operation with respect to the queue descriptor 20 is not the same kind of operation as the first. However, the excess bandwidth consumed by this possibly superfluous fetch and return of the queue descriptor 20 parts 28, 30 can be available when the queue descriptor is used by operations more than once while stored in the cache 12.


Various features of the system can be implemented in hardware, software or a combination of hardware and software. For example, some aspects of the system can be implemented in computer programs executing on programmable computers. Each program can be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. Furthermore, each such computer program can be stored on a storage medium, such as read only memory (ROM) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage medium is read by the computer to perform the functions described above.


Other implementations are within the scope of the following claims.

Claims
  • 1. A computer-implemented method comprising: storing in memory a queue descriptor for a queue, the queue descriptor including a count identifying a number of elements in the queue and at least one pointer related to the queue; in response to a command to perform either an enqueue operation with respect to the queue or a dequeue operation with respect to the queue, fetching from the memory to a cache at least a portion of the queue descriptor, including the count; modifying at least a portion of the queue descriptor within the cache in response to the enqueue operation or the dequeue operation; and returning to the memory from the cache at least the portion of the queue descriptor modified based on the enqueue operation or the dequeue operation.
  • 2. The computer-implemented method of claim 1, wherein the at least one pointer related to the queue comprises at least one of a head pointer or a tail pointer.
  • 3. The computer-implemented method of claim 2, wherein the head pointer points to a first element in the queue and the tail pointer points to a last element in the queue.
  • 4. The computer-implemented method of claim 2 including: fetching the count and the head pointer and not the tail pointer in response to a command to perform a dequeue operation or fetching the count and the tail pointer and not the head pointer in response to a command to perform an enqueue operation.
  • 5. The computer-implemented method of claim 2, wherein the at least a portion of the queue descriptor comprises the count and at least one of a head pointer or a tail pointer.
  • 6. A computer-implemented method comprising: storing in memory of a computer a queue descriptor for a queue; determining whether a pointer of the queue descriptor that was fetched from the memory to a cache of the computer in response to an operation on the queue had been modified by the operation; returning a count from the cache to the memory identifying a number of elements in the queue; and returning the pointer to the memory from the cache if that pointer had been modified.
  • 7. An apparatus comprising: memory for storing queue descriptors which include a count identifying a number of elements in a queue and a pointer related to the queue; a cache for caching queue descriptors from the memory's queue descriptors; and a processor configured to: fetch from the memory to the cache the count and the pointer related to a particular queue in response to a command to perform an operation with respect to a particular queue descriptor; and return to the memory from the cache portions of the particular queue descriptor modified by the operation.
  • 8. The apparatus of claim 7 wherein the processor is configured to fetch the count and a head pointer and not a tail pointer in response to a first command to perform a dequeue operation; or fetch the count and the tail pointer and not the head pointer in response to a second command to perform an enqueue operation.
  • 9. An article comprising a computer-readable medium that stores computer-executable instructions for causing a computer system to: store in memory a queue descriptor for a queue, the queue descriptor including a count identifying a number of elements in the queue and at least one pointer related to the queue; in response to a command to perform either an enqueue operation with respect to the queue or a dequeue operation with respect to the queue, fetch from the memory to a cache at least a portion of the queue descriptor, including the count; modify at least a portion of the queue descriptor within the cache in response to the enqueue operation or the dequeue operation; and return to the memory from the cache at least the portion of the queue descriptor modified based on the enqueue operation or the dequeue operation.
  • 10. The article of claim 9 including instructions to cause the computer system to: fetch the count and a head pointer and not a tail pointer in response to a command to perform the dequeue operation; or fetch the count and the tail pointer and not the head pointer in response to a command to perform the enqueue operation.
  • 11. The article of claim 9, wherein the at least a portion of the queue descriptor comprises the count and at least one of a head pointer or a tail pointer.
Parent Case Info

This is a Continuation of U.S. application Ser. No. 10/039,289, filed Jan. 4, 2002, which issued as U.S. Pat. No. 7,895,239 on Feb. 22, 2011.

Related Publications (1)
Number Date Country
20110113197 A1 May 2011 US
Continuations (1)
Number Date Country
Parent 10039289 Jan 2002 US
Child 12941802 US