Free list and ring data structure management

Information

  • Patent Grant
  • Patent Number
    7,337,275
  • Date Filed
    Tuesday, August 13, 2002
  • Date Issued
    Tuesday, February 26, 2008
Abstract
A method of managing a free list and ring data structure, which may be used to store journaling information, by storing and modifying information describing a structure of the free list or ring data structure in a cache memory that may also be used to store information describing a structure of a queue of buffers.
Description
BACKGROUND

This application relates to free list and ring data structure management.


A network processor may buffer data packets dynamically by storing received data in linked memory buffers. After the data associated with a particular buffer have been transmitted, that buffer may be returned to a pool, called a “free list,” where available buffers are stored.


A network processor may also buffer data packets using statically allocated, e.g., predefined memory buffers. A ring data structure includes such predefined memory locations. A pointer may be used to track the insertion location of the ring data structure. Another pointer may be used to track the removal location of the ring data structure.


Managing a large number of pools and buffers efficiently may be an important factor in the operation and cost of network processors.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram that illustrates a system that includes a pipelined network processor.



FIG. 2 is a block diagram that illustrates an exemplary pipelined network processor.



FIG. 3 is a block diagram of an exemplary cache data structure.



FIG. 4 is a flow chart that illustrates the flow of enqueue requests to a queue according to an implementation.



FIG. 5 is a block diagram that shows an enqueue operation according to an implementation.



FIG. 6 is a block diagram that shows an enqueue operation subsequent to an enqueue operation to a queue according to an implementation.



FIG. 7 is a flow chart that illustrates the flow of dequeue requests to a queue according to an implementation.



FIG. 8 is a block diagram that shows a dequeue operation according to an implementation.



FIG. 9 is a block diagram that shows a dequeue operation subsequent to a dequeue operation to a queue according to an implementation.



FIG. 10 is a block diagram of a cache data structure that includes memory controller-resident pointers of a free list according to an implementation.



FIG. 11 is a block diagram of a cache data structure that includes memory controller-resident pointers of a free list prior to an enqueue operation according to an implementation.



FIG. 12 is a block diagram that shows an enqueue operation to a free list according to an implementation.



FIG. 13 is a block diagram of a cache data structure that includes memory controller-resident pointers of a free list prior to a dequeue operation according to an implementation.



FIG. 14 is a block diagram that shows a dequeue operation to a free list according to an implementation.



FIG. 15 is a block diagram of a cache data structure that includes memory controller-resident pointers of a memory ring data structure according to an implementation.



FIG. 16 is a block diagram that illustrates a put command operation according to an implementation.



FIG. 17 is a block diagram that illustrates a get command operation according to an implementation.





DETAILED DESCRIPTION

Referring to FIG. 1, a network system 10 for processing data packets includes a source 12 of data packets coupled to an input of a network device 14, such as an interface to other network devices. An output of the network device 14 is coupled to a destination 16 of data packets, such as an interface to other network devices. The network device 14 may include a network processor 18 having a memory for operating on memory data structures. The processor executes instructions and operates with the memory data structures as configured to store and forward the data packets to a specified destination. The data packets received by the network processor are network packets. Network device 14 may include or be part of, for example, a network switch or a network router. The source 12 of data packets may include an interface to other network devices connected over a communications path operating at high data packet transfer line speeds, such as an optical carrier 10-gigabit line (i.e., OC-192) or other line speeds. The destination 16 of data packets may include a similar network connection or interface.


Referring to FIG. 2, the network processor 18 has multiple programming engines that function, respectively, as a receive pipeline 21, a transmit scheduler 24, a queue manager 27 and a transmit pipeline 28. Each programming engine has a multiple-entry content addressable memory (CAM) to track N of the most recently used queue descriptors where N is the number of entries in the CAM. For example, the queue manager 27 includes the CAM 29. The network processor 18 includes a memory controller 34 that is coupled to a first memory 30 and second memory 32. A third memory 17 includes software instructions for causing the engines to operate as discussed in detail below. Although the illustrated implementation uses separate memories, a single memory may be used to perform the tasks of the first and second memory mentioned above. The memory controller 34 initiates queue commands in the order in which they are received and exchanges data with the queue manager 27. The first memory 30 has a memory space for storing data. The second memory 32 is coupled to the queue manager 27 and other components of the network processor 18.


As shown in FIG. 2, the first memory 30 and the second memory 32 reside externally to the network processor 18. Alternatively, the first memory 30 and/or the second memory 32 may be internal to the network processor 18. The processor 18 also includes hardware interfaces 6, 8 to a receive bus and a transmit bus that are coupled to receive and transmit buffers 20, 36.


The receive buffer 20 is configured to buffer data packets received from the source 12 of data packets. Each data packet may contain a real data portion representing the actual data being sent to the destination, a source data portion representing the network address of the source of the data, and a destination data portion representing the network address of the destination of the data. The receive pipeline 21 is coupled to the output of the receive buffer 20. The receive pipeline 21 also is coupled to a receive ring 22, which may have a first-in-first-out (FIFO) data structure. The receive ring 22 is coupled to the queue manager 27.


The receive pipeline 21 is configured to process the data packets from the receive buffer 20 and store the data packets in data buffers included in memory addresses 38 in the second memory 32. The receive pipeline 21 makes requests 23 to the queue manager 27 through the receive ring 22 to append a buffer to the end of a queue.


Once the data packets are processed by the receive pipeline 21, the receive pipeline may generate enqueue requests 23 directed to the queue manager 27. The receive pipeline 21 may include multi-threaded programming engines working in a pipelined manner. The engines receive packets, classify them, and store them on an output queue based on the classification. This receive processing determines an output queue for each packet. By pipelining, the programming engine may perform the first stage of execution of an instruction and, when the instruction passes to the next stage, a new instruction may be started. The processor does not have to lie idle while waiting for the first instruction to be completed. Therefore, pipelining may lead to improvements in system performance. An enqueue request represents a request to append a buffer descriptor that describes a newly received buffer to the last buffer descriptor in a queue of buffer descriptors 48 in the first memory 30. The receive pipeline 21 may buffer several packets before generating an enqueue request. Consequently, the total number of enqueue requests generated may be reduced.


The transmit scheduler 24 is coupled to the queue manager 27 through the receive ring 22 and is responsible for generating dequeue requests 25 based on specified criteria. Such criteria may include the time when the number of buffers in a particular queue of buffers reaches a predetermined level. The transmit scheduler 24 determines the order of packets to be transmitted. A dequeue request 25 represents a request to remove the first buffer from the queue 48. The transmit scheduler 24 also may include scheduling algorithms for generating dequeue requests 25 such as “round robin”, priority-based or other scheduling algorithms. The transmit scheduler 24 may be configured to use congestion avoidance techniques such as random early detection (RED) which involves calculating statistics for the packet traffic. The transmit scheduler maintains a bit for each queue signifying whether the queue is empty.


The queue manager 27, which in one implementation is provided by a single multi-threaded programming engine, processes enqueue requests from the receive pipeline 21 as well as dequeue requests from the transmit scheduler 24. The queue manager 27 allows for dynamic memory allocation by maintaining linked list data structures for each queue.


The queue manager 27 includes software components configured to manage a cache of data structures that describe the queues (“queue descriptors”). As shown in FIG. 3, a queue descriptor 46a includes a head pointer 50a which points to the first entry A of a queue, a tail pointer 50b which points to the last entry C of the queue, and a count field 50c which indicates the number of entries currently on the queue. For all queue descriptors, the word alignment of the head pointer's address should be a power of two, because it is more efficient to work in powers of two when accessing memory to find queue descriptors.
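
As a rough sketch (not part of the patent — the names, types, and field widths here are assumptions), the three fields of a queue descriptor might be modeled in C as:

    #include <stdint.h>

    struct buffer_descriptor;            /* defined in a later sketch */

    /* Illustrative model of a queue descriptor such as 46a. */
    struct queue_descriptor {
        struct buffer_descriptor *head;  /* first entry (A) of the queue */
        struct buffer_descriptor *tail;  /* last entry (C) of the queue  */
        uint32_t count;                  /* number of entries queued     */
    };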


Referring to FIG. 2, the cache has a tag portion 44a and a data store portion 44b. The tag portion 44a of the cache resides in the queue manager 27, and the data store portion 44b of the cache resides in the memory controller 34. The tag portion 44a is managed by the CAM 29, which may include hardware components configured to implement a cache entry replacement policy such as a least recently used (LRU) policy. The tag portion of each entry in the cache references one of the last N queue descriptors used to perform an enqueue or dequeue operation. The queue descriptor's location in memory is stored as a CAM entry. The corresponding queue descriptor is stored in the data store portion 44b of the memory controller 34 at the address entered in the CAM. The actual data (e.g., included in memory addresses 38a–38c in FIG. 3) placed on the queue is stored in the second memory 32 and is referenced by the queue of buffer descriptors (e.g., 48a) located in the first memory 30.
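
The tag lookup can be pictured with a small software model (a sketch only — the actual tag portion is a hardware CAM, and the entry count and names here are assumptions):

    #include <stdint.h>

    #define N_CAM_ENTRIES 16  /* assumed: one tag per cached queue descriptor */

    /* Each tag holds the memory address of one of the last N queue
     * descriptors used for an enqueue or dequeue operation. A hit yields
     * the index of the corresponding data store entry; on a miss, the
     * LRU entry would be evicted and the descriptor fetched from memory. */
    static int cam_lookup(const uint32_t tags[], uint32_t qdesc_addr)
    {
        for (int i = 0; i < N_CAM_ENTRIES; i++)
            if (tags[i] == qdesc_addr)
                return i;   /* hit: data store entry i holds the descriptor */
        return -1;          /* miss: perform the replacement task */
    }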


The queue manager 27 may alternately service enqueue and dequeue requests. For single-buffer packets, an enqueue request references a tail pointer of an entry in the data store portion 44b. A dequeue request references a head pointer of an entry in the data store portion 44b. Because the cache includes valid updated queue descriptors, the need to lock access to a queue descriptor may be eliminated when near simultaneous enqueue and dequeue operations to the same queue are required. Therefore, the atomic accesses and latency that accompany locking may be avoided.


The data store portion 44b maintains a list of a certain number of the most recently used (MRU) queue descriptors 46. Each queue descriptor includes pointers to a corresponding MRU queue of buffer descriptors 48. In one implementation, the number of MRU queue descriptors 46 in the data store portion 44b is sixteen. Each MRU queue descriptor 46 is referenced by a set of pointers 45 residing in the tag portion 44a. In addition, each MRU queue descriptor 46 may be associated with a unique identifier so that it may be identified easily.


Referring to FIG. 3, the operation of the cache is illustrated. The first entry in the tag portion 44a is associated with a pointer 45a that points to a MRU queue descriptor 46a residing in the data store portion 44b. The queue descriptor 46a is associated with a MRU queue of buffer descriptors 48a which are discussed in detail below. The queue descriptor 46a includes a head pointer 50a pointing to the first buffer descriptor A and a tail pointer 50b pointing to the last buffer descriptor C. An optional count field 50c maintains the number of buffer descriptors in the queue of buffer descriptors 48a. In this case the count field 50c is set to the value “3” representing the buffer descriptors A, B and C. As discussed in further detail below, the head pointer 50a, the tail pointer 50b and the count field 50c may be modified in response to enqueue requests and dequeue requests.


A buffer descriptor is a data structure that describes a buffer. A buffer descriptor may include an address field, a cell count field and an end of packet (EOP) bit. The address field includes the memory address of a data buffer. Because each data buffer may be further divided into cells, the cell count field includes information about a buffer's cell count. The EOP bit is set to signify that a buffer is the last buffer in a packet.


Referring back to FIG. 2, the present technique implements an implicit mapping 53 between the address of the buffer descriptors in the first memory 30, which may include static random access memory (SRAM), and the addresses of the data buffers in the second memory 32, which may include dynamic random access memory (DRAM). In this context, a queue is an ordered list of buffer descriptors describing data buffers that may be stored at discontinuous addresses.


As shown, for example, in FIG. 3, each buffer descriptor A, B in the queue 48a, except the last buffer descriptor in the queue, includes a buffer descriptor pointer 55a, 55b to the next buffer descriptor in the queue. The buffer descriptor pointer 55c of the last buffer descriptor C in the queue is NULL.
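
Putting the buffer descriptor fields together with the next-descriptor pointer, a hypothetical C layout (field names and widths are assumptions, not taken from the patent) might look like:

    #include <stdint.h>

    /* Illustrative buffer descriptor (A, B, C in FIG. 3). */
    struct buffer_descriptor {
        uint32_t buffer_addr;            /* address of the data buffer in DRAM */
        uint16_t cell_count;             /* number of cells in the buffer      */
        uint8_t  eop;                    /* set if last buffer in a packet     */
        struct buffer_descriptor *next;  /* next descriptor; NULL for the last */
    };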


Referring again to FIG. 2, the uncached queue descriptors 50 are stored in the first memory 30 and are not currently referenced by the data store portion 44b. Each uncached queue descriptor 50 also may be associated with a unique identifier. In addition, each uncached queue descriptor 50 includes pointers 51 to a corresponding uncached queue of buffer descriptors 52. In turn, each uncached queue of buffer descriptors 52 includes pointers 57 to the corresponding data buffers included in memory addresses 38 residing in the second memory 32.


Each enqueue request includes an address 38 of the data buffer associated with the corresponding data packet. In addition, each enqueue or dequeue request includes an identifier specifying either an uncached queue descriptor 50 or a MRU queue descriptor 46 associated with the data buffer included in memory address 38.


Referring to FIGS. 4 and 5, in response to receiving an enqueue request 23, the queue manager 27 generates 100 an enqueue command 13 directed to the memory controller 34. In the illustrated example, the enqueue request 23 is associated with a subsequent data buffer included in memory address 38d and received after the data buffer included in memory address 38c. The enqueue command 13 may include information specifying a MRU queue descriptor 46 residing in the data store portion 44b. It is assumed that the enqueue request 23 includes information specifying the queue descriptor 46a and an address 38d associated with a data buffer. The tail pointer 50b currently pointing to buffer descriptor C in the queue 48a is returned to the queue manager 27. The enqueue request 23 is evaluated to determine whether the specified queue descriptor is currently in the data store portion 44b. If it is not, then a replacement task is performed 110. The replacement task is discussed further below.


The buffer descriptor pointer 55c associated with buffer descriptor C is changed from a NULL value and is set 102 to point to the subsequent buffer descriptor D. That is accomplished by setting the buffer descriptor pointer 55c to the address of the buffer descriptor D. The buffer descriptor D points to the data buffer in memory address 38d that stores the received data packet, as indicated by line 53d.


Once the buffer descriptor pointer 55c has been set, the tail pointer 50b is set 104 to point to buffer descriptor D as indicated by dashed line 61. That is accomplished by setting the tail pointer 50b to the address of the buffer descriptor D. Since buffer descriptor D is now the last buffer descriptor in the queue 48a, the value of the buffer descriptor pointer 55d is NULL. Moreover, the value in the count field 50c is updated to “4” to reflect the number of buffer descriptors in the queue 48a. As a result, the buffer descriptor D is added to the queue 48a by using the queue descriptor 46a residing in the data store portion 44b.
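
The enqueue steps above can be summarized in a short sketch (using the illustrative types from the earlier sketches; hardware details and error handling are omitted):

    /* Append descriptor d (e.g., D) to the queue described by qd (e.g., 46a). */
    void enqueue(struct queue_descriptor *qd, struct buffer_descriptor *d)
    {
        d->next = NULL;           /* D becomes the last descriptor          */
        if (qd->count == 0)
            qd->head = d;         /* empty queue: D is also the first entry */
        else
            qd->tail->next = d;   /* link old tail C to D (pointer 55c)     */
        qd->tail = d;             /* tail pointer 50b now points to D       */
        qd->count++;              /* count field 50c goes from 3 to 4       */
    }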


If the enqueue command 13 includes a queue identifier specifying a queue descriptor which is not among the MRU queue descriptors 46, the queue manager 27 replaces a particular MRU queue descriptor 46 with the specified queue descriptor. As a result, the specified queue descriptor and the corresponding uncached queue of buffer descriptors are referenced by the data store portion 44b. In addition, for an enqueue command, the newly referenced queue of buffer descriptors 52 associated with the specified queue descriptor is updated to point to the memory address of the particular data buffer included in memory address 38 storing the received data packet. The MRU queue descriptor 46 may be updated quickly and efficiently because the queue descriptor is already in the data store portion 44b.


Referring to FIG. 6, the processor 18 may receive 106 a subsequent enqueue request associated with the same queue descriptor 46a and queue 48a. For example, it is assumed that the queue manager 27 receives a subsequent enqueue request associated with a newly arrived data buffer 38e. It also is assumed that the data buffer included in memory address 38e is associated with the queue descriptor 46a. The tail pointer 50b may be set to point to buffer E as indicated by the dashed line 62. The tail pointer 50b is updated without having to retrieve it from memory because it is already in the data store portion 44b. As a result, the latency of back-to-back enqueue operations to the same queue of buffers may be reduced. Hence, the queue manager may manage requests to a large number of queues as well as successive requests to only a few queues or to a single queue. Additionally, the queue manager 27 issues commands indicating to the memory controller 34 which of the multiple data store portion entries to use to perform the command.


Referring to FIGS. 7 and 8, in response to receiving a dequeue request 25, the queue manager 27 generates 200 a dequeue command 15 directed to the memory controller 34. In this example, the dequeue request is associated with the queue descriptor 46a and represents a request to retrieve a data buffer from the second memory 32. Once the data buffer is retrieved, it may be transmitted from the second memory 32 to the transmit buffer 36. The dequeue request 25 includes information specifying the queue descriptor 46a. The head pointer 50a of the queue descriptor 46a points, for example, to the first buffer descriptor A which, in turn, points to the data buffer in memory address 38a. As a result, the data buffer in memory address 38a is returned to the queue manager 27.


The head pointer 50a is set 202 to point to the next buffer descriptor B in the queue 48a as indicated by the dashed line 64. That may be accomplished by setting the head pointer 50a to the address of buffer descriptor B. The value in the count field 50c is updated to “4”, reflecting the remaining number of buffer descriptors (B through E). As a result, the data buffer included in memory address 38a is retrieved from the queue 48a by using the queue descriptor 46a residing in the data store portion 44b.
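
A corresponding dequeue sketch (same illustrative types and assumptions as above):

    /* Remove and return the first descriptor (e.g., A) from the queue. */
    struct buffer_descriptor *dequeue(struct queue_descriptor *qd)
    {
        struct buffer_descriptor *d = qd->head;
        if (d == NULL)
            return NULL;       /* queue is empty                         */
        qd->head = d->next;    /* head pointer 50a now points to B       */
        qd->count--;           /* count field 50c goes from 5 to 4       */
        return d;              /* caller transmits the referenced buffer */
    }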


The queue manager 27 may receive 204 subsequent dequeue requests 25 associated with the same queue descriptor. It is assumed, for example, that the queue manager 27 receives a further dequeue request 25 associated with the queue descriptor 46a. Referring to FIG. 9, as indicated by the line 64, the head pointer 50a currently points to buffer B which is now the first buffer because the reference to buffer A previously was removed. The head pointer 50a may be set 206 to point to buffer C, as indicated by a dashed line 65, without first having to retrieve the head pointer 50a from memory because it is already in the data store portion 44b. As a result, the latency of back-to-back dequeue operations to the same queue of buffers may be reduced.


In some situations, however, the queue descriptor 46a currently occupying an entry of the data store portion 44b is not associated with the data buffer in memory address 38b. In that case, the processor 18 performs 208 a replacement task similar to the one discussed above. Once the replacement task has been completed, operations associated with the dequeue request are performed as discussed above.


The cache of queue descriptors may be implemented in a distributed manner such that the tag portion 44a resides in the memory controller 34 and the data store portion 44b resides in the first memory 30. Data buffers included in memory addresses 38 that are received from the receive buffer 20 may be processed quickly. For example, the second of a pair of dequeue commands may be started once the head pointer for that queue descriptor is updated as a result of the first dequeue memory read of the head pointer. Similarly, the second of a pair of enqueue commands may be started once the tail pointer for that queue descriptor is updated as a result of the first enqueue memory read of the tail pointer. In addition, using a queue of buffers, such as a linked list of buffers, allows for a flexible approach to processing a large number of queues. Data buffers may be quickly enqueued to the queue of buffers and dequeued from the queue of buffers.


Entries of the data store portion 44b of the cache which are not used to store information describing the structure of a queue of data buffers may be used to store: (1) information describing the structure of a free list as non-cached or permanently-resident entries; (2) information describing the structure of a memory ring as non-cached or permanently-resident entries; (3) information describing the structure of a journal as permanently-resident entries; or (4) any combination of these uses. Permanently-resident entries are entries that will not be removed to make space for new entries.


A free list functions as a pool of currently unused buffers. Free lists may be used for buffer storage by systems that dynamically allocate memory. Such systems allocate available free storage from a free list for newly received data. An entry is taken from the pool as needed when a packet or cell is received. An entry is returned to the pool when the packet or cell is transmitted or discarded. When a free list is implemented using a linked list data structure, a new buffer may be taken from the front of the queue of currently unused buffers using the dequeue command. Similarly, a buffer whose usage is terminated may be added to the end of the queue of currently unused buffers using the enqueue command.
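
In other words, a linked-list free list reuses the same queue operations; a hypothetical pair of helpers (the names are illustrative, not from the patent) might be:

    /* Allocation takes a currently unused buffer from the front of the pool. */
    struct buffer_descriptor *alloc_from_free_list(struct queue_descriptor *fl)
    {
        return dequeue(fl);      /* same dequeue command as for data queues */
    }

    /* Releasing a buffer returns it to the end of the pool. */
    void return_to_free_list(struct queue_descriptor *fl,
                             struct buffer_descriptor *d)
    {
        enqueue(fl, d);          /* same enqueue command as for data queues */
    }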


Alternatively, when a free list is implemented using a stack data structure, a new buffer may be removed for newly received data from the stack using a pop command. A buffer whose usage may be terminated may be added to the stack using a push command. Because a stack is a last-in, first-out (LIFO) data structure, buffers are removed in the reverse order from that in which they are added to the stack. The buffer most recently added to the stack is the first buffer removed.
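
A stack-based free list can be sketched just as briefly (again with illustrative names; the patent does not give an implementation):

    /* LIFO free list: the buffer most recently pushed is the first popped. */
    struct stack_free_list {
        struct buffer_descriptor *top;
    };

    void push(struct stack_free_list *s, struct buffer_descriptor *d)
    {
        d->next = s->top;        /* new buffer points at the previous top */
        s->top = d;
    }

    struct buffer_descriptor *pop(struct stack_free_list *s)
    {
        struct buffer_descriptor *d = s->top;
        if (d != NULL)
            s->top = d->next;    /* removed in reverse order of insertion */
        return d;
    }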


As shown in FIG. 10, a number of entries 146 of the data store portion 44b of the cache that are not used to store the MRU queue descriptors 46 may be used to store queue descriptors 146a describing the structure of one or more free lists. In one implementation, the number of queue descriptors 46 describing data buffers in the data store portion is sixteen, and the total number of entries in the data store portion 44b of the cache is sixty-four.


The entries 146 of the data store portion used to store queue descriptors describing a structure of a free list may be non-cached or permanently resident entries. Therefore, for each queue descriptor describing the structure of a free list that is to be stored, the fetch replacement task need be performed only once, at system initialization, to load it into a subset 146 of the entries of the data store portion of the cache.


When the data contained in a buffer has been transmitted, the present usage of the buffer is terminated and the buffer is returned to the free list to replenish the pool of currently unused buffers. A processing engine thread, such as a thread providing a queue manager 27, may generate an enqueue command directed to the memory controller that references a free list entry 146.


Referring to FIG. 11, the operation of the cache is illustrated. In this example, a queue descriptor 146a describing the structure of a free list 148a includes a head pointer 150a pointing to the first buffer V in the free list, a tail pointer 150b pointing to the last buffer Y in the free list, and a count field 150c that maintains the number of buffers in the free list 148a. In this case, the count field 150c is set to the value “4” representing buffers V, W, X and Y. As discussed in further detail below, the head pointer 150a, the tail pointer 150b and the count field 150c may be modified in response to enqueue and dequeue commands that are associated with a free list.


Each buffer in the free list 148a, such as a first buffer V, contains a buffer pointer 155v that points to a next ordered buffer W. The buffer pointer 155y associated with the last buffer Y has a value set to NULL to indicate that it is the last buffer in the queue 148a.


In the example illustrated in FIG. 12, the tail pointer 150b currently pointing to buffer Y is returned to the queue manager 27. The buffer pointer 155y associated with buffer Y currently contains a NULL value indicating that it is the last buffer in the free list 148a. The buffer pointer 155y is set to point to the subsequent buffer Z, which is a buffer whose usage was just terminated. That may be accomplished by setting the buffer pointer 155y to the address of the buffer Z.


Once the buffer pointer 155y has been set, the tail pointer 150b is set to point to buffer Z as indicated by dashed line 161. This may be accomplished by setting the tail pointer 150b to the address of the buffer Z. Moreover, the value in the count field 150c is updated to “5” to reflect the number of buffers in the free list 148a. As a result, the buffer Z is added to the free list 148a by using the queue descriptor 146a residing in the data store portion 44b.


When a store and forward processor receives a new data packet, the system allocates a buffer from the free list.


Referring to FIG. 13, the operation of the cache is illustrated. In this example, a processing engine thread, such as a thread providing the queue manager 27, may generate a dequeue command directed to the memory controller 34 that references a free list entry. The dequeue request is associated with the information describing a structure of the free list 146a and represents a request to retrieve an unused buffer from the memory. Once the unused buffer is retrieved, it may be transmitted from the memory to the receive buffer. The dequeue request 25 includes information specifying the structure of the free list 146a. The head pointer 150a of the information describing the structure of the free list 146a points to the first buffer V in the free list. As a result, unused buffer V is returned to the queue manager.


Referring to FIG. 14, the head pointer 150a is set to point to the next buffer W in the free list 148a as indicated by the dashed line 164. That may be accomplished by setting the head pointer 150a to the address of buffer W. The value in the count field 150c is updated to “4”, reflecting the remaining number of buffers (W through Z). As a result, unused buffer V is retrieved from the free list 148a by using information describing the structure of a free list 146a residing in the data store portion 44b and may be used by the processor to store newly received packets or cells.


As discussed above, enqueue operations that reference information describing the structure of a free list in the cache are used to return buffers to that free list. Dequeue operations that reference information describing the structure of a free list in the cache are used to remove buffers from that free list. Using the present technique, the processor may manage a large number of free lists in an efficient and low cost manner by using hardware (e.g., a memory controller, CAM) already present to perform other tasks.


Entries of the data store portion 44b of the cache which are not used to store information describing the structure of a queue of data buffers also may be used to manage a ring data structure. Because a ring data structure includes a block of contiguous memory addresses that is of a predefined size and location, it may be used for static memory allocation.


Referring to FIG. 15, a technique defines and implements commands that use entries 246 of the data store portion 44b of the cache to store information describing a structure of a ring 300. The information 246a describing a structure of a ring includes a head pointer 250a which tracks the memory location 0003 where data is to be inserted, a tail pointer 250b which tracks the memory location 0001 where data 301 is to be removed, and an optional count field 250c which tracks the number of entries in the ring 300. The entries 246 of the data store portion used to store information describing the structure of a ring may be non-cached or permanently resident entries. Because the ring data structure has a fixed size, whenever either pointer 250a, 250b points to the address at the end of the ring, it wraps back to the address at the start of the ring.
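
As a sketch of this bookkeeping (the size constant and the names are assumptions; FIG. 15 shows only a small example ring):

    #include <stdint.h>

    #define RING_SIZE 8              /* a ring has a predefined, fixed size */

    /* Illustrative model of the information 246a describing a ring. */
    struct ring_descriptor {
        uint32_t head;               /* insertion location (0003 in FIG. 15) */
        uint32_t tail;               /* removal location (0001 in FIG. 15)   */
        uint32_t count;              /* number of entries in the ring        */
        uint32_t entry[RING_SIZE];   /* the contiguous block of memory       */
    };

    /* Advance an index by one, wrapping from the end back to the start. */
    static uint32_t ring_advance(uint32_t index)
    {
        return (index + 1) % RING_SIZE;
    }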


A context of a programming engine may issue a put command to cause data to be written to a ring data structure. The put command specifies a length field and a head pointer, where the length field is specified as a number of words.


Referring to FIG. 16, a data word 303 is written to a ring at the address 0003 indicated by the head pointer 250a. Once the data word has been written to the address 0003, the head pointer 250a is set to point to the next memory location 0004 as indicated by dashed line 175. That is accomplished by setting the head pointer 250a to the memory address 0004. Moreover, the value of the count field 250c is updated to “3” to reflect the number of data words in the ring 300. Additionally, the count field and a status bit indicating whether there was sufficient memory available to write the specified length of words to the ring are returned to the programming engine context that issued the put command. As a result, data is written to the ring 300 by using information describing the structure of the ring 246a residing in the data store portion 44b.
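
A put sketch based on the ring model above (illustrative only; the real command also involves the memory controller):

    /* Write len words to the ring; returns the new count, or -1 when the
     * status bit would report insufficient memory for the request. */
    int ring_put(struct ring_descriptor *r, const uint32_t *words, uint32_t len)
    {
        if (RING_SIZE - r->count < len)
            return -1;                       /* ring full: nothing written   */
        for (uint32_t i = 0; i < len; i++) {
            r->entry[r->head] = words[i];    /* write at the head pointer    */
            r->head = ring_advance(r->head); /* advance, wrapping at the end */
        }
        r->count += len;                     /* count returned to the issuer */
        return (int)r->count;
    }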


A context of a programming engine may issue a get command to cause data to be read from a ring data structure. The get command specifies a length field and a tail pointer, where the length field is specified as a number of words.


Referring to FIG. 17, a data word 301 is read from a ring at the memory address 0001 indicated by the tail pointer 250b. Once the data word has been read, the tail pointer 250b is set to point to memory location 0002 as indicated by dashed line 176. That is accomplished by setting the tail pointer 250b to the memory address 0002. Moreover, the value of the count field 250c is updated to “2” to reflect the number of data words in the ring 300. As a result, data is removed from the ring 300 by using information describing the structure of the ring 246a residing in the data store portion 44b. If the count field 250c is less than the length field specified in the get command, an identifier, such as a zero data word, indicating the ring 300 is empty is returned to the programming engine context that issued the get command and no data is removed from the ring.
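
And a matching get sketch (same assumptions):

    /* Read len words from the ring. If fewer than len words are present,
     * return 0 (the "ring empty" identifier) and remove nothing. */
    uint32_t ring_get(struct ring_descriptor *r, uint32_t *out, uint32_t len)
    {
        if (r->count < len)
            return 0;                        /* empty: no data removed       */
        for (uint32_t i = 0; i < len; i++) {
            out[i] = r->entry[r->tail];      /* read at the tail pointer     */
            r->tail = ring_advance(r->tail); /* advance, wrapping at the end */
        }
        r->count -= len;                     /* e.g., updated from 3 to 2    */
        return len;
    }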


Because a network processor may include multiple programming engines each of which may execute multiple threads or contexts, observing how code is executing on any individual programming engine thread and tracking the progress of different programming engine threads with respect to one another may be useful to help debug applications running on the network processor.


The present technique defines and implements a set of journaling commands that provide a way to observe how code is executing during system operation. The technique uses entries of the data store portion 44b of the cache that are not used to store information describing the structure of a queue of data buffers. These entries are used to manage a ring data structure implemented as a journal. Each of these entries includes information describing a structure of a ring. As discussed earlier in connection with FIGS. 15–17, the information 246a describing the structure of the ring includes a head pointer 250a which tracks the location where data is to be inserted, a tail pointer 250b which tracks the location where data is to be removed, and an optional count field 250c which tracks the number of journal entries made. Because data is inserted into the journal but no data is removed from the journal during program execution, the head pointer 250a is more meaningful than the tail pointer 250b for this purpose. The entries used to support the journal commands may be permanently resident in the data store portion.


Although an executing program may generate messages that provide useful information about the state of an executing context when predetermined locations of the program are reached, the number of instructions used to support a journal should be minimal. Otherwise, the system resources used to support the journal may interfere with the system's real-time programming needs. Hence, the amount of information in the journal should be balanced against the number of instructions and cycles necessary to provide this information.


A context of a programming engine may issue a journal command. The journal command is defined to move a number of words specified by the length field from a memory register to the journal, where each word may include thirty-two bits of journaling information. The journal command may be used to store a number of words from a memory register to the journal when predetermined checkpoints in a program are reached.


The journal_tag command is defined to move a number of words specified by the length field from a memory register to the journal. Each word includes thirty-two bits of journaling information, comprising four bits of programming engine identification, three bits of thread identification and twenty-five bits of journaling information. Hence, the journal_tag command may include the data information provided by the journal command and also may include information about which programming engine and which context of that programming engine issued the journal_tag command.
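
The 32-bit word layout described for journal_tag might be packed as follows (a sketch; the bit positions are assumptions, since the text specifies only the field widths):

    #include <stdint.h>

    /* Pack 4 bits of engine ID, 3 bits of thread ID, and 25 bits of
     * journaling data into one 32-bit journal_tag word. */
    static uint32_t make_journal_tag_word(uint32_t engine_id, /* 4 bits  */
                                          uint32_t thread_id, /* 3 bits  */
                                          uint32_t data)      /* 25 bits */
    {
        return ((engine_id & 0xFu) << 28) |
               ((thread_id & 0x7u) << 25) |
               (data & 0x1FFFFFFu);
    }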


The fast_journal command is defined to move the command address field from a memory register to the journal. Because all commands have an address field, the fast_journal command provides information about which command or checkpoint was reached in the program that is being debugged.


The fast_journal_tag command is defined to move the command address field from a memory register to the journal, where each word may include four bits of programming engine identification, three bits of context identification, and twenty-five bits of command address to indicate what command was issued. Therefore, the fast_journal_tag command may include the data information provided by the fast_journal command and also may include information about which programming engine and which context of that programming engine issued the command.


The present technique can provide a method of implementing elaborate tracking systems in an efficient and low cost manner by using hardware already present for performing other tasks. One implementation includes sixteen programming engines with eight contexts each. The implementation also includes sixty-four data store portion entries per SRAM channel, sixteen of which are used to store information describing the structure of a queue of data buffers. Because as many as forty-eight data store portion entries may be available per SRAM channel to store information describing the structure of a journal, the present technique may support multiple journals. Other implementations may differ in some respects.


After writing to all memory locations of a journal that implements a ring data structure, the head pointer wraps around to the start address of the journal to continue writing data. If the ring data structure is completely written, subsequent journal write operations will overwrite the data previously written. Only the most recent data will be present in the ring. The put command, discussed earlier, returns a ring full notification to the programming engine context that issued the put command, using a status bit to indicate there is insufficient memory available to write the specified length of words to the ring. In contrast, all journal commands are completed because there is no need to wait if the insert pointer exceeds the remove pointer.


Various features of the system may be implemented in hardware, software, or a combination of hardware and software. For example, some aspects of the system may be implemented in storage media having instructions stored thereon that are executed by a machine, or in computer programs executing on programmable computers or processors. Each program may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. Furthermore, each such computer program may be stored on a storage medium, such as read-only memory (ROM) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage medium is read by the computer to perform the tasks described above.


Other implementations are within the scope of the following claims.

Claims
  • 1. A method comprising: checking a content addressable memory for a tag corresponding to a queue of data buffers associated with a dequeue request;accessing a queue descriptor, for the queue of data buffers, in a cache memory based on a result of the checking;removing a data buffer from the queue of data buffers using the queue descriptor from the cache memory;processing information in the removed data buffer; andappending the data buffer to a queue of currently unused buffers in response to an enqueue request.
  • 2. The method of claim 1 further comprising storing currently unused buffers using a linked list data structure.
  • 3. The method of claim 1 further comprising storing currently unused buffers using a stack data structure.
  • 4. The method of claim 1 wherein the data buffer comprises a network packet and wherein the network packet comprises a packet received in a network processor.
  • 5. The method of claim 1 further comprising modifying information describing a structure of the queue of currently unused buffers.
  • 6. The method of claim 5 wherein said removing is performed in response to receiving a data packet in a receive pipeline and appending and modifying are performed in response to receiving the enqueue request.
  • 7. A method comprising: removing a currently unused buffer from a queue of currently unused buffers in response to a dequeue request;processing a newly received data packet;storing the newly received data packet in the removed buffer;checking a content addressable memory for a tag corresponding to a queue of data buffers associated with an enqueue request;accessing a queue descriptor, for the queue of data buffers, in a cache memory based on a result of the checking;appending the removed buffer to the queue of data buffers using the queue descriptor from the cache memory.
  • 8. The method of claim 7 further comprising storing currently unused buffers using a linked list data structure.
  • 9. The method of claim 7 further comprising storing currently unused buffers using a stack data structure.
  • 10. The method of claim 7 wherein the data buffer comprises a network packet and wherein the network packet comprises a packet received in a network processor.
  • 11. The method of claim 7 further comprising modifying information describing a structure of the queue of currently unused buffers.
  • 12. The method of claim 11 wherein said removing is performed in response to receiving a data packet in the receive pipeline and storing and modifying are performed in response to receiving the dequeue request.
  • 13. A method comprising: receiving a request to write data to a memory ring data structure; andissuing a command, in response to the request, the command specifying a pointer to a memory location in which the data is to be inserted, said pointer describing a structure of the memory ring;writing data to a memory ring address identified by the information describing a structure of the memory ring;incrementing a pointer to a memory location in which data is to be inserted, said pointer describing a structure of the memory ring;incrementing the number of entries in the ring, said number of entries describing a structure of the memory ring; andstoring the modified pointer and number of entries which describe a structure of the memory ring in a cache memory having entries to store information describing a structure of a queue of data buffers or a structure of a queue of currently unused buffers.
  • 14. The method of claim 13 wherein the command specifies a length field and wherein the number of entries in the ring is incremented by the specified length field.
  • 15. The method of claim 13 further comprising: returning to an issuing programming engine thread, in response to the issued command, the number of entries in the ring, said number of entries describing a structure of the memory ring data structure; andreturning a status bit that indicates whether sufficient memory is available to cause data to be written successfully to the memory ring address identified by a pointer to a memory location where data is to be inserted, said pointer describing a structure of the memory ring.
  • 16. The method of claim 13, the command further specifying a memory address from which to obtain data that is to be written to the memory ring.
  • 17. The method of claim 16 wherein the data contains bits representing an output message from an executing program.
  • 18. The method of claim 17 wherein the bits also include a programming engine identification and a context identification.
  • 19. The method of claim 16 wherein the data contains bits representing a command address, said bits also include a programming engine identification and a context identification.
  • 20. The method of claim 16 further comprising: writing data to a memory ring address identified by the information describing a structure of the memory ring;incrementing a pointer to the memory location where the data is to be inserted, said pointer describing a structure of the memory ring;incrementing, by the specified length field, the number of entries in the ring, said number of entries describing a structure of the memory ring; andstoring the modified pointer and number of entries describing a structure of the memory ring in a cache memory having entries to store information describing a structure of a queue of data buffers.
  • 21. The method of claim 20 wherein the cache memory can be used to store information about multiple memory ring data structures.
  • 22. A method comprising: receiving a request to read data from a memory ring data structure; andissuing a command, in response to the request, specifying a pointer to a memory location from which the data is to be removed, said pointer describing a structure of the memory ring;reading data from a memory ring address identified by the information describing a structure of the memory ring;incrementing a pointer to a memory location from which data is to be removed, said pointer describing a structure of the memory ring;decrementing the number of entries in the ring, said number of entries describing a structure of the memory ring; andstoring the modified pointer and number of entries which describe a structure of the memory ring in a cache memory having entries to store information describing a structure of a queue of data buffers.
  • 23. The method of claim 22 wherein the command specifies a length field and wherein the number of entries in the ring is decremented by the specified length field.
  • 24. The method of claim 22 further comprising: returning an identifier to an issuing programming engine context, in response to the issued command, when the number of entries in the ring, said number of entries describing a structure of the memory ring, is less than the specified length field.
  • 25. An apparatus comprising: a processor providing a queue manager and a content addressable memory to store tags associated with buffer queues;a first memory coupled to the processor to store a queue of data buffers and at least one of a queue of currently unused buffers or a ring data structure;a cache memory coupled to the processor to store information describing a structure of the queue of data buffers and information describing at least one of a structure of the queue of currently unused buffers or a structure of the memory ring; anda second memory to store instructions that, when applied to the processor, cause the processor to:check the content addressable memory for a tag corresponding to the queue of data buffers;access the information describing the structure of the queue of data buffers in the cache memory based on a result of the check;remove a data buffer from the queue of data buffers using the information from the cache memory describing the structure of the queue of data buffers;process information in the removed data buffer; andappend the data buffer to a queue of currently unused buffers in response to an enqueue request.
  • 26. The apparatus of claim 25 wherein the second memory further includes instructions to cause the processor to store currently unused buffers using a linked list data structure.
  • 27. The apparatus of claim 25 wherein the second memory further includes instructions to cause the processor to store currently unused buffers using a stack data structure.
  • 28. The apparatus of claim 25 wherein the second memory further includes instructions to cause the processor to modify information describing a structure of the queue of currently unused buffers.
  • 29. The apparatus of claim 25 wherein the data buffer comprises a network packet and wherein the network packet comprises a packet received in a network processor.
  • 30. An apparatus comprising: a processor providing a queue manager and a content addressable memory to store tags associated with buffer queues;a first memory coupled to the processor to store a queue of data buffers and at least one of a queue of currently unused buffers or a ring data structure;a cache memory coupled to the processor to store information describing a structure of a queue of data buffers and information describing at least one of a structure of the queue of currently unused buffers or a structure of the memory ring; anda second memory to store instructions that, when applied to the processor, cause the processor to:remove a currently unused buffer from a queue of currently unused buffers in response to a dequeue request;process a newly received data packet;store the newly received data packet in the removed buffer;check the content addressable memory for a tag corresponding to the queue of data buffers;access the information describing the structure of the queue of data buffers in the cache memory based on a result of the check; andappend the removed buffer to the queue of data buffers using the queue descriptor from the cache memory.
  • 31. The apparatus of claim 30 wherein the second memory further includes instructions to cause the processor to store currently unused buffers using a linked list data structure.
  • 32. The apparatus of claim 30 wherein the second memory further includes instructions to cause the processor to store currently unused buffers using a stack data structure.
  • 33. The apparatus of claim 30 wherein the data buffer comprises a network packet and wherein the network packet comprises a packet received in a network processor.
  • 34. The apparatus of claim 30 wherein the second memory further includes instructions to cause the processor to modify information describing a structure of the queue of currently unused buffers.
  • 35. An apparatus comprising: a processor providing a queue manager;a first memory coupled to the processor to store a queue of data buffers and at least one of a queue of currently unused buffers or a ring data structure;a cache memory coupled to the processor to store information describing a structure of the queue of data buffers and information describing at least one of a structure of the queue of currently unused buffers or a structure of the memory ring; anda second memory to store instructions that, when applied to the processor, cause the processor to:receive a request to write data to a memory ring data structure;issue a command, in response to the request, the command specifying a pointer to a memory location where the data is to be inserted, said pointer describing a structure of the memory ring;write data to a memory ring address identified by the information describing a structure of the memory ring;increment a pointer to a memory location where data is to be inserted, said pointer describing a structure of the memory ring;increment the number of entries in the ring, said number of entries describing a structure of the memory ring; andstore the modified pointer and number of entries which describe a structure of the memory ring in a cache memory having entries to store information describing a structure of a queue of data buffers or a structure of a queue of currently unused buffers.
  • 36. The apparatus of claim 35 wherein the command specifies a length field and wherein the number of entries in the ring is incremented by the specified length field.
  • 37. The apparatus of claim 35 wherein the second memory further includes instructions to cause the processor to: return to an issuing programming engine thread, in response to the issued command, the number of entries in the ring, said number of entries describing a structure of the memory ring data structure; andreturn a status bit that indicates whether sufficient memory is available to cause data to be written successfully to the memory ring address identified by a pointer to a memory location where data is to be inserted, said pointer describing a structure of the memory ring.
  • 38. The apparatus of claim 35 wherein the command further specifies a memory address from which to obtain data that is to be written to the memory ring.
  • 39. The apparatus of claim 38 wherein the data contains bits representing an output message from an executing program.
  • 40. The apparatus of claim 39 wherein the bits also include a programming engine identification and a context identification.
  • 41. The apparatus of claim 38 wherein the data contains bits representing a command address, said bits also include a programming engine identification and a context identification.
  • 42. The apparatus of claim 38 wherein the second memory further includes instructions to cause the processor to: write data to a memory ring address identified by the information describing a structure of the memory ring;increment a pointer to the memory location where the data is to be inserted, said pointer describing a structure of the memory ring;increment, by the specified length field, the number of entries in the ring, said number of entries describing a structure of the memory ring; andstore the modified pointer and number of entries describing a structure of the memory ring in a cache memory having entries to store information describing a structure of a queue of data buffers.
  • 43. The apparatus of claim 38 wherein the cache memory can be used to store information about multiple memory ring data structures.
  • 44. An apparatus comprising: a processor providing a queue manager;a first memory coupled to the processor to store a queue of data buffers and at least one of a queue of currently unused buffers or a ring data structure;a cache memory coupled to the processor to store information describing a structure of the queue of data buffers and information describing at least one of a structure of the queue of currently unused buffers or a structure of the memory ring; anda second memory to store instructions that, when applied to the processor, cause the processor to:receive a request to read data from a memory ring data structure; andissue a command, in response to the request, specifying a pointer to a memory location from which the data is to be removed, said pointer describing a structure of the memory ring;read data from a memory ring address identified by the information describing a structure of the memory ring;increment a pointer to a memory location where data is to be removed, said pointer describing a structure of the memory ring;decrement the number of entries in the ring, said number of entries describing a structure of the memory ring; andstore the modified pointer and number of entries which describe a structure of the memory ring in a cache memory having entries to store information describing a structure of a queue of data buffers.
  • 45. The apparatus of claim 44 wherein the command specifies a length field and wherein the number of entries in the ring is decremented by the specified length field.
  • 46. The apparatus of claim 44 wherein the second memory further includes instructions to cause the processor to return an identifier to an issuing programming engine context, in response to the issued command, when the number of entries in the ring, said number of entries describing a structure of the memory ring is less than the specified length field.
  • 47. A system comprising: a source of data packets;a destination of data packets; anda device operating to transfer data packets from the source to the destination comprising:a processor providing a queue manager;a first memory coupled to the processor to store a queue of data buffers and at least one of a queue of currently unused buffers or a ring data structure;a cache memory coupled to the processor to store information describing a structure of the queue of data buffers and information describing at least one of a structure of the queue of currently unused buffers or a structure of the memory ring; anda second memory to store instructions that, when applied to the processor, cause the processor to:remove a data buffer from a linked list of data buffers;process information in the removed data buffer;append the data buffer to a queue of currently unused buffers;store information describing a structure of a queue of currently unused buffers and a queue of data buffers; andmodify information describing a structure of the queue of currently unused buffers.
  • 48. The system of claim 47 wherein the storing is performed using the cache memory having entries to store information describing a structure of a queue of data buffers or a structure of a queue of currently unused buffers.
  • 49. The system of claim 47 comprising the second memory storing instructions that, when applied to the processor, further cause the processor to store currently unused buffers using a linked list data structure.
  • 50. The system of claim 47 wherein the data buffer comprises a network packet and wherein the network packet comprises a packet received in a network processor.
  • 51. A system comprising: a source of data packets; a destination of data packets; and a device operating to transfer data packets from the source to the destination comprising: a processor providing a queue manager; a first memory coupled to the processor to store a queue of data buffers and at least one of a queue of currently unused buffers or a ring data structure; a cache memory coupled to the processor to store information describing a structure of a queue of data buffers and information describing at least one of a structure of the queue of currently unused buffers or a structure of the memory ring; and a second memory to store instructions that, when applied to the processor, cause the processor to: remove a currently unused buffer from a queue of currently unused buffers; process a newly received data packet; store the newly received data packet in the removed buffer; append the removed buffer to a linked list of data buffers; store information describing a structure of a queue of currently unused buffers and a queue of data buffers in a cache memory having entries to store information describing a structure of a queue of data buffers or a structure of a queue of currently unused buffers; and modify information describing a structure of the queue of currently unused buffers. (This receive-side sequence is illustrated in Sketch 3 following the claims.)
  • 52. The system of claim 51 comprising the second memory storing instructions that, when applied to the processor, further cause the processor to store currently unused buffers using a linked list data structure.
  • 53. An article comprising a storage medium having stored thereon instructions that, when executed by a machine, cause the machine to: check a content addressable memory for a tag corresponding to a queue of data buffers associated with a dequeue request; access a queue descriptor, for the queue of data buffers, in a cache memory based on a result of the check; remove a data buffer from the queue of data buffers using the queue descriptor from the cache memory; process information in the removed data buffer; and append the data buffer to a queue of currently unused buffers in response to an enqueue request. (The tag check and descriptor access are illustrated in Sketch 4 following the claims.)
  • 54. The article of claim 53 including instructions that, when executed by a machine, cause the machine to store currently unused buffers using a linked list data structure.
  • 55. The article of claim 53 further including instructions that, when executed by a machine, cause the machine to store currently unused buffers using a stack data structure.
  • 56. The article of claim 53 wherein the data buffer comprises a network packet and wherein the network packet comprises a packet received in a network processor.
  • 57. The article of claim 53 further including instructions that, when executed by a machine, cause the machine to modify information describing a structure of the queue of currently unused buffers.
  • 58. The article of claim 57 wherein the removing is performed in response to receiving a data packet in a receive pipeline, and the appending and modifying are performed in response to receiving the enqueue request.
  • 59. An article comprising a storage medium having stored thereon instructions that, when executed by a machine, cause the machine to: remove a currently unused buffer from a queue of currently unused buffers in response to a dequeue request; process a newly received data packet; store the newly received data packet in the removed buffer; check a content addressable memory for a tag corresponding to a queue of data buffers associated with an enqueue request; access a queue descriptor, for the queue of data buffers, in a cache memory based on a result of the check; append the removed buffer to the queue of data buffers using the queue descriptor from the cache memory.
  • 60. The article of claim 59 further including instructions that, when executed by a machine, cause the machine to store currently unused buffers using a linked list data structure.
  • 61. The article of claim 59 further including instructions that, when executed by a machine, cause the machine to store currently unused buffers using a stack data structure.
  • 62. The article of claim 59 wherein the data buffer comprises a network packet and wherein the network packet comprises a packet received in a network processor.
  • 63. The article of claim 59 further including instructions that, when executed by a machine, cause the machine to modify information describing a structure of the queue of currently unused buffers.
  • 64. The article of claim 63 wherein the removing is performed in response to receiving a data packet in the receive pipeline, and storing and modifying are performed in response to receiving the dequeue request.
  • 65. An article comprising a storage medium having stored thereon instructions that, when executed by a machine, cause the machine to: receive a request to write data to a memory ring data structure; and issue a command, in response to the request, the command specifying a pointer to a memory location where the data is to be inserted, said pointer describing a structure of the memory ring; write data to a memory ring address identified by the information describing a structure of the memory ring; increment a pointer to a memory location where data is to be inserted, said pointer describing a structure of the memory ring; increment the number of entries in the ring, said number of entries describing a structure of the memory ring; and store the modified pointer and number of entries which describe a structure of the memory ring in a cache memory having entries to store information describing a structure of a queue of data buffers or a structure of a queue of currently unused buffers. (This put sequence is illustrated in Sketch 5 following the claims.)
  • 66. The article of claim 65 wherein the command specifies a length field and wherein the number of entries in the ring is incremented by the specified length field.
  • 67. The article of claim 65 further including instructions that, when executed by a machine, cause the machine to: return to an issuing programming engine thread, in response to the issued command, the number of entries in the ring, said number of entries describing a structure of the memory ring data structure; and return a status bit that indicates whether sufficient memory is available to cause data to be written successfully to the memory ring address identified by a pointer to the memory location where data is to be inserted, said pointer describing a structure of the memory ring.
  • 68. The article of claim 65 wherein the command further specifies a memory address from which to obtain data that is to be written to the memory ring.
  • 69. The article of claim 68 wherein the data contains bits representing an output message from an executing program.
  • 70. The article of claim 69 wherein the bits include a programming engine identification and a context identification.
  • 71. The article of claim 68 wherein the data contains bits representing a command address, said bits also include a programming engine identification and a context identification.
  • 72. The article of claim 68 further including instructions that, when executed by a machine, cause the machine to: write data to a memory ring address identified by the information describing a structure of the memory ring; increment a pointer to the memory location where the data is to be inserted, said pointer describing a structure of the memory ring; increment, by the specified length field, the number of entries in the ring, said number of entries describing a structure of the memory ring; and store the modified pointer and number of entries describing a structure of the memory ring in a cache memory of which a subset of entries may be used to store information describing a queue of data buffers.
  • 73. The article of claim 68 wherein the cache memory can be used to store information about multiple memory ring data structures.
  • 74. An article comprising a storage medium having stored thereon instructions that, when executed by a machine, cause the machine to: receive a request to read data from a memory ring data structure; and issue a command, in response to the request, specifying a pointer to a memory location where the data is to be removed, said pointer describing a structure of the memory ring; read data from a memory ring address identified by the information describing a structure of the memory ring; increment a pointer to a memory location where data is to be removed, said pointer describing a structure of the memory ring; decrement the number of entries in the ring, said number of entries describing a structure of the memory ring; and store the modified pointer and number of entries describing a structure of the memory ring in a cache memory having entries to store information describing a structure of a queue of data buffers.
  • 75. The article of claim 74 wherein the command specifies a length field and wherein the number of entries in the ring is decremented by the specified length field.
  • 76. The article of claim 74 further including instructions that, when executed by a machine, cause the machine to return an identifier to an issuing programming engine context, in response to the issued command, when the number of entries in the ring, said number of entries describing a structure of the memory ring, is less than the specified length field.
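The sketches below are editorial illustrations of the claimed operations, not part of the claims themselves; every structure, field, and function name in them is hypothetical. Sketch 1 models the get sequence of claims 44-46 (and, identically, claims 74-76): read at the remove pointer, increment it with wraparound, decrement the entry count by the length field, and signal the issuing context when too few entries are present.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical cached descriptor for one memory ring; in the patent, this
 * state shares a cache with the queue-of-buffers descriptors. */
struct ring_descriptor {
    uint32_t *base;   /* first address of the statically allocated ring */
    size_t    size;   /* total number of slots in the ring */
    size_t    remove; /* index of the next entry to be removed */
    size_t    count;  /* number of valid entries currently in the ring */
};

/* Get command: read `len` entries from the ring. Returns 0 on success and
 * -1 (standing in for the "identifier" of claims 46 and 76) when fewer
 * than `len` entries are present. */
int ring_get(struct ring_descriptor *rd, uint32_t *out, size_t len)
{
    if (rd->count < len)
        return -1;                                /* too few entries: notify caller */

    for (size_t i = 0; i < len; i++) {
        out[i] = rd->base[rd->remove];            /* read at the remove pointer */
        rd->remove = (rd->remove + 1) % rd->size; /* increment with wraparound */
    }
    rd->count -= len;  /* decrement the count by the length field (claim 45) */
    /* The modified remove pointer and count would now be stored back in the
     * cache entry, alongside the queue descriptors. */
    return 0;
}
```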
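Sketch 2 models the transmit-side sequence of claims 47-50: a buffer is removed from the head of a linked list of data buffers, its contents are processed, and the buffer is appended to the queue of currently unused buffers, after which the modified descriptors would be written back to the cache. The list layout is an assumption, not prescribed by the patent.

```c
#include <stddef.h>

/* Hypothetical buffer carrying packet data plus a link for chaining. */
struct buffer {
    struct buffer *next;
    unsigned char  data[2048];
    size_t         len;
};

/* Head/tail descriptor for a linked list of buffers; this is the kind of
 * state the patent keeps and modifies in the cache memory. */
struct queue_descriptor {
    struct buffer *head;
    struct buffer *tail;
    size_t         count;
};

/* Claims 47-49: dequeue a data buffer, process it, and return it to the
 * queue of currently unused buffers (itself a linked list, per claim 49). */
void transmit_one(struct queue_descriptor *dataq,
                  struct queue_descriptor *freelist,
                  void (*process)(struct buffer *))
{
    struct buffer *buf = dataq->head;
    if (buf == NULL)
        return;                          /* no data buffer queued */

    dataq->head = buf->next;             /* remove from the linked list */
    if (dataq->head == NULL)
        dataq->tail = NULL;
    dataq->count--;

    process(buf);                        /* process the buffer's contents */

    buf->next = NULL;                    /* append to the free list */
    if (freelist->tail != NULL)
        freelist->tail->next = buf;
    else
        freelist->head = buf;
    freelist->tail = buf;
    freelist->count++;                   /* the modified free-list descriptor
                                            is then written back to the cache */
}
```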
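Sketch 3 models the receive-side sequence of claims 51-52 (and the corresponding steps of claims 59-64): a currently unused buffer is removed from the free list, the newly received packet is stored in it, and the buffer is appended to the linked list of data buffers. The same hypothetical layout as Sketch 2 is assumed.

```c
#include <stddef.h>
#include <string.h>

struct buffer { struct buffer *next; unsigned char data[2048]; size_t len; };
struct queue_descriptor { struct buffer *head, *tail; size_t count; };

/* Claims 51-52: take an unused buffer from the free list, store the newly
 * received packet in it, and append it to the linked list of data buffers. */
struct buffer *receive_one(struct queue_descriptor *freelist,
                           struct queue_descriptor *dataq,
                           const unsigned char *pkt, size_t len)
{
    struct buffer *buf = freelist->head;
    if (buf == NULL || len > sizeof buf->data)
        return NULL;                   /* no free buffer, or packet too large */

    freelist->head = buf->next;        /* remove from the queue of unused buffers */
    if (freelist->head == NULL)
        freelist->tail = NULL;
    freelist->count--;                 /* modified free-list descriptor is
                                          written back to the cache entry */

    memcpy(buf->data, pkt, len);       /* store the newly received packet */
    buf->len  = len;
    buf->next = NULL;

    if (dataq->tail != NULL)           /* append to the linked list of data buffers */
        dataq->tail->next = buf;
    else
        dataq->head = buf;
    dataq->tail = buf;
    dataq->count++;
    return buf;
}
```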
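Sketch 4 models the content-addressable-memory check of claims 53 and 59: the CAM is searched for a tag corresponding to the queue named in an enqueue or dequeue request; on a hit the cached queue descriptor is used directly, and on a miss a victim descriptor is written back and the requested one is fetched into the cache. The array sizes, the eviction policy, and the in-memory backing store are all simplifying assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define CACHE_ENTRIES 16     /* hypothetical number of cached descriptors */
#define NUM_QUEUES    1024   /* hypothetical number of queues; queue_id < NUM_QUEUES */

struct queue_descriptor { uint32_t head, tail, count; };

/* backing[] stands in for the slower first memory holding all descriptors;
 * tags[]/valid[] model the content addressable memory, and cache_mem[] the
 * cache memory of claims 53 and 59. */
static struct queue_descriptor backing[NUM_QUEUES];
static uint32_t                tags[CACHE_ENTRIES];
static bool                    valid[CACHE_ENTRIES];
static struct queue_descriptor cache_mem[CACHE_ENTRIES];

/* Check the CAM for the queue's tag; on a miss, write back one victim entry
 * and fetch the requested descriptor so the enqueue or dequeue operation can
 * proceed against the cached copy. */
struct queue_descriptor *lookup_queue(uint32_t queue_id)
{
    for (int i = 0; i < CACHE_ENTRIES; i++)
        if (valid[i] && tags[i] == queue_id)
            return &cache_mem[i];              /* CAM hit: descriptor is cached */

    int victim = 0;                            /* trivial eviction choice; a real
                                                  design might track LRU instead */
    if (valid[victim])
        backing[tags[victim]] = cache_mem[victim]; /* write back the old descriptor */

    tags[victim]      = queue_id;              /* install the new tag in the CAM */
    cache_mem[victim] = backing[queue_id];     /* fetch the descriptor into the cache */
    valid[victim]     = true;
    return &cache_mem[victim];
}
```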
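Sketch 5 models the put sequence of claims 65-67: write at the insert pointer, increment it with wraparound, increment the entry count by the length field, and return the entry count together with a status bit indicating whether sufficient memory was available.

```c
#include <stddef.h>
#include <stdint.h>

struct ring_descriptor {
    uint32_t *base;    /* first address of the statically allocated ring */
    size_t    size;    /* total number of slots in the ring */
    size_t    insert;  /* index where the next entry is written */
    size_t    count;   /* number of valid entries currently in the ring */
};

/* What the put command returns to the issuing thread per claim 67. */
struct put_result {
    size_t count;      /* number of entries in the ring */
    int    ok;         /* status bit: nonzero iff the write succeeded */
};

/* Put command: write `len` entries into the ring, as in claims 65-66. */
struct put_result ring_put(struct ring_descriptor *rd,
                           const uint32_t *data, size_t len)
{
    struct put_result r = { rd->count, 0 };
    if (rd->size - rd->count < len)   /* insufficient room: status bit stays 0 */
        return r;

    for (size_t i = 0; i < len; i++) {
        rd->base[rd->insert] = data[i];            /* write at the insert pointer */
        rd->insert = (rd->insert + 1) % rd->size;  /* increment with wraparound */
    }
    rd->count += len;  /* increment the count by the length field (claim 66) */
    /* The modified insert pointer and count are then stored back in the cache
     * entry shared with the queue descriptors (claim 72). */
    r.count = rd->count;
    r.ok    = 1;
    return r;
}
```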