Processing a data packet

Information

  • Patent Grant
  • Patent Number
    7,443,836
  • Date Filed
    Monday, June 16, 2003
  • Date Issued
    Tuesday, October 28, 2008
Abstract
A device and method for processing a data packet at a device are described. The device receives data packets and determines available memory in one or more of local memories of a plurality of execution threads. The device stores packet information in an available one of the local memories of the execution threads.
Description
BACKGROUND

Networks enable computers and other devices to exchange data such as e-mail messages, web pages, audio, video, and so forth. To send data across a network, a sending device typically constructs a collection of packets. Individual packets store some portion of the data being sent. A receiver can reassemble the data into its original form after receiving the packets.


A packet traveling across a network may make many “hops” to intermediate network devices before reaching its final destination. A packet includes data being sent and information used to deliver the packet. The data and the delivery information are often stored in the packet's “payload” and “header(s)”, respectively. The header(s) may include information for a number of different communication protocols that define the information that should be stored in a packet. Different protocols may operate at different layers. For example, a low level layer generally known as the “link layer” coordinates transmission of data over physical connections. A higher level layer generally known as the “network layer” handles routing, switching, and other tasks that determine how to move a packet forward through a network.


Many different hardware and software schemes have been developed to handle packets. For example, some designs use software to program a general purpose CPU (Central Processing Unit) to process packets. Other designs use components such as ASICs (application-specific integrated circuits) that feature dedicated, “hard-wired” approaches. Field programmable processors enable software programmers to quickly reprogram network processor operations.





DESCRIPTION OF DRAWINGS


FIG. 1 is a block diagram of a communication system employing a hardware-based multithreaded processor.



FIG. 2 is a block diagram of a microengine unit employed in the hardware-based multithreaded processor of FIG. 1.



FIG. 3 is a diagram of the processing of a packet.



FIG. 4 is a flow chart of the processing of a packet.



FIG. 5 is a flow chart of the initial handling and storing of packet information prior to processing by the threads.





DETAILED DESCRIPTION

Referring to FIG. 1, a communication system 10 includes a parallel, hardware-based multithreaded processor 12. The hardware-based multithreaded processor 12 is coupled to a bus such as a Peripheral Component Interconnect (PCI) bus 14, a memory system 16 and a second bus 18. The system 10 is especially useful for tasks that can be broken into parallel subtasks. Specifically, the hardware-based multithreaded processor 12 is useful for tasks that are bandwidth oriented rather than latency oriented. The hardware-based multithreaded processor 12 has multiple microengines 22, each with multiple hardware controlled program threads that can be simultaneously active and independently work on a task. A program thread is an independent program that runs a series of instructions. From the program's point-of-view, a program thread is the information needed to serve one individual user or a particular service request.


The hardware-based multithreaded processor 12 also includes a central controller 20 that assists in loading microcode control for other resources of the hardware-based multithreaded processor 12 and performs other general purpose computer type tasks such as handling protocols, exceptions, and extra support for packet processing where the microengines pass the packets off for more detailed processing such as in boundary conditions. In one embodiment, the processor 20 is a Strong Arm® (Arm is a trademark of ARM Limited, United Kingdom) based architecture. The general purpose microprocessor 20 has an operating system. Through the operating system the processor 20 can call functions to operate on the microengines 22a-22f. The processor 20 can use any supported operating system, preferably a real-time operating system. For the core processor implemented as a Strong Arm architecture, operating systems such as Microsoft NT Real-Time, VxWorks, and TCUS, a freeware operating system available over the Internet, can be used.


The hardware-based multithreaded processor 12 also includes a plurality of microengines 22a-22f. Microengines 22a-22f each maintain a plurality of program counters in hardware and states associated with the program counters. Effectively, a corresponding plurality of sets of program threads can be simultaneously active on each of the microengines 22a-22f while only one is actually operating at one time.


In one embodiment, there are six microengines 22a-22f, each having capabilities for processing four hardware program threads. The six microengines 22a-22f operate with shared resources including memory system 16 and bus interfaces 24 and 28. The memory system 16 includes a Synchronous Dynamic Random Access Memory (SDRAM) controller 26a and a Static Random Access Memory (SRAM) controller 26b. SDRAM memory 16a and SDRAM controller 26a are typically used for processing large volumes of data, e.g., processing of network payloads from network packets. The SRAM controller 26b and SRAM memory 16b are used in a networking implementation for low latency, fast access tasks, e.g., accessing look-up tables, memory for the core processor 20, and so forth.


Hardware context swapping enables other contexts with unique program counters to execute in the same microengine. Hardware context swapping also synchronizes completion of tasks. For example, two program threads could request the same shared resource, e.g., SRAM. Each of these separate units, e.g., the FBUS interface 28, the SRAM controller 26b, and the SDRAM controller 26a, reports back a flag signaling completion of an operation when it completes a requested task from one of the microengine program thread contexts. When the flag is received by the microengine, the microengine can determine which program thread to turn on.
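For illustration only, the following sketch expresses this completion-flag handshake in software terms: issuing a memory reference parks the requesting thread, and the completion flag from the shared resource makes it runnable again. The bitmasks, widths, and function names are assumptions for the sketch, not details of the hardware described here.

```c
#include <stdint.h>

static uint8_t waiting_mask; /* bit i set: thread i is waiting on a reference */
static uint8_t ready_mask;   /* bit i set: thread i may be scheduled          */

/* Issuing a memory reference parks the thread until its flag comes back. */
void issue_memory_reference(int thread_id)
{
    waiting_mask |= (uint8_t)(1u << thread_id);
    ready_mask   &= (uint8_t)~(1u << thread_id);
}

/* Called when a shared resource (e.g., a memory controller) reports
 * completion of the request issued by the given thread. */
void on_completion_flag(int thread_id)
{
    waiting_mask &= (uint8_t)~(1u << thread_id);
    ready_mask   |= (uint8_t)(1u << thread_id);  /* thread can be turned on again */
}
```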


As a network processor, the hardware-based multithreaded processor 12 interfaces to network devices such as a media access controller device, e.g., a 10/100BaseT Octal MAC 13a or a Gigabit Ethernet device 13b, coupled to communication ports or other physical layer devices. In general, as a network processor, the hardware-based multithreaded processor 12 can interface to any type of communication device or interface that receives/sends large amounts of data. In a networking application, the system 10 can function as a router that routes network packets amongst the devices 13a, 13b in a parallel manner. With the hardware-based multithreaded processor 12, each network packet can be independently processed.


The processor 12 includes a bus interface 28 that couples the processor to the second bus 18. Bus interface 28 in one embodiment couples the processor 12 to the so-called FBUS 18 (FIFO bus). The FBUS interface 28 is responsible for controlling and interfacing the processor 12 to the FBUS 18. The FBUS 18 is a 64-bit wide FIFO bus used to interface to Media Access Controller (MAC) devices. The processor 12 includes a second interface, e.g., a PCI bus interface 24, that couples other system components that reside on the PCI 14 bus to the processor 12. The units are coupled to one or more internal buses. The internal buses are dual, 32-bit buses (e.g., one bus for read and one for write). The hardware-based multithreaded processor 12 also is constructed such that the sum of the bandwidths of the internal buses in the processor 12 exceeds the bandwidth of external buses coupled to the processor 12. The processor 12 includes an internal core processor bus 32, e.g., an ASB bus (Advanced System Bus), that couples the processor core 20 to the memory controllers 26a, 26b and to an ASB translator 30 described below. The ASB bus is a subset of the so-called AMBA bus that is used with the Strong Arm processor core. The processor 12 also includes a private bus 34 that couples the microengine units to SRAM controller 26b, ASB translator 30 and FBUS interface 28. A memory bus 38 couples the memory controllers 26a, 26b to the bus interfaces 24 and 28 and memory system 16, including flashrom 16c used for boot operations and so forth.


Each of the microengines 22a-22f includes an arbiter that examines flags to determine the available program threads to be operated upon. The program threads of the microengines 22a-22f can access the SDRAM controller 26a, SRAM controller 26b or FBUS interface 28. The SDRAM controller 26a and SRAM controller 26b each include a plurality of queues to store outstanding memory reference requests. The queues either maintain order of memory references or arrange memory references to optimize memory bandwidth.


Although microengines 22 can use the register set to exchange data, a scratchpad or shared memory is also provided to permit microengines to write data out to the memory for other microengines to read. The scratchpad is coupled to bus 34.


Referring to FIG. 2, an exemplary one of the microengines 22a-22f, e.g., microengine 22f, is shown. The microengine includes a control store 70 which, in one implementation, includes a RAM of 1,024 words of 32 bits. The RAM stores a microprogram that is loadable by the core processor 20. The microengine 22f also includes controller logic 72. The controller logic includes an instruction decoder 73 and program counter (PC) units 72a-72d. The four micro program counters 72a-72d are maintained in hardware. The microengine 22f also includes context event switching logic 74. Context event logic 74 receives messages (e.g., SEQ_#_EVENT_RESPONSE; FBI_EVENT_RESPONSE; SRAM_EVENT_RESPONSE; SDRAM_EVENT_RESPONSE; and ASB_EVENT_RESPONSE) from each one of the shared resources, e.g., SRAM controller 26b, SDRAM controller 26a, or processor core 20, control and status registers, and so forth. These messages provide information on whether a requested task has completed. If a task requested by a program thread has not completed and signaled completion, the program thread needs to wait for that completion signal; once the program thread is enabled to operate, it is placed on an available program thread list (not shown).


In addition to event signals that are local to an executing program thread, the microengines 22 employ signaling states that are global. With signaling states, an executing program thread can broadcast a signal state to the microengines 22. The program thread in the microengines can branch on these signaling states. These signaling states can be used to determine availability of a resource or whether a resource is due for servicing.


The context event logic 74 has arbitration for the program threads. In one embodiment, the arbitration is a round robin mechanism. Other techniques could be used including priority queuing or weighted fair queuing. The microengine 22f also includes an execution box (EBOX) data path 76 that includes an arithmetic logic unit 76a and general purpose register set 76b. The arithmetic logic unit 76a performs arithmetic and logic operations as well as shift operations. The register set 76b has a relatively large number of general purpose registers. In this implementation there are 64 general purpose registers in a first bank, Bank A, and 64 in a second bank, Bank B. The general purpose registers are windowed so that they are relatively and absolutely addressable.
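As an illustration of the round robin arbitration mentioned above, the following minimal sketch picks the next ready context starting after the one that ran last. The ready bitmask, thread count, and function name are assumptions for the example, not part of the hardware described here.

```c
#include <stdint.h>

#define NUM_THREADS 4

/* Pick the next ready thread, starting the search just after the thread that
 * ran last so every context gets a fair turn. Returns -1 if none is ready. */
int next_thread_round_robin(uint8_t ready_mask, int last_thread)
{
    for (int i = 1; i <= NUM_THREADS; i++) {
        int candidate = (last_thread + i) % NUM_THREADS;
        if (ready_mask & (1u << candidate))
            return candidate;
    }
    return -1; /* no context is ready to run */
}
```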


The microengine 22f also includes a write transfer register stack 78 and a read transfer stack 80. These registers are also windowed so that they are relatively and absolutely addressable. Write transfer register stack 78 is where write data to a resource is located. Similarly, read register stack 80 is for return data from a shared resource. Subsequent to or concurrent with data arrival, an event signal from the respective shared resource, e.g., the SRAM controller 26b, SDRAM controller 26a or core processor 20, will be provided to context event arbiter 74 which will then alert the program thread that the data is available or has been sent. Both transfer register banks 78 and 80 are connected to the execution box (EBOX) 76 through a data path. In one implementation, the read transfer register has 64 registers and the write transfer register has 64 registers.


Each microengine 22a-22f supports multi-threaded execution of multiple contexts. One reason for this is to allow one program thread to start executing just after another program thread issues a memory reference and must wait until that reference completes before doing more work. This behavior maintains efficient hardware execution of the microengines because memory latency is significant.


Special techniques such as inter-thread communications to communicate status and a thread_done register to provide a global program thread communication scheme are used for packet processing. The thread_done register can be implemented as a control and status register. Network operations are implemented in the network processor using a plurality of program threads, e.g., contexts, to process network packets. For example, scheduler program threads could be executed in one of the microprogram engines, e.g., 22a, whereas processing program threads could execute in the remaining engines, e.g., 22b-22f. The program threads (processing or scheduling program threads) use inter-thread communications to communicate status.
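For illustration, a minimal sketch of how a thread_done style control and status register could be used, assuming each program thread owns a two-bit status field that a scheduler thread reads and clears as a whole; the field width, register layout, and function names are assumptions, not details taken from the text.

```c
#include <stdint.h>

static volatile uint32_t thread_done; /* software stand-in for the CSR */

/* Worker thread reports a 2-bit status code in its own field. */
void report_done(int thread_id, uint32_t status)
{
    thread_done |= (status & 0x3u) << (thread_id * 2);
}

/* Scheduler snapshots and clears the whole register, learning the status of
 * many program threads in one read. */
uint32_t collect_done(void)
{
    uint32_t snapshot = thread_done;
    thread_done = 0;
    return snapshot;
}
```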


Program threads are assigned specific tasks such as receive and transmit scheduling, receive processing, and transmit processing. Task assignment and task completion are communicated between program threads through inter-thread signaling; through registers with specialized read and write characteristics, e.g., the thread_done register; through SRAM 16b; and through data stored in the internal scratchpad memory resulting from operations such as bit set and bit clear.


Referring to FIG. 3, the packet dispatcher 302 resides on a processor inside the network processor and requests packets from the network interface. The packet dispatcher 302 is notified when a packet segment (e.g., 128 bytes) has been received by a packet receiver buffer 304. The packet dispatcher 302 moves the packet segment payload into DRAM 306. The packet dispatcher 302 stores packet reassembly state information to reassemble the packet. As successive segments are received for a packet, the dispatcher 302 uses the state information to direct and assemble the segments in space allocated in DRAM 306 by the packet dispatcher 302.
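For illustration, a minimal sketch of the per-packet reassembly state such a dispatcher could keep while segments arrive, assuming segments for a given packet arrive in order; the structure fields, segment size, and helper names are assumptions rather than details of the text.

```c
#include <stdint.h>
#include <string.h>

#define SEG_SIZE 128  /* example segment size, per the 128-byte figure above */

struct reassembly_state {
    uint8_t *dram_base;      /* buffer allocated in DRAM for this packet   */
    uint32_t bytes_received; /* how much of the packet has arrived so far  */
    uint32_t total_length;   /* expected length of the complete packet     */
};

/* Copy one received segment into its place in the DRAM buffer. */
void add_segment(struct reassembly_state *st, const uint8_t *seg, uint32_t len)
{
    memcpy(st->dram_base + st->bytes_received, seg, len);
    st->bytes_received += len;
}

/* True once every byte of the packet has been assembled. */
int packet_complete(const struct reassembly_state *st)
{
    return st->bytes_received >= st->total_length;
}
```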


Each packet received is assigned a sequence number, in ascending order. The sequence number allows the packets to be dequeued in the order they were received. The sequence number range corresponds to a slot in a ring in memory called an Asynchronous Insert Synchronous Remove (AISR) 308 ring. When a thread 310 in the pool of threads has taken its assigned packet and finished processing the packet, the thread 310 sends the processed packet to DRAM 306. The thread also signals completion of the processed packet to the indexed location in the AISR 308, based on the packet's sequence number. This ensures that the results are stored in ascending addresses by order of packet arrival. The reorder dequeue 312 reads the AISR 308 in ascending order, checking to see if packet information has been assigned to the slot. The reorder dequeue 312 will continue checking the slot in the AISR 308 until packet information is found in the slot. The system provides a First In First Out (FIFO) routine while efficiently processing packets out of order.
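For illustration, the following sketch captures the AISR behavior described above: a processing thread writes its result at the slot indexed by the packet's sequence number, and the reorder dequeue polls slots in ascending order until each one is filled. The ring size and the names are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>

#define AISR_SLOTS 256

static void *aisr_ring[AISR_SLOTS];  /* NULL means "not yet processed"  */
static uint32_t dequeue_seq;         /* next sequence number to emit    */

/* Called by a processing thread when it finishes a packet. */
void aisr_insert(uint32_t seq, void *packet_info)
{
    aisr_ring[seq % AISR_SLOTS] = packet_info;
}

/* Called by the reorder dequeue: returns the next packet in arrival order,
 * or NULL if that packet has not finished processing yet, in which case the
 * caller keeps polling the same slot. */
void *aisr_dequeue(void)
{
    void *info = aisr_ring[dequeue_seq % AISR_SLOTS];
    if (info == NULL)
        return NULL;
    aisr_ring[dequeue_seq % AISR_SLOTS] = NULL;
    dequeue_seq++;
    return info;
}
```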


When a packet is received, the dispatcher 302 assigns the packet to a thread 310 in the pool of threads. Each thread in the pool makes itself available by signaling the dispatcher via either a thread mailbox 314 or a message CSR 316. Each thread 310 has a memory that allows the thread to work on a presently assigned packet and to store the next assigned packet. The thread 310 communicates its memory and processing availability, and its location, to the packet dispatcher 302. The dispatcher 302 communicates select packet state information back to the assigned threads. The packet state information can include, for example, the packet payload's address in DRAM 306 and the sequence number.
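For illustration, one possible shape for the packet state information handed to an assigned thread is sketched below; only the payload's DRAM address and the sequence number are named above, so the remaining fields are assumptions added for the example.

```c
#include <stdint.h>

/* Illustrative packet state information passed from dispatcher to thread. */
struct packet_state {
    uint64_t payload_dram_addr; /* where the reassembled payload lives        */
    uint32_t sequence_number;   /* slot index into the AISR ring              */
    uint16_t payload_length;    /* assumed field: total payload bytes         */
    uint16_t input_port;        /* assumed field: port the packet arrived on  */
};
```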


There are multiple methods by which the thread 310 can communicate its availability and the packet dispatcher 302 can assign a packet to that thread 310. A thread 310 can communicate its availability through a Control and Status Register (CSR) 316. Each thread can write to a few bits of the CSR 316. The packet dispatcher 302 can read and clear the CSR 316, thus providing the status of many threads at one time. Alternatively, the dispatcher 302 and threads 310 can communicate via “mailboxes” 314. The thread 310 can signal its availability by flagging or placing an identifier in the mailbox 314. The dispatcher polls each thread mailbox until it identifies an available thread. The dispatcher 302 can write the packet state information to the mailbox 314 for the available thread.
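For illustration, the following sketch shows both signaling paths described above, assuming one availability bit per thread in a shared CSR and one mailbox word per thread; the register layout and the sentinel values are assumptions for the example.

```c
#include <stdint.h>

#define NUM_THREADS 8
#define MAILBOX_EMPTY     0u   /* nothing in the mailbox                 */
#define MAILBOX_AVAILABLE 1u   /* thread's "I am free" identifier        */

static volatile uint32_t avail_csr;            /* CSR path: bit i = thread i free */
static volatile uint64_t mailbox[NUM_THREADS]; /* mailbox path                    */

/* Thread side: advertise availability on either path. */
void thread_signal_available_csr(int thread_id)
{
    avail_csr |= 1u << thread_id;
}

void thread_signal_available_mailbox(int thread_id)
{
    mailbox[thread_id] = MAILBOX_AVAILABLE;
}

/* Dispatcher side, CSR path: read and clear, learning many threads at once. */
uint32_t dispatcher_poll_csr(void)
{
    uint32_t snapshot = avail_csr;
    avail_csr = 0;
    return snapshot;
}

/* Dispatcher side, mailbox path: poll each mailbox; on finding an available
 * thread, overwrite its mailbox with the packet state information. */
int dispatcher_assign_via_mailbox(uint64_t packet_state)
{
    for (int i = 0; i < NUM_THREADS; i++) {
        if (mailbox[i] == MAILBOX_AVAILABLE) {
            mailbox[i] = packet_state;
            return i;
        }
    }
    return -1; /* no thread has signaled availability */
}
```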


The threads 310 in the pool can finish their assignment at any time. Some will take a long time, probing deep into the packet header. Others will finish early. Once the thread 310 is finished processing the packet, the thread sends the packet information to the AISR ring 308 in the location of the sequence number given to the packet during initial processing. The thread 310 is now available to process the next packet and signals its availability to the packet dispatcher 302. The reorder dequeue 312 cycles through the AISR ring 308 and dequeues the packets to the network based on the order the packets were received.


A backlog (or bottleneck) can result when the microengine receives an above-average number of packets that require in-depth processing. If the dispatcher 302 receives a new data packet from the network at a time when all the threads 310 are processing assigned data packets, then the dispatcher 302 is forced to drop the new packet, leave the packet in the packet receiver buffer 304, or find temporary storage for it. The dispatcher 302 has a memory 318. Similar to the AISR ring 308 discussed earlier, the dispatcher memory 318 is a ring that allows the dispatcher 302 to assign packet state information to a slot in the memory ring. The dispatcher 302 continues assigning newly enqueued packet state information sequentially to the slots of the memory ring 318. When threads 310 in the pool of threads become available, the dispatcher 302 assigns packet information starting with the oldest saved slot, sequentially assigning packets to the newly available threads' memory 310.
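For illustration, the dispatcher memory 318 can be modeled as an ordinary circular buffer of saved packet state, as in the following sketch; the slot count and the types are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define RING_SLOTS 64

struct dispatch_ring {
    uint64_t slot[RING_SLOTS]; /* saved packet state information */
    uint32_t head;             /* oldest saved entry             */
    uint32_t tail;             /* next free slot                 */
    uint32_t count;            /* number of saved entries        */
};

/* Save state for a newly received packet; returns false when the ring is
 * full and the dispatcher must fall back to another memory. */
bool ring_push(struct dispatch_ring *r, uint64_t state)
{
    if (r->count == RING_SLOTS)
        return false;
    r->slot[r->tail] = state;
    r->tail = (r->tail + 1) % RING_SLOTS;
    r->count++;
    return true;
}

/* Hand the oldest saved packet state to a newly available thread. */
bool ring_pop(struct dispatch_ring *r, uint64_t *state)
{
    if (r->count == 0)
        return false;
    *state = r->slot[r->head];
    r->head = (r->head + 1) % RING_SLOTS;
    r->count--;
    return true;
}
```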


If the backlog continues to the extent that all the slots of the dispatcher memory ring 318 are filled, in one embodiment the dispatcher starts to assign slots to a backup memory ring 320. This process is similar to the process of assigning and retrieving slot information from the memory ring 318. The difference is that the backup ring can use memory that would normally be allocated to other resources when there is no need for the backup ring. In another embodiment, the primary dispatcher memory ring 318 is made larger in order to handle the largest bottleneck of packet processing.


In one embodiment, the dispatcher 302 can use the microengine scratch memory 322 to store packet information. If a packet-processing bottleneck causes all the slots in the dispatcher memory 318 to become filled, the dispatcher 302 can assign packet information to the microengine scratch memory 322. Once the bottleneck is relieved the dispatcher 302 assigns the packet information in the scratch memory 322 to the available thread memory 310. The dispatcher 302 can also assign packet information to the DRAM 306 if the dispatcher memory 318 and the scratch memory 322 are filled due to the bottleneck. The dispatcher 302 can also assign packet information to the DRAM 306 if the dispatcher memory 318 is filled and the scratch memory 322 is filled with other data assigned to scratch memory by the microengine processor. The process provides for efficient storage of packet information during bottlenecks while restraining the use of DRAM 306 bandwidth and other memory resources of the microengine.


Referring to FIG. 4, the flowchart shows the processing of data packets 400 by the microengine. The data packet is received from the network into the receiver buffer 402. The dispatcher gives the data packet a packet sequence number and assigns a location in memory for the thread information 404. The sequence number allows the packets to be processed by the threads in an order independent of the order the threads will be dequeued back to the network or general processor. The threads independently communicate to the packet dispatcher regarding their available state 406. A thread 408 in the pool can make itself available even when it is busy processing a packet. The thread 408 stores the packet it is processing and stores the next packet intended for processing by the thread. This allows each thread 408 to handle two packets at a time. Once the dispatcher determines an available location in a thread 408, the packet dispatcher assigns the packet information to the memory of the available thread 416. If the dispatcher determines that there are no available threads at that time 408, the packet dispatcher stores the packet information temporarily in memory 410. The packet dispatcher continues to receive packets, process the packets (e.g. assign a sequence number, a storage location, and determine reassembly information), and store the packet information in the next sequential memory slot 412.


Once the dispatcher determines a thread is available 414, the dispatcher sends the packet information into the available thread's local memory 416. The thread processes the packet and then sends the packet information to the AISR ring in memory based on the sequence number in the packet information 420. The reorder dequeue sequentially pulls the packet information from the ring and sends the packet to the packet's future destination 422. In the case of a router, the packet would be sent onto the network to the next router on the packet's path to the packet's final destination.


Referring to FIG. 5, the dispatcher determines the most efficient location to store the packet information 500. By storing the packet information in a variety of locations, the dispatcher can efficiently use the microengine's memory and handle overflow produced by bottlenecks in thread processing. The packet is initially received into the receiver buffer 502. The dispatcher assigns the packet payload a location in memory and a sequence number 504. The dispatcher determines if the packet has been completely received and is ready for processing 506. If the packet is complete, the dispatcher determines if there is an available thread to process the packet 508. If a thread is available, the dispatcher can send the packet information directly to the available thread's memory 510. However, if there are no available threads or the packet has not been completely reassembled, the dispatcher determines the best location to store the packet information until both of these conditions are satisfied. The dispatcher checks the dispatcher's memory ring 512. If the memory ring is available, the dispatcher assigns the packet to a slot in the memory ring 514. If the memory ring is filled and unavailable, the dispatcher checks the memory slot availability of the dispatcher's backup memory. If the backup memory has space available, the packet information is assigned to a slot in the backup memory ring structure 516. When both backup and primary memory of the dispatcher are filled, the dispatcher will check the scratch memory of the microengine 520. If the scratch memory is available, the dispatcher will assign the packet information to the scratch memory 522. Otherwise the dispatcher can assign the packet information to DRAM 524. The process allows the dispatcher to assign packet information to a variety of memory locations rather than continually sending the overflow of packet information directly to DRAM. The system provides efficient use of the bandwidth of the DRAM and the scratch memory. The system also provides memory use for other processing resources when bottlenecks are not present and quickly stores packet information.
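For illustration, the storage-selection order of FIG. 5 can be summarized as a simple cascade, as in the sketch below; the predicates are assumed helpers supplied by the caller, and the enumerators merely echo the reference numerals above.

```c
#include <stdbool.h>

enum store_target {
    STORE_THREAD_MEMORY,  /* 510: directly to the available thread's memory */
    STORE_DISPATCH_RING,  /* 514: dispatcher memory ring                    */
    STORE_BACKUP_RING,    /* 516: dispatcher backup memory ring             */
    STORE_SCRATCH,        /* 522: microengine scratch memory                */
    STORE_DRAM            /* 524: last resort                               */
};

/* Pick where to put packet information, in the order FIG. 5 describes. */
enum store_target choose_store(bool packet_complete, bool thread_available,
                               bool ring_has_space, bool backup_has_space,
                               bool scratch_has_space)
{
    if (packet_complete && thread_available)
        return STORE_THREAD_MEMORY;
    if (ring_has_space)
        return STORE_DISPATCH_RING;
    if (backup_has_space)
        return STORE_BACKUP_RING;
    if (scratch_has_space)
        return STORE_SCRATCH;
    return STORE_DRAM;
}
```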


A number of embodiments of the packet processing have been described. Nevertheless, it will be understood that various modifications may be made without departing from the scope of the packet processing. Accordingly, other embodiments are within the scope of the following claims.

Claims
  • 1. A method of processing a data packet at a device, the method comprising: receiving a data packet; detecting available memory in one or more of local memories of a plurality of execution threads; when the detecting available memory in the one or more of local memories of the plurality of execution threads indicates that at least one of the plurality of execution threads is available to process the received data packet, storing packet information in an available one of the local memories of the execution threads; detecting available memory in a dispatcher memory that is separate from the local memories of the execution threads; and storing the packet information in the detected available dispatcher memory when the detecting available memory in the one or more of local memories of the plurality of execution threads indicates that the execution threads are unavailable to process the received data packet.
  • 2. The method of claim 1 further comprises: detecting available memory in shared memory; and storing the packet information in the detected available shared memory when the detecting available memory in the dispatcher memory indicates that there is no available memory in the dispatcher memory.
  • 3. The method of claim 2 further comprising: storing the packet information in random access memory when the detecting available memory in the shared memory indicates that there is no available memory in the shared memory.
  • 4. The method of claim 2 further comprising: storing the packet information in the detected available shared memory when detected that reassembly of the received data packet is not complete and the execution threads are not available to process the received data packet.
  • 5. The method of claim 2 wherein storing further comprises: storing the packet information in random access memory when detected that reassembly of the received data packet is not complete and the execution threads are not available to process the received data packet.
  • 6. The method of claim 1 further comprising: storing the packet information in the detected available dispatcher memory when detected that reassembly of the received data packet is not complete and the execution threads are not available to process the received data packet.
  • 7. The method of claim 1 further comprises: detecting available memory in the dispatcher backup memory; and storing the packet information in the detected available dispatcher backup memory when the detecting available memory in the dispatcher memory indicates that the dispatcher memory is not available.
  • 8. The method of claim 1 wherein the data packet is received into a receiver buffer.
  • 9. A computer program product, disposed on a computer readable medium, for processing a data packet at a device, the program comprising instructions for causing a processor to: receive a data packet; detect available memory in one or more of local memories of a plurality of execution threads; when the detect available memory in one or more of local memories of a plurality of execution threads indicates that at least one of the plurality of execution threads is available to process the received data packet, store packet information in an available one of the local memories of the execution threads; detect available memory in a dispatcher memory that is separate from the local memories of the execution threads; and store the packet information in the detected available dispatcher memory when the detect available memory in the one or more of local memories of the plurality of execution threads indicates the execution threads are unavailable to process the received data packet.
  • 10. The program of claim 9 further comprising instructions for causing a processor to: detect available memory in shared memory; and store the packet information in the detected available shared memory when the detecting available memory in the dispatcher memory indicates that there is no available memory in the dispatcher memory.
  • 11. The program of claim 10 further comprising instructions for causing a processor to store the packet information in random access memory when the detecting available memory in the shared memory indicates there is no available memory in shared memory.
  • 12. The program of claim 10 wherein instructions for causing a processor to store further comprise instructions for causing a processor to: store the packet information in the detected available shared memory when detected that reassembly of the received data packet is not complete and the execution threads are not available to process the received data packet.
  • 13. The program of claim 9 further comprising instructions for causing a processor to: store the packet information in the detected available dispatcher memory when detected that reassembly of the received data packet is not complete and the execution threads are not available to process the received data packet.
  • 14. A system for processing a data packet, the system comprising: at least one communication port; at least one Ethernet MAC (Medium Access Control) device coupled to at least one of the at least one communication ports; and at least one processor comprising: one or more local memories of a plurality of execution threads, and a dispatcher memory that is separate from the one or more local memories of the execution threads, wherein the at least one processor is configured to access the at least one Ethernet MAC device, receive a data packet, detect available memory in the one or more of local memories of a plurality of execution threads, store packet information in an available one of the local memories of the execution threads when the detect available memory in one or more of the local memories of the plurality of execution threads indicates that at least one of the plurality of execution threads is available to process the received data packet, detect available memory in the dispatcher memory that is separate from the local memories of the execution threads, and store the packet information in the detected available dispatcher memory when the detect available memory in the one or more of local memories of the plurality of execution threads indicates that the execution threads are unavailable to process the received data packet.
  • 15. The system of claim 14 wherein the at least one processor is configured to: store the packet information in the detected available dispatcher memory when detected that reassembly of the received data packet is not complete and the execution threads are not available to process the received data packet.
  • 16. A system of claim 14 wherein the at least one processor comprises a shared memory, and the at least one processor is configured to: determine available memory in the shared memory, and store the packet information in the detected available shared memory when the detecting available memory in the dispatcher memory indicates that there is no available memory in the dispatcher memory.
  • 17. The system of claim 16 wherein the at least one processor is configured to: store the packet information in random access memory when the detecting available memory in the shared memory indicates that there is no available memory in the shared memory.
  • 18. The system of claim 16 wherein the at least one processor is configured to: store the packet information in the detected available shared memory when detected that reassembly of the received data packet is not complete and the execution threads are not available to process the received data packet.
  • 19. A device for processing a data packet comprising: a packet receiver buffer to receive a data packet; one or more local memories of a plurality of execution threads coupled to the packet receiver buffer; a packet dispatcher coupled to the packet receiver buffer to detect available memory in the one or more of local memories of a plurality of execution threads, and when the detect available memory in the one or more of local memories of the plurality of execution threads indicates that at least one of the plurality of execution threads are available to process the received data packet, store packet information in an available one of the local memories of the execution threads; and a dispatcher memory coupled to the packet dispatcher, wherein the packet dispatcher is configured to detect available memory in the dispatcher memory and when the detect available memory in the one or more of local memories of the plurality of execution threads indicates the execution threads are unavailable to process the received data packet, store the packet information in the detected available dispatcher memory.
  • 20. The device of claim 19 further comprises: a shared memory coupled to the packet dispatcher; and wherein the dispatcher further detects available memory in the shared memory, and stores the packet information in the detected available shared memory when the detected available memory in the dispatcher memory indicates that there is no available memory in the dispatcher memory.
  • 21. The device of claim 20 further comprising: random access memory; and wherein the dispatcher further stores the packet information in the random access memory when the detecting available memory in the shared memory indicates that there is no available memory in the shared memory.
  • 22. The device of claim 21 wherein: the dispatcher further stores the packet information in random access memory when detected that reassembly of the received data packet is not complete and the execution threads are not available to process the received data packet.
  • 23. The device of claim 20 wherein: the dispatcher further stores the packet information in the detected available shared memory when detected that reassembly of the received data packet is not complete and the execution threads are not available to process the received data packet.
  • 24. The device of claim 19 wherein: the dispatcher further stores the packet information in the detected available dispatcher memory when detected that reassembly of the received data packet is not complete and the execution threads are not available to process the received data packet.
  • 25. The device of claim 19 wherein: the dispatcher further detects available memory in a dispatcher backup memory, and stores the packet information in the dispatcher backup memory when the detecting available memory in the dispatcher memory indicates that the dispatcher memory is unavailable.
  • 26. The device of claim 19 wherein the device is a component of a network processor.
US Referenced Citations (386)
Number Name Date Kind
3373408 Ling Mar 1968 A
3478322 Evans Nov 1969 A
3623001 Kleist et al. Nov 1971 A
3736566 Anderson et al. May 1973 A
3792441 Wymore et al. Feb 1974 A
3889243 Drimak Jun 1975 A
3940745 Sajeva Feb 1976 A
4016548 Law et al. Apr 1977 A
4032899 Jenny et al. Jun 1977 A
4075691 Davis et al. Feb 1978 A
4130890 Adam Dec 1978 A
4400770 Chan et al. Aug 1983 A
4514807 Nogi Apr 1985 A
4523272 Fukunaga et al. Jun 1985 A
4658351 Teng Apr 1987 A
4709347 Kirk Nov 1987 A
4745544 Renner et al. May 1988 A
4788640 Hansen Nov 1988 A
4831358 Ferrio et al. May 1989 A
4858108 Ogawa et al. Aug 1989 A
4866664 Burkhardt, Jr. et al. Sep 1989 A
4890218 Bram Dec 1989 A
4890222 Kirk Dec 1989 A
4991112 Callemyn Feb 1991 A
5115507 Callemyn May 1992 A
5140685 Sipple et al. Aug 1992 A
5142683 Burkhardt, Jr. et al. Aug 1992 A
5155831 Emma et al. Oct 1992 A
5155854 Flynn et al. Oct 1992 A
5168555 Byers et al. Dec 1992 A
5173897 Schrodi et al. Dec 1992 A
5251205 Callon et al. Oct 1993 A
5255239 Taborn et al. Oct 1993 A
5263169 Genusov et al. Nov 1993 A
5313454 Bustini et al. May 1994 A
5347648 Stamm et al. Sep 1994 A
5367678 Lee et al. Nov 1994 A
5379295 Yonehara Jan 1995 A
5379432 Orton et al. Jan 1995 A
5390329 Gaertner et al. Feb 1995 A
5392391 Caulk, Jr. et al. Feb 1995 A
5392411 Ozaki Feb 1995 A
5392412 McKenna Feb 1995 A
5404464 Bennett Apr 1995 A
5404469 Chung et al. Apr 1995 A
5404482 Stamm et al. Apr 1995 A
5432918 Stamm Jul 1995 A
5448702 Garcia, Jr. et al. Sep 1995 A
5450351 Heddes Sep 1995 A
5452437 Richey et al. Sep 1995 A
5452452 Gaetner et al. Sep 1995 A
5459842 Begun et al. Oct 1995 A
5459843 Davis et al. Oct 1995 A
5463625 Yasrebi Oct 1995 A
5467452 Blum et al. Nov 1995 A
5475856 Kogge Dec 1995 A
5485455 Dobbins et al. Jan 1996 A
5515296 Agarwal May 1996 A
5517648 Bertone et al. May 1996 A
5539737 Lo et al. Jul 1996 A
5542070 LeBlanc et al. Jul 1996 A
5542088 Jennings, Jr. et al. Jul 1996 A
5544236 Andruska et al. Aug 1996 A
5550816 Hardwick et al. Aug 1996 A
5557766 Takiguchi et al. Sep 1996 A
5568476 Sherer et al. Oct 1996 A
5568617 Kametani Oct 1996 A
5574922 James Nov 1996 A
5581729 Nistala et al. Dec 1996 A
5592622 Isfeld et al. Jan 1997 A
5613071 Rankin et al. Mar 1997 A
5613136 Casavant et al. Mar 1997 A
5617327 Duncan Apr 1997 A
5623489 Cotton et al. Apr 1997 A
5627829 Gleeson et al. May 1997 A
5630074 Beltran May 1997 A
5630130 Perotto et al. May 1997 A
5633865 Short May 1997 A
5644623 Gulledge Jul 1997 A
5649110 Ben-Nun et al. Jul 1997 A
5649157 Williams Jul 1997 A
5651002 Van Seters et al. Jul 1997 A
5659687 Kim et al. Aug 1997 A
5680641 Sidman Oct 1997 A
5689566 Nguyen Nov 1997 A
5692126 Templeton et al. Nov 1997 A
5699537 Sharangpani et al. Dec 1997 A
5701434 Nakagawa Dec 1997 A
5717898 Kagan et al. Feb 1998 A
5721870 Matsumoto Feb 1998 A
5724574 Stratigos et al. Mar 1998 A
5740402 Bratt et al. Apr 1998 A
5742587 Zornig et al. Apr 1998 A
5742782 Ito et al. Apr 1998 A
5742822 Motomura Apr 1998 A
5745913 Pattin et al. Apr 1998 A
5751987 Mahant-Shetti et al. May 1998 A
5754764 Davis et al. May 1998 A
5761507 Govett Jun 1998 A
5761522 Hisanaga et al. Jun 1998 A
5764915 Heimsoth et al. Jun 1998 A
5768528 Stumm Jun 1998 A
5781551 Born Jul 1998 A
5781774 Krick Jul 1998 A
5784649 Begur et al. Jul 1998 A
5784712 Byers et al. Jul 1998 A
5796413 Shipp et al. Aug 1998 A
5797043 Lewis et al. Aug 1998 A
5805816 Picazo, Jr. et al. Sep 1998 A
5809235 Sharma et al. Sep 1998 A
5809237 Watts et al. Sep 1998 A
5809530 Samra et al. Sep 1998 A
5812868 Moyer et al. Sep 1998 A
5828746 Ardon Oct 1998 A
5828863 Barrett et al. Oct 1998 A
5828881 Wang Oct 1998 A
5828901 O'Toole et al. Oct 1998 A
5832215 Kato et al. Nov 1998 A
5835755 Stellwagen, Jr. Nov 1998 A
5838988 Panwar et al. Nov 1998 A
5850399 Ganmukhi et al. Dec 1998 A
5850530 Chen et al. Dec 1998 A
5854922 Gravenstein et al. Dec 1998 A
5857188 Douglas Jan 1999 A
5860138 Engebretsen et al. Jan 1999 A
5860158 Pai et al. Jan 1999 A
5886992 Raatikainen et al. Mar 1999 A
5887134 Ebrahim Mar 1999 A
5890208 Kwon Mar 1999 A
5892979 Shiraki et al. Apr 1999 A
5898686 Virgile Apr 1999 A
5898701 Johnson Apr 1999 A
5905876 Pawlowski et al. May 1999 A
5905889 Wilhelm, Jr. May 1999 A
5909686 Muller et al. Jun 1999 A
5915123 Mirsky et al. Jun 1999 A
5918235 Kirshenbaum et al. Jun 1999 A
5933627 Parady Aug 1999 A
5937187 Kosche et al. Aug 1999 A
5938736 Muller et al. Aug 1999 A
5940612 Brady et al. Aug 1999 A
5940866 Chisholm et al. Aug 1999 A
5946487 Dangelo Aug 1999 A
5948081 Foster Sep 1999 A
5953336 Moore et al. Sep 1999 A
5958031 Kim Sep 1999 A
5961628 Nguyen et al. Oct 1999 A
5968169 Pickett Oct 1999 A
5970013 Fischer et al. Oct 1999 A
5974518 Nogradi Oct 1999 A
5978838 Mohamed et al. Nov 1999 A
5983274 Hyder et al. Nov 1999 A
5995513 Harrand et al. Nov 1999 A
6012151 Mano Jan 2000 A
6014729 Lannan et al. Jan 2000 A
6023742 Ebeling et al. Feb 2000 A
6032190 Bremer et al. Feb 2000 A
6032218 Lewin et al. Feb 2000 A
6047002 Hartmann et al. Apr 2000 A
6049867 Eickemeyer et al. Apr 2000 A
6058168 Braband May 2000 A
6061710 Eickemeyer et al. May 2000 A
6067300 Baumert et al. May 2000 A
6067585 Hoang May 2000 A
6070231 Ottinger May 2000 A
6072781 Feeney et al. Jun 2000 A
6073215 Snyder Jun 2000 A
6079008 Clery, III Jun 2000 A
6085215 Ramakrishnan et al. Jul 2000 A
6085248 Sambamurthy et al. Jul 2000 A
6085294 Van Doren et al. Jul 2000 A
6092127 Tausheck Jul 2000 A
6092158 Harriman et al. Jul 2000 A
6104700 Haddock et al. Aug 2000 A
6111886 Stewart Aug 2000 A
6112016 MacWilliams et al. Aug 2000 A
6122251 Shinohara Sep 2000 A
6128669 Moriarty et al. Oct 2000 A
6134665 Klein et al. Oct 2000 A
6141677 Hanif et al. Oct 2000 A
6141689 Yasrebi Oct 2000 A
6141765 Sherman Oct 2000 A
6144669 Williams et al. Nov 2000 A
6145054 Mehrotra et al. Nov 2000 A
6157955 Narad et al. Dec 2000 A
6160562 Chin et al. Dec 2000 A
6170051 Dowling Jan 2001 B1
6175927 Cromer et al. Jan 2001 B1
6182177 Harriman Jan 2001 B1
6195676 Spix et al. Feb 2001 B1
6199133 Schnell Mar 2001 B1
6201807 Prasanna Mar 2001 B1
6212542 Kahle et al. Apr 2001 B1
6212544 Borkenhagen et al. Apr 2001 B1
6212604 Tremblay Apr 2001 B1
6212611 Nizar et al. Apr 2001 B1
6216220 Hwang Apr 2001 B1
6223207 Lucovsky et al. Apr 2001 B1
6223238 Meyer et al. Apr 2001 B1
6223243 Ueda et al. Apr 2001 B1
6223274 Catthoor et al. Apr 2001 B1
6223279 Nishimura et al. Apr 2001 B1
6247025 Bacon Jun 2001 B1
6256713 Audityan et al. Jul 2001 B1
6269391 Gillespie Jul 2001 B1
6272109 Pei et al. Aug 2001 B1
6272520 Sharangpani et al. Aug 2001 B1
6272616 Fernando et al. Aug 2001 B1
6275505 O'Loughlin et al. Aug 2001 B1
6279113 Vaidya Aug 2001 B1
6282169 Kiremidjian Aug 2001 B1
6286083 Chin et al. Sep 2001 B1
6289011 Seo et al. Sep 2001 B1
6295600 Parady Sep 2001 B1
6298370 Tang et al. Oct 2001 B1
6307789 Wolrich et al. Oct 2001 B1
6311261 Chamdani et al. Oct 2001 B1
6320861 Adam et al. Nov 2001 B1
6324624 Wolrich et al. Nov 2001 B1
6335932 Kadambi et al. Jan 2002 B2
6338078 Chang et al. Jan 2002 B1
6345334 Nakagawa et al. Feb 2002 B1
6347344 Baker et al. Feb 2002 B1
6349331 Andra et al. Feb 2002 B1
6356962 Kasper et al. Mar 2002 B1
6359911 Movshovich et al. Mar 2002 B1
6360262 Guenthner et al. Mar 2002 B1
6360277 Ruckley et al. Mar 2002 B1
6366998 Mohamed Apr 2002 B1
6373848 Allison et al. Apr 2002 B1
6377998 Noll et al. Apr 2002 B2
6389031 Chao et al. May 2002 B1
6389449 Nemirovsky et al. May 2002 B1
6393026 Irwin May 2002 B1
6393483 Latif et al. May 2002 B1
6404737 Novik et al. Jun 2002 B1
6415338 Habot Jul 2002 B1
6418488 Chilton et al. Jul 2002 B1
6424657 Voit et al. Jul 2002 B1
6424659 Viswanadham et al. Jul 2002 B2
6426940 Seo et al. Jul 2002 B1
6426943 Spinney et al. Jul 2002 B1
6427196 Adiletta et al. Jul 2002 B1
6430626 Witkowski et al. Aug 2002 B1
6434145 Opsasnick et al. Aug 2002 B1
6438132 Vincent et al. Aug 2002 B1
6438134 Chow et al. Aug 2002 B1
6448812 Bacigalupo Sep 2002 B1
6453404 Bereznyi et al. Sep 2002 B1
6457015 Eastham Sep 2002 B1
6463035 Moore Oct 2002 B1
6463072 Wolrich et al. Oct 2002 B1
6463480 Kikuchi et al. Oct 2002 B2
6463527 Vishkin Oct 2002 B1
6466898 Chan Oct 2002 B1
6477562 Nemirovsky et al. Nov 2002 B2
6484224 Robins et al. Nov 2002 B1
6501731 Chong et al. Dec 2002 B1
6507862 Joy et al. Jan 2003 B1
6522188 Poole Feb 2003 B1
6526451 Kasper Feb 2003 B2
6526452 Petersen et al. Feb 2003 B1
6529983 Marshall et al. Mar 2003 B1
6532509 Wolrich et al. Mar 2003 B1
6535878 Guedalia et al. Mar 2003 B1
6552826 Adler et al. Apr 2003 B2
6553406 Berger et al. Apr 2003 B1
6560667 Wolrich et al. May 2003 B1
6570850 Gutierrez et al. May 2003 B1
6577542 Wolrich et al. Jun 2003 B2
6584522 Wolrich et al. Jun 2003 B1
6587906 Wolrich et al. Jul 2003 B2
6604125 Belkin Aug 2003 B1
6606704 Adiletta et al. Aug 2003 B1
6625654 Wolrich et al. Sep 2003 B1
6628668 Hutzli et al. Sep 2003 B1
6629147 Grow Sep 2003 B1
6629236 Aipperspach et al. Sep 2003 B1
6631422 Althaus et al. Oct 2003 B1
6631430 Wolrich et al. Oct 2003 B1
6631462 Wolrich et al. Oct 2003 B1
6657963 Paquette et al. Dec 2003 B1
6658551 Berenbaum et al. Dec 2003 B1
6661774 Lauffenburger et al. Dec 2003 B1
6661794 Wolrich et al. Dec 2003 B1
6665699 Hunter et al. Dec 2003 B1
6665755 Modelski et al. Dec 2003 B2
6667920 Wolrich et al. Dec 2003 B2
6668317 Bernstein et al. Dec 2003 B1
6671827 Guilford et al. Dec 2003 B2
6675190 Schabernack et al. Jan 2004 B1
6675192 Emer et al. Jan 2004 B2
6678746 Russell et al. Jan 2004 B1
6680933 Cheesman et al. Jan 2004 B1
6681300 Wolrich et al. Jan 2004 B2
6684326 Cromer et al. Jan 2004 B1
6694380 Wolrich et al. Feb 2004 B1
6697379 Jacquet et al. Feb 2004 B1
6721325 Duckering et al. Apr 2004 B1
6724767 Chong et al. Apr 2004 B1
6728845 Adiletta Apr 2004 B2
6732187 Lougheed et al. May 2004 B1
6754211 Brown Jun 2004 B1
6754222 Joung et al. Jun 2004 B1
6768717 Reynolds et al. Jul 2004 B1
6775284 Calvignac et al. Aug 2004 B1
6792488 Wolrich et al. Sep 2004 B2
6798744 Loewen et al. Sep 2004 B1
6826615 Barrall et al. Nov 2004 B2
6834053 Stacey et al. Dec 2004 B1
6850521 Kadambi et al. Feb 2005 B1
6856622 Calamvokis et al. Feb 2005 B1
6873618 Weaver Mar 2005 B1
6876561 Wolrich et al. Apr 2005 B2
6895457 Wolrich et al. May 2005 B2
6925637 Thomas et al. Aug 2005 B2
6931641 Davis et al. Aug 2005 B1
6934780 Modelski et al. Aug 2005 B2
6934951 Wilkinson et al. Aug 2005 B2
6938147 Joy et al. Aug 2005 B1
6944850 Hooper et al. Sep 2005 B2
6947425 Hooper et al. Sep 2005 B1
6952824 Hooper et al. Oct 2005 B1
6959002 Wynne et al. Oct 2005 B2
6967963 Houh et al. Nov 2005 B1
6976095 Wolrich et al. Dec 2005 B1
6981077 Modelski et al. Dec 2005 B2
6983350 Wheeler et al. Jan 2006 B1
7006495 Hooper Feb 2006 B2
7065569 Teraslinna Jun 2006 B2
7069548 Kushlis Jun 2006 B2
7096277 Hooper Aug 2006 B2
7100102 Hooper et al. Aug 2006 B2
7111072 Matthews et al. Sep 2006 B1
7111296 Wolrich et al. Sep 2006 B2
7124196 Hooper Oct 2006 B2
7126952 Hooper et al. Oct 2006 B2
7149786 Bohringer et al. Dec 2006 B1
7181742 Hooper Feb 2007 B2
7191321 Bernstein et al. Mar 2007 B2
7206858 Hooper et al. Apr 2007 B2
7248584 Hooper Jul 2007 B2
7305500 Adiletta et al. Dec 2007 B2
7328289 Wolrich et al. Feb 2008 B2
7352769 Hooper et al. Apr 2008 B2
20010023487 Kawamoto Sep 2001 A1
20020027448 Bacigalupo Mar 2002 A1
20020041520 Wolrich et al. Apr 2002 A1
20020075878 Lee et al. Jun 2002 A1
20020118692 Oberman et al. Aug 2002 A1
20020150047 Knight et al. Oct 2002 A1
20020181194 Ho et al. Dec 2002 A1
20030043803 Hooper Mar 2003 A1
20030067934 Hooper et al. Apr 2003 A1
20030086434 Kloth May 2003 A1
20030105901 Wolrich et al. Jun 2003 A1
20030105917 Ostler et al. Jun 2003 A1
20030110166 Wolrich et al. Jun 2003 A1
20030115347 Wolrich et al. Jun 2003 A1
20030115426 Rosenbluth et al. Jun 2003 A1
20030131198 Wolrich et al. Jul 2003 A1
20030140196 Wolrich et al. Jul 2003 A1
20030145159 Adiletta et al. Jul 2003 A1
20030147409 Wolrich et al. Aug 2003 A1
20030161303 Mehrvar et al. Aug 2003 A1
20030161337 Weinman Aug 2003 A1
20030196012 Wolrich et al. Oct 2003 A1
20030210574 Wolrich et al. Nov 2003 A1
20030231635 Kalkunte et al. Dec 2003 A1
20040039895 Wolrich et al. Feb 2004 A1
20040052269 Hooper et al. Mar 2004 A1
20040054880 Bernstein et al. Mar 2004 A1
20040059828 Hooper et al. Mar 2004 A1
20040071152 Wolrich et al. Apr 2004 A1
20040073728 Wolrich et al. Apr 2004 A1
20040073778 Adiletta et al. Apr 2004 A1
20040085901 Hooper et al. May 2004 A1
20040098496 Wolrich et al. May 2004 A1
20040109369 Wolrich et al. Jun 2004 A1
20040148382 Narad et al. Jul 2004 A1
20040162933 Adiletta et al. Aug 2004 A1
20050033884 Wolrich et al. Feb 2005 A1
20050149665 Wolrich et al. Jul 2005 A1
20060007871 Welin Jan 2006 A1
20060069882 Wheeler et al. Mar 2006 A1
20060156303 Hooper et al. Jul 2006 A1
Foreign Referenced Citations (25)
Number Date Country
0 379 709 Aug 1990 EP
0 464 715 Jan 1992 EP
0 633 678 Jan 1995 EP
0 745 933 Dec 1996 EP
0 773 648 May 1997 EP
0 809 180 Nov 1997 EP
0 959 602 Nov 1999 EP
59-111533 Jun 1984 JP
WO 9415287 Jul 1994 WO
WO 9738372 Oct 1997 WO
WO 9820647 May 1998 WO
WO 0038376 Jun 2000 WO
WO 0056024 Sep 2000 WO
WO 0116718 Mar 2001 WO
WO 0116769 Mar 2001 WO
WO 0116770 Mar 2001 WO
WO 0116782 Mar 2001 WO
WO 0117179 Mar 2001 WO
WO 0131856 May 2001 WO
WO 0148596 Jul 2001 WO
WO 0148606 Jul 2001 WO
WO 0148619 Jul 2001 WO
WO 0150247 Jul 2001 WO
WO 0150679 Jul 2001 WO
WO03030461 Apr 2003 WO
Related Publications (1)
Number Date Country
20040252686 A1 Dec 2004 US