The present invention is in the area of integrated circuit microprocessors, and pertains in particular to ordering the activity of a processor in response to receipt and storage of data to be processed.
Microprocessors, as is well-known in the art, are integrated circuit (IC) devices that are enabled to execute code sequences which may be generalized as software. In such execution most microprocessors are capable of both logic and arithmetic operations, and modern microprocessors typically have on-chip resources (functional units) for such processing.
Microprocessors in their execution of software strings typically operate on data that is stored in memory. This data needs to be brought into the memory before the processing is done, and sometimes needs to be sent out to a device that needs it after its processing.
There are in the state-of-the-art two well-known mechanisms to bring data into the memory and send it out to a device when necessary. One mechanism is loading and storing the data through a sequence of Input/Output (I/O) instructions. The other is through a direct-memory access device (DMA).
In the case of a sequence of I/O instructions, the processor spends significant resources in explicitly moving data in and out of the memory. In the case of a DMA system, the processor programs an external hardware circuitry to perform the data transferring. The DMA circuitry performs all of the required memory accesses to perform the data transfer to and from the memory, and sends an acknowledgement to the processor when the transfer is completed.
In both cases of memory management in the art the processor has to explicitly perform the management of the memory, that is, to decide whether the desired data structure fits into the available memory space or does not, and where in the memory to store the data. To make such decisions the processor needs to keep track of the regions of memory wherein useful data is stored, and regions that are free (available for data storage). Once that data is processed, and sent out to another device or location, the region of memory formerly associated with the data is free to be used again by new data to be brought into memory. If a data structure fits into the available memory, the processor needs to decide where the data structure will be stored. Also, depending on the requirements of the processing, the data structure can be stored either consecutively, in which case the data structure must occupy one of the empty regions of memory; or non-consecutively, wherein the data structure may be partitioned into pieces, and the pieces are then stored into two or more empty regions of memory.
An advantage of consecutively storing a data structure into memory is that the accessing of this data becomes easier, since only a pointer to the beginning of the data is needed to access all the data.
When data is not consecutively stored into the memory, access to the data becomes more difficult because the processor needs to determine the explicit locations of the specific bytes it needs. This can be done either in software (i.e. the processor will spend its resources to do this task) or in hardware (using a special circuitry). A drawback of consecutively storing the data into memory is that memory fragmentation occurs. Memory fragmentation happens when the available chunks of memory are smaller than the data structure that needs to be stored, but the addition of the space of the available chunks is larger than the space needed by the data structure. Thus, even though enough space exists in the memory to store the data structure, it cannot be consecutively stored. This drawback does not exist if the data structure is allowed to be non-consecutively stored.
Still, a smart mechanism is needed to generate the lowest number of small regions, since the larger the number of small regions that are used by a data structure, the more complex the access to the data becomes (more specific regions need to be tracked) regardless of whether the access is managed in software or hardware as explained above.
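The bookkeeping described above can be illustrated with a short software sketch. This is a hypothetical illustration only (the function and variable names here are not part of the invention): it first attempts consecutive (first-fit) placement, and, failing that, splits the data structure across the fewest free regions by taking the largest regions first, in keeping with the goal of generating the lowest number of small regions.

```python
# Illustrative sketch of free-region tracking for data storage.
# All names are hypothetical; they do not describe the invention itself.

def allocate(free_regions, size):
    """Try consecutive placement first; otherwise split the structure
    across the fewest regions by taking the largest free regions first.
    free_regions: list of (start, length) tuples.
    Returns a list of (start, length) pieces, or None if it cannot fit."""
    # Consecutive case: first-fit into any single region large enough.
    for i, (start, length) in enumerate(free_regions):
        if length >= size:
            free_regions[i] = (start + size, length - size)
            return [(start, size)]
    # Non-consecutive case: total free space may still suffice even
    # though no single region does (memory fragmentation).
    if sum(length for _, length in free_regions) < size:
        return None
    pieces = []
    remaining = size
    # Largest-first keeps the number of pieces (regions to track) low.
    for start, length in sorted(free_regions, key=lambda r: -r[1]):
        take = min(length, remaining)
        pieces.append((start, take))
        remaining -= take
        if remaining == 0:
            break
    return pieces

# Example of fragmentation: 100 bytes free in total, but split 60 + 40,
# so a 100-byte structure cannot be stored consecutively.
regions = [(0, 60), (200, 40)]
print(allocate(regions, 100))  # → [(0, 60), (200, 40)]
```

Note that the non-consecutive result requires tracking two regions instead of one pointer, which is precisely the access-complexity cost the text describes.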
A related problem in processing data is in the establishment of an order of processing in response to an order of receiving data to be processed. In many cases, data may be received and stored faster than a processor can process the data, and there is often good reason for processing data in an order different from the order in which the data is received. In the current art, for a processor to take priorities into account in the order in which it processes data, the processor has to expend resources on checking the nature of the data (priorities) and in re-ordering the sequence in which it will process the data.
What is clearly needed is a background system for tracking data receipt and storage for a processor system, and for ordering events for the processor.
To address the above-detailed deficiencies, it is an object of the present invention to provide a background event buffer manager (BEBM) for ordering and accounting for events in a data processing system. The BEBM offloads the responsibility of acknowledging packet processing to a device to thereby improve overall packet processing.
In one aspect, the present invention provides a processing system for processing packets received from a device, the packets having a plurality of priorities, the device requiring acknowledgements (ACKS) according to predetermined restrictions associated with the priorities of the packets. The processing system includes a processor, for processing the packets; a memory, coupled to the processor, for storing the packets; a background memory manager (BMM), coupled to the memory, for performing memory management of the memory; and a background event buffer manager (BEBM), coupled to the processor and to the BMM, the BEBM having a plurality of queues for queuing the packets according to their priorities. In one aspect, the BEBM manages the ACKS according to the predetermined restrictions.
In another aspect, the present invention provides a packet router for processing packets received from a device, the packets having a plurality of priorities, the device requiring acknowledgements (ACK's) for the packets according to predetermined restrictions related to the priorities of the packets. The router includes a processor for processing the packets; a memory, coupled to the processor, for storing the packets; and a background event buffer manager (BEBM), coupled to the processor, for managing the ACK's to the device. The BEBM includes a plurality of queues for storing the packets, the plurality of queues also storing priorities associated with the packets; wherein the BEBM tracks a plurality of ACK states for the packets, according to their stage of processing by the processor.
In yet another aspect, the present invention provides a method for managing acknowledgements (ACK's) between a packet router and a device, including: providing a plurality of ACK states for packets received by the router, indicating the stage of processing of associated packets; providing a buffer manager for queuing packets received from the device, and for determining their priority; tracking which of the ACK states received packets are in, and sending ACK's to the device according to the ACK states; and ensuring that the ACK's sent to the device are within predetermined restrictions imposed by the device.
In the system of
In these descriptions of prior art the skilled artisan will recognize that paths 204,206 and 208 are virtual representations, and that actual data transmission may be by various physical means known in the art, such as by parallel and serial bus structures operated by bus managers and the like, the bus structures interconnecting the elements and devices shown.
The present invention in several embodiments is applicable in a general way to many computing processes and apparatus. For example, in a preferred embodiment the invention is applicable and advantageous in the processing of data packets at network nodes, such as in packet routers in the Internet. The packet processing example is used below as a specific example of practice of the present invention to specifically describe apparatus, connectivity and functionality.
In the embodiment of a packet router, device 106 represents input/output apparatus and temporary storage of packets received from and transmitted on a network over path 308. The network in one preferred embodiment is the well-known Internet network. Packets received from the Internet in this example are retrieved from device 106 by BMM 302, which also determines whether packets can fit into available regions in memory and exactly where to store each packet, and stores the packets in memory 102, where they are available to processor 100 for processing. Processor 100 places results of processing back in memory 102, where the processed packets are retrieved, if necessary, by BMM on path 312 and sent back out through device 106.
In the embodiment of
In another aspect of the invention methods and apparatus are provided for ordering events for a processor other than the order in which data might be received to be processed, and without expenditure of significant processor resources.
In the teachings above relative to background memory management an example of packet routing in networks such as the Internet was used extensively. The same example of Internet packet traffic is particularly useful in the present aspect of event managing for a processor, and is therefore continued in the present teaching.
In a communication session established over the Internet between any two sites there will be an exchange of a large number of packets. For the purpose of the present discussion we need to consider only flow of packets in one direction, for which we may select either of the two sites as the source and the other as the destination. In this example packets are generated by the source, and received at the destination. It is important that the packets be received at the destination in the same order as they are generated and transmitted at the source, and, if the source and destination machines were the only two machines involved with the packet flow, and all packets in the flow were to travel by the same path, there would be no problem. Packets would necessarily arrive in the order sent.
Unfortunately packets from a source to a destination may flow through a number of machines and systems on the way from source to destination, and there are numerous opportunities for packets to get disordered. Moreover, the machines handling packets at many places in the Internet are dealing with large numbers of sources and destinations, and therefore with a large number of separate packet flows, which are termed microflows in the art. It will be apparent to the skilled artisan that packets from many different microflows may be handled by a single router, and the packets may well be intermixed while the packets for each separate microflow are still in order. That is, packets from one microflow may be processed, then packets from a second and third microflow, and then more packets from the first microflow, while if only packets from one microflow are considered the flow is sequential and orderly.
The problems that can occur if microflows are allowed to be disordered are quite obvious. If a particular microflow is for an Internet telephone conversation, for example, and the flow gets out-of-order the audio rendering may be seriously affected. Systems for Internet communication are, of course, provided in the art with re-ordering systems for detecting disordered microflows, and re-ordering the packets at the destination, but such systems require a considerable expenditure of processing resources, and, in some cases, packets may be lost or discarded.
It will also be apparent to the skilled artisan that packets from a source to a destination typically pass through and are processed by a number of different machines along the way from source to destination. System 407 illustrated in
Referring now to
Now, it is well known that packets are not necessarily received in a steady flow, but may be received in bursts. Still, BMM 302 in the case of the system of
In a somewhat more general sense the process just described, sans BEBM, can be described as follows:
In some applications a processor needs to perform some kind of processing when an event generated by a device occurs, and it has to notify that device when the processing of the event is completed (henceforth this notification is named ack, for acknowledge).
An event e generated by a device may have associated with it a priority p (henceforth named type priority). Within a type priority, events can have different priorities q (henceforth named event priority). The device may generate the events in any order. However, it may impose some restrictions on the ordering of the corresponding acks. One reason this can happen is that the device may know neither the type priority nor the event priority of the events it generates, and therefore relies on the processing of the events to determine the type and/or event priorities. Thus, it may request that the acks of the highest priority events be received first.
More specifically, let Gen(e) be the time when event e was generated by the device; Gen(e1)<Gen(e2) indicates that event e1 was generated before event e2. Let Ack(e) be the time when the ack is generated for event e by the processor; Ack(e1)<Ack(e2) indicates that the ack for event e1 was generated before the ack for event e2. Let e(p) and e(q) be the type priority and event priority, respectively, of event e.
The following are examples of restrictions that the device can impose on the ordering of the acks generated by the processor. The device might request, for example, that
Ack(e1)<Ack(e2) when Gen(e1)<Gen(e2) (a)
Acks are generated in the same order that the corresponding events occurred, independently of the type and event priority of the events.
Gen(e1)<Gen(e2) AND e1(p)=e2(p) (b)
Acks for the events of the same type priority are generated in the same order that the events were generated;
e1(p)>e2(p) (c)
Acks for the events with highest type priority (that the processor is currently aware of) are generated first.
e1(q)>e2(q). (d)
Acks for the events of the highest event priority (of which the processor is currently aware) are generated first.
e1(q)>e2(q) AND e1(p)>e2(p) (e)
Acks for the events with highest event priority in the highest type priority (of which the processor is currently aware) are generated first.
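The restrictions above can be expressed as simple predicates over pairs of events. The following sketch is illustrative only (the tuple encoding and function names are hypothetical, not part of the invention); each function answers whether the ack for e1 must be generated before the ack for e2 under the corresponding restriction:

```python
# Each event is modeled as a tuple (gen, p, q): generation time,
# type priority, and event priority. Encoding is illustrative only.

def must_ack_first_a(e1, e2):
    # (a) acks in generation order, regardless of priorities
    return e1[0] < e2[0]

def must_ack_first_b(e1, e2):
    # (b) generation order, but only among events of the same type priority
    return e1[0] < e2[0] and e1[1] == e2[1]

def must_ack_first_c(e1, e2):
    # (c) events with the highest type priority are acked first
    return e1[1] > e2[1]

def must_ack_first_d(e1, e2):
    # (d) events with the highest event priority are acked first
    return e1[2] > e2[2]

def must_ack_first_e(e1, e2):
    # (e) highest event priority within the highest type priority first
    return e1[2] > e2[2] and e1[1] > e2[1]

early_high = (1, 2, 5)  # generated first; type priority 2, event priority 5
late_low = (3, 1, 1)    # generated later; lower type and event priorities
```

Note that restriction (b) does not constrain the ordering of acks across different type priorities, which is why `must_ack_first_b(early_high, late_low)` is false for the pair above.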
In any case, the goal of the processor is to generate the acks as soon as possible to increase the throughput of processed events. The processor can dedicate its resources to guarantee the generation of acks following the restrictions mentioned above. However, the amount of time the processor dedicates to this task can be significant, thus diminishing the performance of the processor on the processing of the events. Moreover, depending on how frequently these events occur, and the amount of processing that each event takes, the processor may not be able to start processing them at the time they occur. Therefore, the processor will need to buffer the events and process them later.
The skilled artisan will surely recognize that the ordering of and accounting for events, as described herein, is a considerable and significant processor load.
In preferred embodiments of the present invention, managing of the ordering of the acks is implemented in hardware and is accomplished in the background (i.e. while the processor is performing the processing of other events).
The system at the heart of embodiments of the present invention is called by the inventors a Background Event Buffer Manager (BEBM), and this is element 401 in
The BEBM performs the following tasks:
When an event is buffered in the BEBM (task 1), its corresponding ack, also buffered with the event, is in the processor-not-aware state, meaning that the processor has no knowledge of this event yet and, therefore, has yet to start processing it. When the event is presented to the processor (task 2), its ack state transitions to processor-aware, meaning that the processor has started the processing of the event but has not yet finished. When the processor finishes this processing, it notifies the BEBM of this fact and the state of the ack becomes ready.
At this point, the ack becomes a candidate to be sent out to the device. When the ack state becomes processor-aware, the information associated with the event (type priority and event priority) may or may not be provided to the processor, depending on whether the device that generated the event also generated this information. The processor can determine this information during the processing of the event, and override the information sent by the device if needed. This information can be sent to the BEBM through some communication mechanism. The BEBM records this information, which is used to guarantee task 4.
In case the processor does not communicate this information to the BEBM, the original information provided by the device, if any, or some default information will be used to guarantee task 4. In any case, the processor always needs to communicate when the processing of an event has completed.
When the processor finishes processing an event the state of the ack associated with the event ID is changed to “ready”, as previously described. The BEBM also buffers the transmission of acks back to the device that generated the events, because the device may be busy with other tasks. As the device becomes capable of processing the acks, the BEBM sends them to the device.
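The three ack states described above form a small per-event state machine. The following is a software model of those transitions for illustration only (the class and method names are hypothetical; in the invention these transitions are tracked by the BEBM hardware, in the background):

```python
# Software model of the per-event ack states described in the text.
# State names follow the text; class and method names are hypothetical.

PROCESSOR_NOT_AWARE = "processor-not-aware"  # buffered; processing not started
PROCESSOR_AWARE = "processor-aware"          # presented to the processor
READY = "ready"                              # processing done; ack may be sent

class AckRecord:
    def __init__(self, event_id):
        self.event_id = event_id
        self.state = PROCESSOR_NOT_AWARE  # event buffered (task 1)

    def present_to_processor(self):
        assert self.state == PROCESSOR_NOT_AWARE
        self.state = PROCESSOR_AWARE      # processing started (task 2)

    def processing_done(self):
        assert self.state == PROCESSOR_AWARE
        self.state = READY                # ack is now a candidate to be sent

rec = AckRecord(event_id=42)
rec.present_to_processor()
rec.processing_done()
print(rec.state)  # ready
```

Only acks in the ready state are eligible for transmission, and the BEBM then sends them as the device becomes able to accept them, subject to the ordering restrictions.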
Ideally, the buffering function of the BEBM is divided into as many queues (blocks) as different types of priorities exist, and each block has as many entries as needed to buffer all the events that might be encountered, which may be determined by the nature of the hardware and application. In the instant example three queues are shown, labeled 1, 2 and P, to represent an indeterminate number. In a particular implementation of the BEBM, however, the amount of buffering will be limited; therefore, several priority types might share the same block, and/or a block might become full, so that no more events will be accepted. This limitation will affect how fast the acks are generated, but it will not affect task 4.
In the context of the primary example in this specification, that of a packet processing engine, the data structures provided by device 106 are network packets, and the events are therefore the fact of receipt of new packets to be processed. There will typically be packets of different types, which may have type priorities, and within types there may also be event priorities. The BEBM therefore maintains queues for different packet types, and offers the queues to the processor by priority; and also orders the packets in each type queue by priority (which may be simply the order received).
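A minimal software model of this queuing policy can be sketched as follows. It is illustrative only (the class name and encoding are hypothetical; the BEBM realizes this in hardware): one FIFO queue is kept per type priority, ordering within a queue is simply the order received, and the processor is always offered the non-empty queue of highest type priority.

```python
from collections import deque

# One FIFO queue per type priority; the highest type priority is
# served first. Within a queue, arrival order is preserved, which
# the text allows as the event priority. Names are hypothetical.

class TypePriorityQueues:
    def __init__(self, num_types):
        self.queues = [deque() for _ in range(num_types)]

    def enqueue(self, type_priority, packet):
        self.queues[type_priority].append(packet)

    def next_packet(self):
        # Offer the processor the highest-priority non-empty queue.
        for q in reversed(self.queues):  # index = type priority
            if q:
                return q.popleft()
        return None

qs = TypePriorityQueues(num_types=3)
qs.enqueue(0, "low-1")
qs.enqueue(2, "high-1")
qs.enqueue(0, "low-2")
print(qs.next_packet())  # high-1
print(qs.next_packet())  # low-1 (FIFO within a type)
```

In a hardware implementation the number of queues and entries is fixed, which is the buffering limitation noted above.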
In the instant example, referring now to
It will be apparent to the skilled artisan that there may be many alterations in the embodiments described above without departing from the spirit and scope of the present invention. For example, a specific case of operations in a data packet router has been illustrated. This is a single instance of a system wherein the invention may provide significant advantages. There are many other systems and processes that will benefit as well. Further, there are a number of ways a BEBM and BMM may be implemented, either alone or together, to perform the functionality described above, and there are many systems incorporating many different kinds of processors that might benefit. The present inventors are particularly interested in a system wherein a dynamic multi-streaming processor performs the functions of processor 100. For these and other reasons the invention should be limited only by the scope of the claims below.
This application is a continuation of U.S. patent application Ser. No. 09/608,750 entitled METHODS AND APPARATUS FOR MANAGING A BUFFER OF EVENTS IN THE BACKGROUND, having a common assignee and common inventors, and filed on Jun. 30, 2000, now U.S. Pat. No. 7,032,226 which is a Continuation-In-Part of U.S. application Ser. No. 09/602,279 filed Jun. 23, 2000 now U.S. Pat. No. 7,502,876.
Number | Name | Date | Kind |
---|---|---|---|
4200927 | Hughes et al. | Apr 1980 | A |
4707784 | Ryan et al. | Nov 1987 | A |
4942518 | Weatherford et al. | Jul 1990 | A |
5023776 | Gregor | Jun 1991 | A |
5121383 | Golestani | Jun 1992 | A |
5166674 | Baum et al. | Nov 1992 | A |
5271000 | Engbersen et al. | Dec 1993 | A |
5291481 | Doshi et al. | Mar 1994 | A |
5408464 | Jurkevich | Apr 1995 | A |
5465331 | Yang et al. | Nov 1995 | A |
5471598 | Quattromani et al. | Nov 1995 | A |
5521916 | Choudhury et al. | May 1996 | A |
5559970 | Sharma | Sep 1996 | A |
5619497 | Gallagher et al. | Apr 1997 | A |
5629937 | Hayter et al. | May 1997 | A |
5634015 | Chang et al. | May 1997 | A |
5659797 | Zandveld et al. | Aug 1997 | A |
5675790 | Walls | Oct 1997 | A |
5684797 | Aznar et al. | Nov 1997 | A |
5708814 | Short et al. | Jan 1998 | A |
5724565 | Dubey et al. | Mar 1998 | A |
5737525 | Picazo et al. | Apr 1998 | A |
5784649 | Begur et al. | Jul 1998 | A |
5784699 | McMahon et al. | Jul 1998 | A |
5796966 | Simcoe et al. | Aug 1998 | A |
5809321 | Hansen et al. | Sep 1998 | A |
5812810 | Sager | Sep 1998 | A |
5835491 | Davis et al. | Nov 1998 | A |
5892966 | Petrick et al. | Apr 1999 | A |
5893077 | Griffin | Apr 1999 | A |
5896517 | Wilson | Apr 1999 | A |
5918050 | Rosenthal et al. | Jun 1999 | A |
5951670 | Glew et al. | Sep 1999 | A |
5951679 | Anderson et al. | Sep 1999 | A |
5978570 | Hillis | Nov 1999 | A |
5978893 | Bakshi et al. | Nov 1999 | A |
5983005 | Monteiro et al. | Nov 1999 | A |
5987578 | Butcher | Nov 1999 | A |
6009516 | Steiss et al. | Dec 1999 | A |
6016308 | Crayford et al. | Jan 2000 | A |
6023738 | Priem et al. | Feb 2000 | A |
6047122 | Spiller | Apr 2000 | A |
6058267 | Kanai et al. | May 2000 | A |
6067608 | Perry | May 2000 | A |
6070202 | Minkoff et al. | May 2000 | A |
6073251 | Jewett et al. | Jun 2000 | A |
6088745 | Bertagna et al. | Jul 2000 | A |
6131163 | Wiegel | Oct 2000 | A |
6151644 | Wu | Nov 2000 | A |
6155840 | Sallette | Dec 2000 | A |
6157955 | Narad et al. | Dec 2000 | A |
6169745 | Liu et al. | Jan 2001 | B1 |
6173327 | De Borst et al. | Jan 2001 | B1 |
6188699 | Lang et al. | Feb 2001 | B1 |
6195680 | Goldszmidt et al. | Feb 2001 | B1 |
6219339 | Doshi et al. | Apr 2001 | B1 |
6219783 | Zahir et al. | Apr 2001 | B1 |
6223274 | Catthoor et al. | Apr 2001 | B1 |
6226680 | Boucher et al. | May 2001 | B1 |
6247040 | Born et al. | Jun 2001 | B1 |
6247105 | Goldstein et al. | Jun 2001 | B1 |
6249801 | Zisapel et al. | Jun 2001 | B1 |
6249846 | Van Doren et al. | Jun 2001 | B1 |
6253313 | Morrison et al. | Jun 2001 | B1 |
6263452 | Jewett et al. | Jul 2001 | B1 |
6377972 | Guo et al. | Apr 2002 | B1 |
6381242 | Maher, III et al. | Apr 2002 | B1 |
6389468 | Muller et al. | May 2002 | B1 |
6393028 | Leung | May 2002 | B1 |
6412004 | Chen et al. | Jun 2002 | B1 |
6438135 | Tzeng | Aug 2002 | B1 |
6453360 | Muller et al. | Sep 2002 | B1 |
6460105 | Jones et al. | Oct 2002 | B1 |
6477562 | Nemirovsky et al. | Nov 2002 | B2 |
6477580 | Bowman-Amuah | Nov 2002 | B1 |
6483804 | Muller et al. | Nov 2002 | B1 |
6502213 | Bowman-Amuah | Dec 2002 | B1 |
6523109 | Meier | Feb 2003 | B1 |
6529515 | Raz et al. | Mar 2003 | B1 |
6535905 | Kalafatis et al. | Mar 2003 | B1 |
6539476 | Marianetti et al. | Mar 2003 | B1 |
6546366 | Ronca et al. | Apr 2003 | B1 |
6549996 | Manry, IV et al. | Apr 2003 | B1 |
6567839 | Borkenhagen et al. | May 2003 | B1 |
6574230 | Almulhem et al. | Jun 2003 | B1 |
6581102 | Amini et al. | Jun 2003 | B1 |
6611724 | Buda et al. | Aug 2003 | B1 |
6614796 | Black et al. | Sep 2003 | B1 |
6618820 | Krum | Sep 2003 | B1 |
6625808 | Tarditi | Sep 2003 | B1 |
6640248 | Jorgensen | Oct 2003 | B1 |
6650640 | Muller et al. | Nov 2003 | B1 |
6721794 | Taylor et al. | Apr 2004 | B2 |
6738371 | Ayres | May 2004 | B1 |
6738378 | Tuck, III et al. | May 2004 | B2 |
6792509 | Rodriguez | Sep 2004 | B2 |
6813268 | Kalkunte et al. | Nov 2004 | B1 |
6820087 | Langendorf et al. | Nov 2004 | B1 |
6965982 | Nemawarkar | Nov 2005 | B2 |
7032226 | Nemirovsky et al. | Apr 2006 | B1 |
7042887 | Sampath et al. | May 2006 | B2 |
7058065 | Musoll et al. | Jun 2006 | B2 |
7065096 | Musoll et al. | Jun 2006 | B2 |
7072972 | Chin et al. | Jul 2006 | B2 |
7165257 | Musoll et al. | Jan 2007 | B2 |
7274659 | Hospodor | Sep 2007 | B2 |
7502876 | Nemirovsky et al. | Mar 2009 | B1 |
20010004755 | Levy et al. | Jun 2001 | A1 |
20010005253 | Komatsu | Jun 2001 | A1 |
20010024456 | Zaun et al. | Sep 2001 | A1 |
20010043610 | Nemirovsky et al. | Nov 2001 | A1 |
20010052053 | Nemirovsky et al. | Dec 2001 | A1 |
20020016883 | Musoll et al. | Feb 2002 | A1 |
20020049964 | Takayama et al. | Apr 2002 | A1 |
20020054603 | Musoll et al. | May 2002 | A1 |
20020062435 | Nemirovsky et al. | May 2002 | A1 |
20020071393 | Musoll | Jun 2002 | A1 |
20020083173 | Musoll et al. | Jun 2002 | A1 |
20020107962 | Richter et al. | Aug 2002 | A1 |
20020124262 | Basso et al. | Sep 2002 | A1 |
20020174244 | Beckwith et al. | Nov 2002 | A1 |
20030210252 | Ludtke et al. | Nov 2003 | A1 |
20040015598 | Lin | Jan 2004 | A1 |
20040049570 | Frank et al. | Mar 2004 | A1 |
20040148382 | Narad et al. | Jul 2004 | A1 |
20040148420 | Hinshaw et al. | Jul 2004 | A1 |
20040172471 | Porter | Sep 2004 | A1 |
20040172504 | Balazich et al. | Sep 2004 | A1 |
20040213251 | Tran et al. | Oct 2004 | A1 |
20050060427 | Phillips et al. | Mar 2005 | A1 |
20050061401 | Tokoro et al. | Mar 2005 | A1 |
20050066028 | Illikkal et al. | Mar 2005 | A1 |
20060036705 | Musoll et al. | Feb 2006 | A1 |
20060090039 | Jain et al. | Apr 2006 | A1 |
20060153197 | Nemirovsky et al. | Jul 2006 | A1 |
20060159104 | Nemirovsky et al. | Jul 2006 | A1 |
20060215670 | Sampath et al. | Sep 2006 | A1 |
20060215679 | Musoll et al. | Sep 2006 | A1 |
20060282544 | Monteiro et al. | Dec 2006 | A1 |
20060292292 | Brightman et al. | Dec 2006 | A1 |
20070008989 | Joglekar | Jan 2007 | A1 |
20070074014 | Musoll et al. | Mar 2007 | A1 |
20070110090 | Musoll et al. | May 2007 | A1 |
20070168748 | Musoll | Jul 2007 | A1 |
20070256079 | Musoll et al. | Nov 2007 | A1 |
20080072264 | Crayford | Mar 2008 | A1 |
Number | Date | Country |
---|---|---|
07-273773 | Oct 1995 | JP |
08-316968 | Nov 1996 | JP |
09-238142 | Sep 1997 | JP |
11-122257 | Apr 1999 | JP |
WO 9959078 | Nov 1999 | WO |
WO 03005645 | Jun 2002 | WO |
Number | Date | Country | |
---|---|---|---|
20060225080 A1 | Oct 2006 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 09608750 | Jun 2000 | US |
Child | 11278747 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 09602279 | Jun 2000 | US |
Child | 09608750 | US |