A. Field of the Invention
The present invention relates generally to data processing systems and, more particularly, to systems and methods for preserving the order of blocks of data processed by multiple processing paths in a data processing system.
B. Description of Related Art
In a data processing or communications system that must deliver high throughput in processing or communicating a stream of data, a conventional point-to-point approach is to provide n independent paths and distribute sub-streams of the data down each of the n paths. After processing by each of the n processing paths, the sub-streams are recombined to create an output stream. A problem that arises using this technique is that the different processing paths may have different delays. As a result, if a first block of data (e.g., a packet or cell) is sent down a first path at time t1 and a second block of data is sent down a second path at time t2>t1, the second block of data may nonetheless finish being processed before the first. Therefore, if nothing is done to correct for this differential delay, the recombined stream of data will be out-of-order relative to the input stream. Out-of-order blocks of data can be problematic in a number of data processing applications.
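To make the effect concrete, the following toy example (the two-path topology, delay values, and block numbering are invented for illustration and are not part of the above description) distributes numbered blocks round-robin over a slow path and a fast path and recombines them in completion order:

```python
# Hypothetical per-path processing delays: path 0 is slow, path 1 is fast.
PATH_DELAY = [5.0, 1.0]

def recombine_by_completion(blocks):
    """Distribute blocks round-robin over two paths and emit them in the
    order they finish processing, with no correction for differential delay."""
    finished = []
    for i, block in enumerate(blocks):
        path = i % len(PATH_DELAY)        # simple round-robin distribution
        send_time = float(i)              # block i enters the system at time i
        finished.append((send_time + PATH_DELAY[path], block))
    finished.sort()                       # emit in completion-time order
    return [block for _, block in finished]

print(recombine_by_completion([0, 1, 2, 3, 4, 5]))
# -> [1, 3, 0, 5, 2, 4]: the recombined stream is out of order
```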
Out-of-order blocks of data are particularly difficult to prevent when there are R input streams, each connected to n processing paths, each of which is further connected to S output streams. In this “any-to-any” situation, different blocks of data from an input stream can be destined for different output streams. The blocks of data of each input stream are, thus, distributed across the processing paths and then concentrated back to the desired output stream. There are well-known algorithms for restoring order to mis-ordered streams at recombination time, based on attaching sequence numbers to consecutive blocks at input, and sorting blocks to restore consecutive sequence numbers on output. However, in the any-to-any application, a given output will not receive all sequence numbers from a given input, making the standard sorting algorithms impractical.
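The difficulty can be seen in a small sketch (the stream-to-output assignments below are invented for illustration): when consecutive sequence numbers of one input stream fan out to different outputs, each output observes a gapped subsequence and cannot simply wait for numbers it will never receive.

```python
# Blocks from one input stream, tagged with consecutive sequence numbers
# but destined for different outputs (destinations are illustrative).
blocks = [(0, "out_A"), (1, "out_B"), (2, "out_A"), (3, "out_B"), (4, "out_A")]

seen_by_output = {}
for seq, dest in blocks:
    seen_by_output.setdefault(dest, []).append(seq)

print(seen_by_output)
# {'out_A': [0, 2, 4], 'out_B': [1, 3]}
# out_A never receives sequence numbers 1 or 3, so it cannot stall until the
# "missing" consecutive numbers arrive -- they never will.
```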
Therefore, there exists a need for systems and methods that preserve the order of blocks of data in data streams that have been distributed across multiple paths in a data processing system.
Systems and methods, consistent with the present invention, address this and other needs by providing mechanisms for queuing packets received in a first order from multiple parallel packet processors and re-ordering the queued packets in accordance with a determined maximum differential delay between each of the packet processors.
In accordance with the purpose of the invention as embodied and broadly described herein, a method for preserving the order of blocks of data in multiple data streams transmitted across multiple processing paths includes receiving the blocks of data on the multiple data streams; distributing the blocks of data to the multiple processing paths; receiving the blocks of data processed by the multiple processing paths; ordering the processed blocks of data based on a determined maximum differential processing time among the multiple processing paths; and transmitting the ordered blocks of data on outgoing data streams.
In another implementation consistent with the present invention, a method for preserving the order of blocks of data in multiple data streams processed by multiple processing paths includes receiving the blocks of data on the multiple data streams; distributing the blocks of data to the multiple processing paths; processing, by the multiple processing paths, the blocks of data; selectively queuing and dequeuing the processed blocks of data based on a determined maximum differential delay among each of the processing paths; and transmitting the dequeued blocks of data.
In yet another implementation consistent with the present invention, a method for preserving the order of data blocks in data streams processed by multiple processing paths includes receiving the data blocks on the multiple data streams, the data blocks arriving in a first order; distributing the data blocks to the multiple processing paths; processing, by the multiple processing paths, the data blocks; receiving the processed data blocks from the multiple processing paths, the data blocks arriving in a second order; queuing each of the data blocks; and dequeuing each of the queued data blocks in the first order based on each data block's time of receipt from the multiple processing paths and a determined maximum differential delay time among the multiple processing paths.
In a further implementation consistent with the present invention, a method for preserving the order of packets in multiple data streams received at a data processing system includes receiving the blocks of data on the multiple data streams, the blocks of data being received in a first order; distributing the blocks of data to multiple processing paths; processing, on each of the multiple processing paths, the blocks of data; receiving the blocks of data from the multiple processing paths, the blocks of data being received in a second order; arranging the processed blocks of data in the first order based on a determined maximum differential delay among the multiple processing paths; and transmitting the arranged blocks of data.
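Although not spelled out above, a short argument shows why a single bound on differential processing delay is sufficient for restoring the first order (the notation here is introduced only for this explanation). Suppose two blocks of the same input stream enter the processing paths at times t_1 < t_2, every path delay lies between d_min and d_max, and the maximum differential delay is max_d = d_max - d_min. Then the first block's arrival time at the output, r_1, satisfies

r_1 <= t_1 + d_max < t_2 + d_max = t_2 + d_min + max_d <= r_2 + max_d,

where r_2 is the second block's arrival time. In other words, once the later block has been held for max_d after its own arrival, every block of its stream that entered the paths earlier has already arrived, so releasing each stream's blocks in sequence-number order after that hold reproduces the order in which they were received.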
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, explain the invention. In the drawings,
The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents.
Systems and methods, consistent with the present invention, provide mechanisms for queuing blocks of data received in a first order from multiple processing paths and re-ordering the queued blocks of data in accordance with a determined maximum differential delay between each of the processing paths.
Processing paths 110 may include any number of devices that may independently process blocks of data received from any one of system input circuits 105. Such devices may be connected in series and/or parallel and may include multiple processors, switch fabrics, and/or packet routers. Each system output circuit 115 may include circuitry for re-ordering blocks of data received from the n processing paths 110 and outputting the re-ordered blocks of data as an outgoing data stream.
As illustrated in
Returning to
Each system input circuit 105 may then send each received data block across one of the n processing paths 110 according to a conventional scheme [step 715]. For example, each system input circuit 105 may transmit each received data block according to a scheme that balances the load across each of the n processing paths 110. Importantly, each system input circuit 105 does not need to have information about the destination of a data block before selecting a processing path on which to send that data block. The determination of which of the S system output circuits 115 will be the destination of the data block is performed by one of the n processing paths 110 (the one to which the input circuit sends the data block). The selected system output circuit 115 may receive each data block subsequent to its processing by one of the n processing paths 110 [step 720]. Each selected system output circuit 115 may then re-order the received data blocks using order-restoring processes consistent with the present invention, such as, for example, the exemplary process described with regard to
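As a rough illustration of this input-side behavior, the sketch below is a reconstruction rather than the disclosed implementation: the tag fields and the shortest-queue load metric are assumptions, chosen only to show a path being selected with no knowledge of the block's eventual destination.

```python
import itertools

class InputCircuit:
    """Sketch of a system input circuit 105: it picks a processing path by
    load alone; the destination output circuit is resolved later, inside the
    chosen processing path."""

    def __init__(self, stream_number, paths):
        self.stream_number = stream_number
        self.paths = paths                       # list of per-path queues
        self.seq = itertools.count()             # per-stream sequence numbers

    def send(self, data_block):
        # Load-balancing choice: here, simply the path with the shortest queue.
        path = min(self.paths, key=len)
        path.append({
            "stream": self.stream_number,        # which input stream this is
            "seq": next(self.seq),               # consecutive per-stream number
            "payload": data_block,
        })

paths = [[] for _ in range(4)]                   # n = 4 processing paths
circuit = InputCircuit(stream_number=1, paths=paths)
for payload in ("a", "b", "c"):
    circuit.send(payload)
print([len(p) for p in paths])                   # blocks spread across the paths
```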
To begin processing, controller 305 may receive a data block from a processing path of processing paths 110 [step 805].
Controller 305 may periodically retrieve the next time stamp (t_timestamp) and stream number 2-tuple 605 from the front of FIFO queue 320 and may send the time stamp to comparator 330 [act 905]. Comparator 330 may then determine whether the current time (t_current) exceeds the time stamp by more than the maximum differential delay (max_d) among the n processing paths 110:
t_current > t_timestamp + max_d      Eqn. (1)
If t_current is greater than the sum of t_timestamp and max_d, then comparator 330 signals the appropriate one of priority encoders 410 to select the smallest sequence number present in its corresponding array, in round-robin fashion, and to update its associated round-robin pointer 515 with the selected sequence number [act 915]. For example, the appropriate priority encoder 410 may select sequence numbers in the following round-robin sequence: {SEQ. NO. x, SEQ. NO. x+1, . . . , SEQ. NO. x+K−1}. Controller 305 may then retrieve the data block pointer from the array corresponding to the retrieved stream number, at the array entry whose sequence number equals the round-robin pointer [act 920]. For example, if the 2-tuple 605 retrieved from FIFO queue 320 contains input stream number 1 and priority encoder 410 selects a sequence number equaling the base sequence number plus a value such as 3 (base_seq_x+3), then controller 305 retrieves data block pointer db_pointer_AAx+3 from array 1 (505a). Controller 305 then may retrieve a data block from buffer 315 using the data block pointer retrieved from the selected array 505 [act 925]. Controller 305 may then send the retrieved data block to the transmit interface(s) (not shown) for transmission [act 930].
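Pulling acts 905 through 930 together, the sketch below reconstructs one plausible software rendering of this re-ordering logic. It is not the patented hardware: the enqueue path (timestamping a block when it arrives from a processing path and recording its pointer) is inferred from the data structures named above, K is an assumed per-stream array depth, MAX_D is a placeholder value for the maximum differential delay, and the priority encoder's selection of the smallest outstanding sequence number is reduced to a per-stream counter, which is equivalent here because each stream's sequence numbers are consecutive.

```python
import time
from collections import deque

K = 64          # assumed number of sequence-number slots per per-stream array
MAX_D = 0.005   # assumed maximum differential delay among the paths (seconds)

class OutputCircuit:
    """Sketch of a system output circuit 115: buffer 315 holds data blocks,
    FIFO queue 320 holds (timestamp, stream) 2-tuples, arrays 505 hold data
    block pointers indexed by sequence number modulo K, and round-robin
    pointers 515 track the next sequence number to release per stream."""

    def __init__(self, num_streams):
        self.buffer = {}                                         # buffer 315
        self.fifo = deque()                                      # FIFO queue 320
        self.arrays = [[None] * K for _ in range(num_streams)]   # arrays 505
        self.rr_pointer = [0] * num_streams                      # pointers 515

    def on_block_from_path(self, stream, seq, block):
        """Enqueue side (inferred): timestamp the block on arrival from a
        processing path, store it, and record where its pointer lives."""
        pointer = (stream, seq)
        self.buffer[pointer] = block
        self.arrays[stream][seq % K] = pointer
        self.fifo.append((time.monotonic(), stream))

    def dequeue_ready_blocks(self):
        """Dequeue side (acts 905-930): release a block only after MAX_D has
        elapsed since an arrival for its stream, then pick that stream's next
        sequence number in round-robin order."""
        out = []
        while self.fifo and time.monotonic() > self.fifo[0][0] + MAX_D:  # Eqn. (1)
            _, stream = self.fifo.popleft()
            seq = self.rr_pointer[stream]
            self.rr_pointer[stream] = seq + 1        # advance round-robin pointer
            pointer = self.arrays[stream][seq % K]
            self.arrays[stream][seq % K] = None
            out.append(self.buffer.pop(pointer))     # fetch block from buffer 315
        return out
```

The while-loop guard corresponds directly to Eqn. (1): a block is released only once it can no longer be overtaken by an earlier block of the same stream still in a slower processing path.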
Systems and methods, consistent with the present invention, provide mechanisms for preserving the order of blocks of data transmitted across n processing paths through the selective queuing and dequeuing of the data blocks based on a determined maximum differential delay among each of the n processing paths.
The foregoing description of preferred embodiments of the present invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, while series of steps have been described with regard to
The scope of the invention is defined by the claims and their equivalents.
This application is a continuation of U.S. application Ser. No. 10/358,274, filed Feb. 5, 2003, which claims priority under 35 U.S.C. §119 based on U.S. Provisional Application No. 60/354,208, filed Feb. 6, 2002, the disclosures of which are incorporated herein by reference.
Number | Date | Country
---|---|---
20110196999 A1 | Aug 2011 | US
Number | Date | Country
---|---|---
60354208 | Feb 2002 | US
 | Number | Date | Country
---|---|---|---
Parent | 10358274 | Feb 2003 | US
Child | 13090362 | | US