1. Field of the Invention
The present invention relates generally to data processing devices, and more particularly, to systems and methods for preserving the order of data processed by multiple processing paths in data processing devices.
2. Description of Related Art
In network devices that must deliver high throughput in forwarding a stream of data, a conventional approach is to provide n independent paths and distribute sub-streams of the data down each of the n paths. After processing by each of the n processing paths, the sub-streams are recombined to create an output stream. A problem that arises using this technique is that the different processing paths may have different delays. As a result, if a first block of data (e.g., a packet or cell) is sent down a first path at time t1 and a second block of data is sent down a second path at time t2>t1, the second block of data may nonetheless finish being processed before the first. Therefore, if nothing is done to correct for this differential delay, the recombined stream of data will be out-of-order relative to the input stream. Out-of-order blocks of data can be problematic in a number of networking applications.
There are well-known algorithms for restoring order to mis-ordered streams at recombination time, based on attaching sequence numbers to consecutive blocks at input, and sorting blocks to restore consecutive sequence numbers on output. However, in some applications, a given output will not receive all sequence numbers from a given input, making the standard sorting algorithms impractical.
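The well-known approach can be illustrated with a minimal sketch (a simplified Python model for illustration only; the function and data layout are assumptions, not taken from this specification): consecutive sequence numbers are attached at input, and blocks are released at output only when the next consecutive number is available.

```python
# Minimal sketch of sequence-number re-ordering: buffer blocks until the
# next consecutive sequence number has arrived, then release in order.
def reorder_by_sequence(arrivals, start=0):
    """arrivals: iterable of (sequence_number, block) in completion order.
    Yields blocks in sequence order; a missing number stalls all later ones."""
    pending = {}
    next_seq = start
    for seq, block in arrivals:
        pending[seq] = block
        while next_seq in pending:
            yield pending.pop(next_seq)
            next_seq += 1

# Out-of-order completion is corrected:
restored = list(reorder_by_sequence([(1, "b"), (0, "a"), (2, "c")]))
# restored == ["a", "b", "c"]

# But if this output never receives sequence number 1 (it was routed to a
# different output), every later block stays buffered -- the impracticality
# noted above:
stalled = list(reorder_by_sequence([(0, "a"), (2, "c")]))
# stalled == ["a"]; block "c" is never released
```

The second call shows why the standard algorithm breaks down when a given output does not receive all sequence numbers from a given input.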
Therefore, there exists a need for systems and methods that can preserve the order of blocks of data in data streams that have been distributed across multiple paths in a network device.
Systems and methods consistent with the principles of the invention address this and other needs by providing a re-ordering mechanism that re-orders, by stream, data blocks received out-of-order from multiple processing paths. The re-ordering mechanism, consistent with the principles of the invention, keeps track of one or more processing characteristics associated with the processing of each data block that occurs within the multiple processing paths. The one or more tracked processing characteristics, thus, may be used as a stream identifier so that re-ordering of data blocks within each stream prevents a later data block in a stream from being forwarded earlier than an earlier data block in the same stream. Systems and methods consistent with the principles of the invention, therefore, may correct the out-of-order data blocks within streams that result from using parallel processing paths.
One aspect consistent with principles of the invention is directed to a method for preserving the order of blocks of data in multiple data streams transmitted across multiple processing paths. The method includes receiving input blocks of data on the multiple data streams in a first order and distributing the input blocks of data to the multiple processing paths. The method further includes receiving processed blocks of data from the multiple processing paths and re-ordering the processed blocks of data in the first order based on a count for each block of data.
A second aspect consistent with principles of the invention is directed to a method of re-ordering data blocks in multiple data streams. The method includes receiving input data blocks in a first order and processing the input data blocks, the processing including performing one or more route look-ups. The method further includes re-ordering the processed input data blocks based on a number of the one or more route look-ups associated with each of the input data blocks.
A third aspect consistent with principles of the invention is directed to a method of routing data blocks in multiple data streams. The method includes referencing routing data one or more times for each of the data blocks to determine an appropriate routing path for each of the data blocks. The method further includes re-ordering the data blocks within each data stream of the multiple data streams by comparing a number of routing data references associated with each of the data blocks. The method also includes routing each of the data blocks via the appropriate routing path.
A fourth aspect consistent with principles of the invention is directed to a method of re-ordering data blocks processed in multiple data streams, the processing including performing one or more route look-up operations for each of the data blocks. The method includes tracking, for each of the data blocks, a number of the one or more route look-up operations performed for each of the data blocks. The method further includes re-ordering the data blocks according to the number of route look-up operations performed for each of the data blocks.
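The count-based re-ordering common to these aspects can be sketched in software as follows (a hypothetical model; the function names and data layout are illustrative assumptions, not from this specification). The tracked number of route look-ups serves as a stream identifier, and blocks sharing an identifier are forwarded in their original input order.

```python
from collections import defaultdict, deque

# Hypothetical sketch: re-order processed blocks per stream, where a
# block's stream is identified by its tracked route look-up count.
def reorder_per_stream(input_order, finish_order):
    """input_order: list of (block_id, count) in first-order arrival.
    finish_order: block_ids in the order they exit the processing paths.
    Returns the forwarding order; each stream keeps its input order."""
    count_of = dict(input_order)
    queues = defaultdict(deque)      # count -> block ids, in input order
    for block_id, count in input_order:
        queues[count].append(block_id)
    finished, out = set(), []
    for block_id in finish_order:
        finished.add(block_id)
        q = queues[count_of[block_id]]
        # Forward every leading block of this stream that has finished;
        # a later block never overtakes an earlier block of its stream.
        while q and q[0] in finished:
            out.append(q.popleft())
    return out

# Blocks A and B (same count, hence same stream) stay in input order even
# though B finished first; C (different count) may pass them freely.
order = reorder_per_stream([("A", 3), ("B", 3), ("C", 5)], ["B", "C", "A"])
# order == ["C", "A", "B"]
```

Note that ordering is enforced only within a stream, which is precisely what distinguishes this mechanism from the global sequence-number sort described in the background.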
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, explain the invention. In the drawings,
The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and their equivalents.
Processing paths 120 may include any number of devices that may independently process blocks of data received from input interface 110. Such devices may be connected in series and/or parallel and may include multiple processors, such as, for example, route look-up processors. For example, each processing path 120 may perform a route look-up process for each received block of data to determine an appropriate outgoing route for the data block. Each route look-up process may include, for example, a number of references to a memory (not shown) that includes routing data accumulated through conventional routing protocols. Consistent with the principles of the invention, any processing characteristic, or combination of processing characteristics, associated with each block of data may be used for stream identification. For example, the number of references to memory for route look-up for each block of data may be used as a stream identifier. Streams may also be identified in other ways, such as, for example, by counting the number of times context is switched for a particular block. Blocks of data with different counts, therefore, may be considered to be from different streams, while blocks of data with the same count may be from the same stream. A combination of multiple criteria may also be used for identifying a stream (e.g., a number of references to memory for route look-up and a number of times context is switched).

When one of processing paths 120 receives a data block from input interfaces 110, it sends a “new lookup” signal to output interfaces 130. Alternatively, output interfaces 130 could snoop the buses between input interfaces 110 and processing paths 120 and determine that a new lookup has started based on the snoop. When one of processing paths 120 finishes a route lookup, it sends a “lookup finished” signal to output interfaces 130.
Output interfaces 130 may include circuitry for re-ordering blocks of data received from the n processing paths 120 and outputting the re-ordered blocks of data as an outgoing data stream.
As illustrated, each output interface 130 may include a group of process entries 205-1 through 205-n connected via an info bus 210 and a retire bus 215. Each process entry 205 may keep track of one or more processing characteristics, such as, for example, a count for the route look-up process associated with that process entry 205, and may retire the process (i.e., send the corresponding data block out an outgoing interface) under appropriate conditions. Each process entry 205 may receive “new lookup,” “count update,” and “lookup finished” signals from the processing paths via info bus 210. Each process entry 205 may further receive, via retire bus 215, data indicating which processes have retired.
The count(s) contained in COUNT register(s) 310, and the corresponding process number of the current process entry 205, may be passed by MUX 330 to info bus 210 utilizing, for example, time division multiplexing (TDM). Comparator 325 may compare a count, such as, for example, a memory reference count, from COUNT register(s) 310 with the counts received from all other process entries 205. For each other process entry that has a count greater than the value in its own COUNT register(s) 310, comparator 325 may clear a corresponding bit in BUSY_VECTOR register 315. BUSY_VECTOR register 315, therefore, keeps track of which other process entries 205 have higher counts. When one of process entries 205 is completed, each other process entry 205 with a higher count may be considered to be part of a different data stream.
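The comparator's book-keeping can be modeled in software as follows (a hypothetical sketch; the bit widths, indexing, and function name are assumptions for illustration): each process entry clears, in its BUSY_VECTOR, the bit of any other entry whose count is higher, marking that entry as belonging to a different data stream.

```python
# Hypothetical model of comparator 325's effect on BUSY_VECTOR: clear the
# bit of every other process entry whose count exceeds this entry's count.
def update_busy_vector(busy_vector: int, my_count: int, other_counts: dict) -> int:
    """other_counts maps a process-entry index to that entry's count."""
    for index, count in other_counts.items():
        if count > my_count:
            busy_vector &= ~(1 << index)   # higher count: different stream
    return busy_vector

# An entry with count 2 observes counts {0: 1, 1: 3, 2: 2}: only entry 1
# has a higher count, so only bit 1 is cleared.
vector = update_busy_vector(0b111, 2, {0: 1, 1: 3, 2: 2})
# vector == 0b101
```

The remaining set bits thus identify the entries that may still belong to the same stream with an equal or lower count.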
DEPENDENCY_VECTOR register 320 may indicate which process entries were active when the current process entry was assigned a route look-up process. By definition, each process entry that was active when the current process was assigned a route look-up has a smaller sequence number than the current process entry. For example, DEPENDENCY_VECTOR register 320 may include a bit for each process entry 205, with each bit set if the corresponding process entry was active when the current process entry was assigned. AND gate 335 may logically AND the bits of BUSY_VECTOR register 315 and DEPENDENCY_VECTOR register 320 to determine whether the current process may be retired and the corresponding data block sent out to an outgoing interface. The process's retiring condition may include the following:
A determination may then be made whether an indication of one or more processing characteristics, such as, for example, a memory reference, has been received via process memory reference line 345 (act 530). If not, the process may continue at act 615.
A determination may then be made whether a logical AND of the bits in DEPENDENCY_VECTOR register 320 and BUSY_VECTOR register 315 produces a logical zero value (act 705).
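This retiring condition reduces to a single bitwise test, sketched here under the same illustrative assumptions as above (function name and vector widths are hypothetical): an entry may retire once every entry recorded in its DEPENDENCY_VECTOR has either retired or been cleared from its BUSY_VECTOR as carrying a higher count, i.e., as belonging to a different stream.

```python
# Hypothetical sketch of the check performed by AND gate 335.
def may_retire(busy_vector: int, dependency_vector: int) -> bool:
    # The bitwise AND is zero exactly when no remembered dependency is
    # still busy with an equal-or-lower count.
    return (busy_vector & dependency_vector) == 0

# Dependencies are entries 0 and 1 (0b011).  While entry 1 is still busy
# with an equal-or-lower count (bit 1 set in BUSY_VECTOR), the current
# entry must wait; once that bit clears, it may retire.
blocked = may_retire(0b110, 0b011)   # False: entry 1 still blocks
cleared = may_retire(0b100, 0b011)   # True: all dependencies resolved
```

This guarantees that a block is never forwarded ahead of an earlier block of the same stream, while blocks of different streams impose no ordering constraint on one another.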
Consistent with the principles of the present invention, a re-ordering mechanism re-orders, by stream, data blocks received out-of-order from multiple processing paths. The re-ordering mechanism keeps track, for example, of a number of memory references that occur when a route look-up is performed for routing each received data block. The number of memory references, for example, may be used as a stream identifier so that re-ordering of data blocks within each stream prevents a sequentially later data block in a stream from being forwarded earlier than a sequentially earlier data block in the same stream.
The foregoing description of preferred embodiments of the present invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. While series of acts have been described in
No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used.
The scope of the invention is defined by the claims and their equivalents.
The instant application claims priority from provisional application No. 60/382,020, filed May 22, 2002, the disclosure of which is incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5282201 | Frank et al. | Jan 1994 | A |
5898873 | Lehr | Apr 1999 | A |
6246684 | Chapman et al. | Jun 2001 | B1 |
6389419 | Wong et al. | May 2002 | B1 |
6477168 | Delp et al. | Nov 2002 | B1 |
6546391 | Tsuruoka | Apr 2003 | B1 |
6600741 | Chrin et al. | Jul 2003 | B1 |
6618760 | Aramaki et al. | Sep 2003 | B1 |
6747972 | Lenoski et al. | Jun 2004 | B1 |
6788686 | Khotimsky et al. | Sep 2004 | B1 |
6816492 | Turner et al. | Nov 2004 | B1 |
6876952 | Kappler et al. | Apr 2005 | B1 |
6967951 | Alfano | Nov 2005 | B2 |
7072342 | Elnathan | Jul 2006 | B1 |
7085274 | Rahim et al. | Aug 2006 | B1 |
7120149 | Salamat | Oct 2006 | B2 |
7289508 | Greene | Oct 2007 | B1 |
7586917 | Ferguson et al. | Sep 2009 | B1 |
7953094 | Greene | May 2011 | B1 |
20010049729 | Carolan et al. | Dec 2001 | A1 |
20020075873 | Lindhorst-Ko et al. | Jun 2002 | A1 |
20020122424 | Kawarai et al. | Sep 2002 | A1 |
20020131414 | Hadzic | Sep 2002 | A1 |
20020147721 | Gupta et al. | Oct 2002 | A1 |
20020150043 | Perlman et al. | Oct 2002 | A1 |
20030012199 | Ornes et al. | Jan 2003 | A1 |
20030081600 | Blaker et al. | May 2003 | A1 |
20030095536 | Hu et al. | May 2003 | A1 |
20030099232 | Kudou et al. | May 2003 | A1 |
20030123447 | Smith | Jul 2003 | A1 |
20050018682 | Ferguson et al. | Jan 2005 | A1 |
20050025152 | Georgiou et al. | Feb 2005 | A1 |
20050089038 | Sugai et al. | Apr 2005 | A1 |
Number | Date | Country |
---|---|---|
60382020 | May 2002 | US |