The present invention is directed to a variable size First In First Out (FIFO) memory with head and tail caching.
Communications networks now require handling of data at very high serial data rates. For example, 10 gigabits per second (Gbps) is common. When data must be processed at these speeds, high speed parallel data connections are used to increase the effective bandwidth. This may be unsatisfactory because the increased overhead requirements reduce the usable bandwidth. There is a need for effective high speed switching apparatus and the associated hardware to support such an apparatus.
It is therefore an object of the present invention to provide a variable size First In First Out (FIFO) memory.
In accordance with the above object, there is provided a variable size first in first out (FIFO) memory comprising a head FIFO memory for sequentially delivering data packets at a relatively slow rate to a plurality of switching elements whereby some latency occurs between data packets. A tail FIFO memory stores an overflow of the data packets from the head memory. Both the head and tail memories operate at a relatively high data rate equivalent to the data rate of incoming data packets. A large capacity buffer memory is provided having an effectively lower clock rate than the FIFO memories for temporarily storing data overflow from the tail memory whereby the FIFO memories in combination with the buffer memory form a variable size FIFO memory.
As disclosed in a co-pending application entitled High Speed Channels Using Multiple Parallel Lower Speed Channels having Ser. No. 09/962,056, switching of input data arriving at a relatively high data rate of, for example, 10 Gbps, may be accomplished. As illustrated in
This is provided by the overall variable FIFO memory, which is a combination of a tail FIFO memory 16, a head FIFO memory 17 and the large scale off chip buffer memory 18. Variable blocks of data are formed by a receiver 11 and transferred through the tail FIFO memory 16 to the head FIFO memory 17 until the head FIFO memory is filled. Thus, the tail FIFO 16 routes data to the head FIFO memory 17, which then distributes data packets to the various switching elements. If the head FIFO memory becomes full, the tail FIFO memory will start filling. The tail FIFO will buffer enough data to keep the head FIFO filled. If the tail FIFO fills due to a sudden burst, data is then written on line 21 to the large scale off chip memory 18. This data is read from the large scale memory into the head FIFO when the head FIFO starts to empty.
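The data path just described can be sketched in software. The following is a minimal, illustrative model only; the class and method names are hypothetical and not from the specification. Writes land in the tail FIFO, the tail drains into the head while ordering allows, bursts spill into the large off-chip buffer, and the head is refilled from the buffer before the tail so packets emerge in arrival order:

```python
from collections import deque

class VariableFifo:
    """Sketch of the head/tail-cached variable size FIFO.
    Names are illustrative, not from the specification."""

    def __init__(self, head_depth, tail_depth):
        self.head = deque()    # on-chip head FIFO: oldest data, feeds the switch
        self.tail = deque()    # on-chip tail FIFO: newest data, absorbs input
        self.buffer = deque()  # large off-chip buffer: holds burst overflow
        self.head_depth = head_depth
        self.tail_depth = tail_depth

    def write(self, packet):
        self.tail.append(packet)
        # Tail drains into the head while ordering allows; once the head
        # is full (or the buffer already holds data), overflow spills
        # off chip, as on line 21 of the description.
        while self.tail:
            if not self.buffer and len(self.head) < self.head_depth:
                self.head.append(self.tail.popleft())
            elif len(self.tail) > self.tail_depth:
                self.buffer.append(self.tail.popleft())
            else:
                break

    def read(self):
        packet = self.head.popleft()
        # Refill the head from the off-chip buffer first (it holds the
        # oldest spilled data), then directly from the tail.
        while len(self.head) < self.head_depth:
            if self.buffer:
                self.head.append(self.buffer.popleft())
            elif self.tail:
                self.head.append(self.tail.popleft())
            else:
                break
        return packet

fifo = VariableFifo(head_depth=2, tail_depth=2)
for i in range(8):          # a burst larger than both on-chip FIFOs
    fifo.write(i)
out = [fifo.read() for _ in range(8)]   # packets emerge in arrival order
```

Note that once the burst drains, the model returns to its ordinary mode with all three stores empty, matching the description of the buffer being unloaded before normal operation resumes.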
From a practical standpoint, to operate at the data rate of 10 Gbps, tail FIFO 16 and head FIFO 17 are located on a common semiconductor substrate or chip, with the large scale buffer memory 18 being remotely located off chip. This is indicated by the dashed line 22. When the tail FIFO memory becomes full, the large scale off chip buffer memory 18 is utilized. Uniform blocks of data are stored, as indicated by the dashed line 23. For example, a block of 128 bytes is transferred on line 21 into the memory 18. This memory also uses a similar block size of 128 bytes. For example, line 21 may have a 64 bit width (meaning eight bytes), and thus a data block of 128 bytes is transferred in 16 clock cycles (16×8 bytes=128 bytes). Optimization of the bus width in all of the FIFO and buffer memories provides, in effect, a 100 percent efficient transfer technique, since on every clock cycle a maximum number of bits is transferred. However, buffer memory 18 has a lower clock rate and therefore a wider bus. In the present application this could be two read and two write cycles. The various write pointers and read pointers (WP and RP) are so indicated on the various memories, and the overall control is accomplished by the memory controller 26. A multiplexer 27 connected to memory controller 26 provides for control of the various data routings. When a sudden burst of data packets ceases, the FIFO memory can then return to its ordinary mode of operation, where the head FIFO memory 17 contains all of the inputted data packets as delivered by the tail FIFO memory. Of course, this does not occur until the large scale off chip buffer memory 18 is unloaded.
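The block-transfer arithmetic above can be checked with a small helper. This is an illustrative calculation, not part of the specification; it assumes one full-width bus transfer per clock cycle, and uses the 128 byte block and 64 bit bus figures from the description:

```python
def transfer_cycles(block_bytes, bus_width_bits):
    """Clock cycles needed to move one block across a bus of the given
    width, assuming one full-width transfer per cycle (illustrative
    helper; the 128 byte block and 64 bit bus are the figures above)."""
    bus_bytes = bus_width_bits // 8
    # Full efficiency requires blocks to be a whole number of bus words.
    assert block_bytes % bus_bytes == 0, "block must fill whole bus words"
    return block_bytes // bus_bytes

cycles = transfer_cycles(128, 64)   # 128 bytes / 8 bytes per cycle = 16 cycles
```

Doubling the bus width to 128 bits, as a slower off-chip memory might use, halves the cycle count for the same block, which is the trade the description makes for buffer memory 18.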
The foregoing operation is shown in a flow chart of
The larger external buffer memory 18 can be provisioned, using one of many allocation schemes, to support multiple head and tail FIFOs in the same manner as described.
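One such allocation scheme, a simple static equal partition of the shared external buffer among several head and tail FIFO pairs, can be sketched as follows. The specification leaves the scheme open, so this function and its parameters are hypothetical:

```python
def partition_buffer(total_bytes, num_fifos):
    """Divide a shared off-chip buffer into equal fixed regions, one per
    head/tail FIFO pair. A static equal split is only one of the many
    possible allocation schemes (hypothetical example).
    Returns a (start_offset, size) pair for each region."""
    region = total_bytes // num_fifos
    return [(i * region, region) for i in range(num_fifos)]

regions = partition_buffer(1024, 4)   # four 256 byte regions
```

Dynamic schemes that resize regions by demand would use the same buffer but trade allocation simplicity for better utilization under uneven traffic.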
Thus a variable FIFO memory with head and tail caching has been provided.
Number | Name | Date | Kind |
---|---|---|---|
4394725 | Bienvenu et al. | Jul 1983 | A |
4704606 | Hasley | Nov 1987 | A |
4754451 | Eng et al. | Jun 1988 | A |
5550823 | Irie | Aug 1996 | A |
5610914 | Yamada | Mar 1997 | A |
5659713 | Goodwin | Aug 1997 | A |
5845145 | James | Dec 1998 | A |
5905911 | Shimizu | May 1999 | A |
5961626 | Harrison | Oct 1999 | A |
5982749 | Daniel et al. | Nov 1999 | A |
6067408 | Runaldue et al. | May 2000 | A |
6122674 | Olnowich | Sep 2000 | A |
6172927 | Taylor | Jan 2001 | B1 |
6292878 | Morioka | Sep 2001 | B1 |
6389489 | Stone | May 2002 | B1 |
6442674 | Lee et al. | Aug 2002 | B1 |
6460120 | Bass | Oct 2002 | B1 |
6487171 | Honig | Nov 2002 | B1 |
6493347 | Sindhu | Dec 2002 | B2 |
6510138 | Pannell | Jan 2003 | B1 |
6557053 | Bass et al. | Apr 2003 | B1 |
6570876 | Aimoto | May 2003 | B1 |
6574194 | Sun et al. | Jun 2003 | B1 |
6611527 | Moriwaki | Aug 2003 | B1 |
6687768 | Horikomi | Feb 2004 | B2 |
6708262 | Manning | Mar 2004 | B2 |
6795870 | Bass | Sep 2004 | B1 |
20020054602 | Takahashi | May 2002 | A1 |
20020099855 | Bass | Jul 2002 | A1 |
20020122386 | Calvignac | Sep 2002 | A1 |