This invention is in the field of integrated circuit memory architecture, and is more specifically directed to the architecture of memories as used in first-in-first-out (FIFO) buffers.
The widespread deployment and use of computer networks over recent years has relied, in large part, on Ethernet technology. As is well known in this field, Ethernet networking involves the use of packet-based communications, by way of which each communication is transmitted in multiple packets, each packet including identification information such as the destination and source addresses for the message, and the position of the packet within the sequence that makes up the overall communication. This approach enables the efficient deployment of high-capacity networks, which may be quite complex in structure, in a manner that permits the reliable and efficient transmission of digital data of varying types and sizes. Indeed, Ethernet technology has played a large part in the development and deployment of the Internet.
Packet-based communications have played a large role in the realization of digital multimedia communications, and in communications at varying Quality of Service (QoS) levels. The different QoS levels specify different data performance, and can support different priority communications and, ultimately, can support different rates or tariffs that can be charged to the users. Examples of QoS levels or classes include constant bit rate (CBR), unspecified bit rate (UBR, also referred to as best-effort), and variable bit rate (VBR). These well-known categories ensure the fair allocation of the available bandwidth over a communications channel.
Typically, lossless flow control in the Ethernet context involves some amount of buffering of the transmitted data at each node in the network. This buffering is often necessary because of the variable nature of the architecture of a given network, and because of the possibility that a bottleneck may exist somewhere in the network downstream from a given network node. In order for flow control to be lossless, sufficient buffer capacity must be provided to store one or more packets, at the receive and transmit sides of a node, in the event of a downstream bottleneck; the buffered packets can then be forwarded later, when network conditions permit. This buffering is often accomplished by way of a dual-port memory, acting as a dual-port first-in first-out (FIFO) buffer. The dual-port buffer permits simultaneous, and asynchronous, reads and writes to the buffer, which is particularly beneficial considering the asynchronous nature of the communications on both sides of a given network node. Given a FIFO buffer of sufficient capacity, lossless flow control can be readily carried out.
However, modern Ethernet technology is now capable of carrying extremely high data rates, such as in the 10 Gigabit Ethernet backbone networks now becoming popular. At these data rates, relatively large FIFO buffers are required to attain lossless flow control. The buffer size required for lossless flow control also increases with increasing distances between network nodes. This relationship among buffer size, data rate, and node distance results from the handshaking manner in which flow control operations are carried out between the transmitting and receiving nodes over a network link. For example, the receiving FIFO buffer at a network node will rapidly fill up if a bottleneck occurs downstream from the receiving node. Once this FIFO buffer fills past a threshold value, the transmit side of the receiving node issues a pause request to the transmitting network node, requesting a pause in the transmission of data packets. Upon receipt of the pause request, the transmitting node finishes transmitting the current packet, the remainder of which must be buffered at the receiving node for the communication to remain lossless. Accordingly, the FIFO buffer at the receive node must have sufficient capacity to store the volume of data that is transmitted during the delay required for the receiving node to initiate the pause request, during the delay of the transmitting node in receiving and processing the pause request, and also during the transmission of the remainder of the current packet. A high data rate thus necessitates a rather large buffer for lossless operation. Long distances between network nodes also contribute to the required FIFO capacity because the FIFO must also buffer the bits that are in transit over the facility.
It has been discovered, in connection with this invention, that conventional dual-port memory FIFOs are too small for reasonable cable lengths in Gigabit Ethernet Metro Area Network (MAN) implementations. This is because dual-port memory is extremely expensive from the standpoint of integrated circuit chip area (“silicon area”). For example, under current application specific integrated circuit (ASIC) technologies, dual-port RAMs or two-port register files are realistically limited to about 250 kbits in size. Unfortunately, a 250 kbit FIFO is virtually useless for loss-less Gigabit Ethernet, as a buffer of this size can support no more than about 2 km of cable length for even the most forgiving 10GE class (10 GBASE-X, supporting only normal packets).
In contrast, realistic Metro Area Networks (MANs) should have cable lengths on the order of 40 km, and should be capable of Gigabit Ethernet communications supporting jumbo packets according to 10 GBASE-W. According to the analysis described above, this functionality requires FIFO capacities on the order of several megabits, which are prohibitively expensive to realize via dual-port RAM, especially considering the recent trend toward integrating the media access control (MAC) circuitry for Ethernet and other packet-based networking into a single integrated circuit. For these reasons, a disconnect exists between the available technology for FIFO memory and the functional needs of Gigabit Ethernet in the MAN environment.
By way of further background, many other applications of dual-port memories also exist in the art. Typically, dual-port memories are useful at any high-data-rate or high-data-volume interface between asynchronous system elements. Examples of such interfaces include data transfer interfaces between computer subsystems or input/output devices and a main computer bus, non-packet-based networking applications, interfaces between dissimilar microprocessors or CPUs in a multiprocessing environment, emulation systems, and the like.
Accordingly, a need for cost-efficient dual-port memories exists not only in high-speed packet-based network communications, but in many systems and system applications.
It is therefore an object of this invention to provide a low-cost memory architecture having dual-port capability.
It is a further object of this invention to provide such a memory architecture having an array of single-port memory cells that is accessible in dual-port fashion.
It is a further object of this invention to provide such a memory architecture that is suitable for efficient implementation as embedded memory within a large scale logic device.
It is a further object of this invention to provide a media access controller for high-speed packet-based network communications, incorporating such a memory architecture for receive and transmit FIFO buffering.
It is a further object of this invention to provide such a controller that implements lossless flow control in the Gigabit Ethernet context, for cable lengths on the order of tens of kilometers and greater, for example in the Metro Area Network (MAN) context.
Other objects and advantages of this invention will be apparent to those of ordinary skill in the art having reference to the following specification together with its drawings.
The present invention may be implemented into a memory architecture including a double width array of conventional single-port memory cells. The memory array is of a double-word-width relative to the external data width. A write buffer buffers two data words, with writes to the memory array being of double-word width. On the read side, external read requests are buffered so that reads from the memory array are of double width, effected upon two or more requests being received. Sequential logic controls the memory so that asynchronous external reads and writes are internally performed as scheduled reads from and writes to the memory array.
a is a functional data flow diagram, in block form, of a high-speed data network, including media access control (MAC) functions in two network nodes, into which the preferred embodiment of the invention is implemented.
b is an electrical diagram, in abstract block form, of the network of
FIGS. 4a and 4b are timing diagrams illustrating an example of asynchronous reads and writes from and to a FIFO memory according to the preferred embodiment of the invention.
The present invention will be described in connection with its preferred embodiment, namely as implemented into transmit and receive buffers of high data rate Ethernet network nodes. This exemplary description is selected because of the particular benefits that this invention provides in that environment. However, it will be understood by those skilled in the art having reference to this specification that this invention may be used in connection with first-in-first-out (FIFO) buffers in general, and as implemented in a wide range of applications. Accordingly, it is to be understood that the following description is provided by way of example only, and is not intended to limit the true scope of this invention as claimed.
An example of a high data rate communication system into which the preferred embodiment of the invention is implemented is shown in
The specific hardware into which network nodes 5A, 5B of
As shown in
Line card 20 performs various functions involved in transmitting data from downstream functions 10 over fiber optic facility FO, and in receiving signals from fiber optic facility FO for downstream functions 10. The functions performed by line card 20 include those functions involved in the layer 2 protocol processing (e.g., media access control, or MAC) and physical (PHY) protocol layers. As such, line card 20 is connected on its system side to downstream functions 10, and on its line side to fiber optic facility FO via laser diodes and amplifiers 32 (for transmit) and photodiode and amplifiers 34 (for receive).
According to the preferred embodiment of the invention, line card 20 is realized in a single integrated circuit device, for reduced cost and improved performance. This implementation of line card 20 may be referred to as an application specific signal processor (ASSP), and may be realized by way of one or more digital signal processors (DSPs). Alternatively, line card 20 may be implemented in several devices as a chipset, or integrated with additional functions such as some or all of downstream functions 10.
In the example of
Beginning with the transmit side of line card 20, transmit system interface 22T interfaces with downstream functions 10, preferably according to a modern interface protocol such as the System Packet Interface (SPI) standard. Interface 22 receives data from downstream functions 10, and forwards this data to transmit FIFO 24T for eventual transmission over fiber optic facility FO. Transmit FIFO 24T, which equates to transmit MAC FIFO 2A or 2B in the functional diagram of
Transmit MAC processor 26T processes the buffered received data into the appropriate form for transmission over facility FO, according to the particular protocols in place. Typically, it is contemplated that such operations as flipping of bytes from LSB-first to MSB-first, stripping of Ethernet headers, byte counting, insertion of control bytes, character or symbol alignment, and formatting into packets, are performed by transmit MAC processor 26T. The formatted and processed data, in the appropriate format such as XGMII (10 Gbit Media Independent Interface), are then forwarded to transmit XAUI 28T.
Transmit XAUI (10 Gbit Attachment Unit Interface) 28T converts the data into the appropriate format, an example of which is a forty-bit four-channel datapath. This XAUI-formatted data are then forwarded to transmit serializer 30T, which converts the parallel channel data into serial streams that are compliant with the transmission standard. For example, where the output of transmit XAUI 28T is provided as four channels of ten-bits in parallel, transmit serializer 30T serializes these four channels into four single-bit serial channels, each operating at high speeds such as on the order of 3.125 Gbps for 10 Gbit communications. The serial data output by transmit serializer 30T are then applied to laser diode and amplifier block 32, which generates the appropriate optical signals applied to fiber optic facility FO.
The process is reversed on the receive side of line card 20. Incoming optical signals are converted by photodiode and amplifiers 34 into an electrical signal that is forwarded to deserializer 30R in line card 20. The parallel channels output by deserializer 30R are applied to receive XAUI 28R, which reformats the signals into the desired format, for example XGMII, and applies the received signals to receive MAC processor 26R. MAC processor 26R performs such functions as MSB-first to LSB-first translation, adding Ethernet headers, byte counting, removal of control bytes, character or symbol alignment, and formatting into the desired packet structure. In this regard, transmit and receive XAUI 28T, 28R, serializer 30T, and deserializer 30R may be embodied within a single integrated circuit device, referred to in the art as a transceiver; of course, the particular boundaries of integrated circuits used in line card 20 may vary widely, depending upon the manufacturing technology and the available functionality.
Receive FIFO 24R equates to receive MAC FIFO 8A or 8B in the functional diagram of
Referring back to the functional diagram of
In order to ensure lossless communications, regardless of the particular MAC protocol being used, receive MAC FIFO 8A (e.g., receive FIFO 24 in line card 20) must have sufficient capacity, beyond the capacity threshold at which it issues a pause frame request, to store packet data that continues to be transmitted after its pause frame request but until the pause actually begins. The size of receive MAC FIFO 8A, and also the threshold at which it issues a pause frame request, must contemplate this absorption of traffic. One can estimate this capacity by estimating the delay times, preferably as a number of clock cycles, involved from the time that the pause frame request is issued until the pause is actually effected by the transmitting node, and then multiplying the sum of these delay times by the data transmission rate to arrive at the necessary capacity. This capacity determination can be more easily considered with reference to
C/2=2(Dint+Dext)+Dpkt  (1)
where the delay times D are measured in terms of “bit times” (i.e., number of clock cycles multiplied by the bits transmitted per cycle); the delay components Dint, Dext, and Dpkt are described below.
Typically, as known in the art, receive FIFO 8A will initially store an amount of data that fluctuates in a range well below its “full” threshold. Once this “full” threshold is reached, however, then the amount of data stored within receive FIFO 8A will tend to fluctuate around the “full” threshold, maintained by its issuing of pause frame requests to the transmitting node.
The actual physical size of the FIFO buffers can be calculated for specific communications protocols and systems. For example, in modern 10 Gigabit Ethernet wide area networks (WANs) that implement so-called “Metro Networks”, typical packet lengths can be as long as 1526 byte payloads, with 118 overhead bytes. If so-called “jumbo” packets are permitted, the packet lengths can be as long as 10000 bytes, with 780 overhead bytes. The worst case delay time Dpkt for jumbo WAN packets is therefore about 86,240 bit times (10780*8). The internal delay component Dint can vary from 14848 to 30720 bit times, depending upon the particular type of 10GE coding (i.e., 10 GBASE-R, -W, or -X). The external delay component Dext of course depends upon the length of the cable between network nodes 5A, 5B, and the relative velocity of light within the facility:
Dext=(10^10*M)/(n*c)  (2)
where M is the cable length in meters, n is the relative speed of light in the cable (relative to c, the speed of light in free space), which varies from 0.4 to 0.9, and the factor of 10^10 is the 10 Gbit/s line rate, which expresses the propagation delay in bit times.
One can evaluate the necessary memory capacity C for the receive MAC FIFOs 8A, 8B from a combination of equations (1) and (2). In addition, transmit MAC FIFOs 2A, 2B must have the same capacity, considering that the upstream data sources will continue to forward data to be transmitted during the lossless flow control pauses in traffic. It is more useful to consider the maximum distance M that can be supported by a given FIFO capacity C. It has been discovered, from such an evaluation, that a capacity C of 250 kbits or less is virtually useless for lossless flow control functionality, as this capacity provides at most on the order of 2 km of cable length for even the most forgiving 10GE class (10 GBASE-X, for n=0.9, supporting normal packets only). A more desirable installation for a conventional Metro Area Network is a cable length M on the order of 40 km, supporting jumbo packets according to 10 GBASE-W; this functionality requires a FIFO capacity C on the order of several megabits.
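By way of illustration only, the sizing analysis above may be exercised numerically. The following short sketch evaluates equations (1) and (2) as reconstructed above; the 10 Gbit/s line rate, the value of c, the assumed n=0.7, and the function names are assumptions made for purposes of this example only and are not part of the described embodiment.

```python
# Illustrative sizing of a lossless-flow-control FIFO, per equations (1) and (2).
# Assumed values: 10 Gbit/s line rate, c = 3.0e8 m/s; the delay figures are the
# bit-time values quoted in the description above.

C_LIGHT = 3.0e8        # speed of light in free space, meters per second
LINE_RATE = 10.0e9     # 10 Gigabit Ethernet line rate, bits per second

def external_delay_bit_times(cable_m, n):
    """Equation (2): propagation delay over the cable, expressed in bit times."""
    return (cable_m / (n * C_LIGHT)) * LINE_RATE

def required_capacity_bits(d_int, d_pkt, cable_m, n):
    """Equation (1), solved for C, where C/2 = 2*(Dint + Dext) + Dpkt."""
    d_ext = external_delay_bit_times(cable_m, n)
    return 2.0 * (2.0 * (d_int + d_ext) + d_pkt)

# 40 km Metro span, 10 GBASE-W with jumbo packets (Dint = 30720, Dpkt = 86240),
# with an assumed n = 0.7.
print(required_capacity_bits(d_int=30720, d_pkt=86240, cable_m=40000, n=0.7))
```

For the assumed parameters the printed capacity is in the megabit range, far beyond the practical size limit of dual-port RAM noted above.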
As mentioned above, FIFO buffers are typically implemented by memories of conventional dual port architecture. The dual port architecture enables efficient FIFO operation, as the two ports to each memory cell can operate in effectively an asynchronous manner, matching the asynchronous buffer function of the FIFO itself. However, also as mentioned above, dual port memories of megabit capacity are prohibitively expensive, especially when considering the integration of such memories as on-chip functions within a VLSI integrated circuit. In contrast, conventional single port memories of this size can be realized within reasonable chip area, and are suitable for implementation into complex logic circuitry, such as that logic involved in a Gigabit Ethernet transceiver function such as line card 20 of
According to the preferred embodiment of the invention, therefore, transmit and receive FIFOs 24T, 24R of line card 20 are constructed as memories having a single port memory array, for example with a capacity of up to several megabits, but which appear as a dual-port buffer to the external circuits and functions that write to and read from FIFOs 24T, 24R. FIFOs 24T, 24R are thus capable of serving as transmit and receive MAC FIFOs 2, 8 in the network arrangement of
Referring now to
However, for the example of the network node of
Referring now to
In
As shown in
Prior to the first rising edge of clock CLK in
As shown in the examples of
Referring now to
FIFO memory 40 also includes the appropriate power supply and clock control circuitry as conventional in the art.
The write interface to FIFO memory 40 is realized by write buffer 42, according to this preferred embodiment of the invention. As shown in
According to the preferred embodiment of the invention, as evident from the foregoing description, the conversion of single-word-width external input and output to and from double-word-width internal reads and writes enables the implementation of FIFO memory 40 using single port memory cells in SRAM array 45. The double word width domain of FIFO memory 40 resolves conflicts between asynchronous reads and writes to SRAM array 45 by assigning either a read cycle or a write cycle to each clock cycle, and by including a two-stage, two-word-wide buffer at both the read and write interfaces.
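By way of illustration of this bandwidth argument, the following sketch verifies that one double-word access per clock cycle, alternating between read cycles and write cycles, provides the same read and write throughput as a dual-port, single-word-wide buffer serviced on both ports every cycle. The word widths are assumed for purposes of this example only.

```python
# Throughput sketch: with the internal word width doubled and clock cycles
# alternately assigned to reads (RCYC) and writes (WCYC), a single-port array
# matches the bandwidth of a dual-port, single-width buffer.
# All widths below are illustrative assumptions.

EXTERNAL_WORD_BITS = 32                 # assumed external (single) word width
INTERNAL_WORD_BITS = 2 * EXTERNAL_WORD_BITS   # internal double data word

# Over any two consecutive internal clock cycles (one WCYC plus one RCYC):
write_bits_per_two_cycles = INTERNAL_WORD_BITS   # one double-word write
read_bits_per_two_cycles = INTERNAL_WORD_BITS    # one double-word read

# A dual-port, single-width buffer moves one external word per port per cycle:
dual_port_bits_per_two_cycles = 2 * EXTERNAL_WORD_BITS

assert write_bits_per_two_cycles == dual_port_bits_per_two_cycles
assert read_bits_per_two_cycles == dual_port_bits_per_two_cycles
```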
An example of this buffering is illustrated in
In summary, the operation of the buffering according to the preferred embodiment of the invention involves (i) registering input and output data into the buffer in the fixed order (a1, a0, b1, b0, . . . ) in each cycle, and (ii) executing a read or a write operation in the corresponding read or write clock cycle if a column has matured in the read or write buffer, respectively. The operation of write buffer 42 in this manner will now be described relative to the state diagram of
After reset, or initialization, of FIFO memory 40, or upon a fault condition such as FIFO memory 40 becoming completely full, write buffer 42 enters “all empty” state 50. As shown in
Because no full column is registered in write buffer 42 in its state 52, no writes to SRAM array 45 are performed from this state. Additional clock cycles in which write enable line WREN is inactive will cause write buffer 42 to remain in state 52. An active signal on write enable line WREN causes write buffer 42 to register a data word in entry a0, moving write buffer 42 to state 54, in which a full column (column a) contains registered data. Because this column has now matured, a double data word becomes writable from write buffer 42 to SRAM array 45 upon the next write cycle (WCYC). If the next write cycle WCYC occurs in combination with an active level on write enable line WREN, the double data word is written to SRAM array 45 from column a of write buffer 42 and a data word is also received by write buffer 42 into entry b1, moving write buffer 42 to state 55 as shown in
In state 56, column a of write buffer 42 has matured, and entry b1 is also registered with data. Because state 56 is entered only during a read cycle RCYC, the next cycle of clock CLK is necessarily a write cycle (WCYC) according to this embodiment of the invention. The state change from state 56 thus depends upon whether an active level is received on write enable line WREN. If write enable line WREN is inactive in this next write cycle (WCYC), no new data is received, but the double data word is written from matured column a to SRAM array 45; write buffer 42 enters state 55 as a result. On the other hand, if write enable line WREN is active, a data word is registered in entry b0, the double data word is written to SRAM array 45 from column a of write buffer 42, and write buffer 42 moves to state 57.
In state 57, column b has matured. Accordingly, the next write clock cycle (WCYC) will cause the writing of this column b to SRAM array 45. If the next write cycle (WCYC) occurs in combination with an active level at write enable line WREN, the double data word is written from column b to SRAM array 45 and a new data word is registered in entry a1, placing write buffer 42 in state 52. If write enable line WREN is inactive in this next write cycle, only the double data word is written from column b to SRAM array 45 and no new data word is received, placing write buffer 42 in empty state 50. Conversely, if the next cycle is a read cycle (RCYC), column b of write buffer 42 remains matured and is not written to SRAM array 45. If this read cycle occurs with an active write enable (WREN), a new data word is registered in entry a1, and write buffer 42 enters state 58; conversely, if write enable line WREN is inactive in this read cycle, there is no change to the contents of write buffer 42 and it remains in state 57.
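The following behavioral sketch, provided for illustration only, models the column-maturity behavior described above. The class name, the flush ordering, and the simple data structures are assumptions of this example and do not represent the actual logic of write buffer 42.

```python
# Behavioral sketch of the two-column (a, b) write buffer.  Words arriving with
# WREN active are registered in the fixed order a1, a0, b1, b0, ...; a column
# whose two entries are both registered ("matured") is written to the
# single-port array as one double data word on the next write cycle (WCYC).

class WriteBufferSketch:
    ORDER = ("a1", "a0", "b1", "b0")

    def __init__(self, sram):
        self.sram = sram              # list standing in for SRAM array 45
        self.entries = {}             # entry name -> registered data word
        self.next_slot = 0            # next position in ORDER to fill

    def clock(self, cycle, wren, data=None):
        """One cycle of clock CLK; cycle is 'WCYC' or 'RCYC'."""
        # A column matured in an earlier cycle is flushed on a write cycle.
        if cycle == "WCYC":
            for col in ("a", "b"):
                hi, lo = col + "1", col + "0"
                if hi in self.entries and lo in self.entries:
                    self.sram.append((self.entries.pop(hi), self.entries.pop(lo)))
                    break             # at most one double-word write per WCYC
        # An external write (WREN active) registers one word per cycle.
        if wren:
            self.entries[self.ORDER[self.next_slot]] = data
            self.next_slot = (self.next_slot + 1) % len(self.ORDER)

sram = []
wb = WriteBufferSketch(sram)
for word, cyc in enumerate(["WCYC", "RCYC", "WCYC", "RCYC", "WCYC"]):
    wb.clock(cyc, wren=True, data=word)
print(sram)   # -> [(0, 1), (2, 3)]; word 4 waits in entry a1 for its column mate
```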
Referring back to
Also in the read context, it is often necessary for the destination of read data (e.g., transmit MAC 26T in the example of
Referring to the example of
The double-word width output from SRAM array 45 is applied to output data converter 46, which in turn applies single words of the read data to output FIFO 48 on data lines SRAM_rddata, along with an active signal on line SRAM_rdval to indicate the forwarding of valid data on lines SRAM_rddata. In this example, output FIFO 48 is preferably constructed as a small amount of dual-port RAM in combination with various control logic for effecting and controlling the read requests, and for synchronizing the output data to be applied on lines RDDATA at a relatively constant time relationship relative to the request on line RDEN. Output FIFO 48 thus effectively synchronizes the output data stream, despite the internal reads and writes within FIFO memory 40 being scheduled relative to one another. Output FIFO 48 forwards the requested data words on data lines RDDATA, and asserts an active level on line D_VAL to indicate the presence of valid data.
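For illustration only, the read path can be sketched in a similar behavioral style: external read requests are accumulated, a double data word is fetched from the single-port array on a read cycle once two or more requests are pending, and the output converter unpacks the double word into single words for the output FIFO. The names and the simplified request accounting below are assumptions of this example, not the actual logic of output FIFO 48.

```python
# Behavioral sketch of the read path: read requests (RDEN) are accumulated, a
# double data word is fetched from the array on a read cycle (RCYC) once two or
# more requests are pending, and the result is unpacked into single words.

from collections import deque

class ReadPathSketch:
    def __init__(self, sram):
        self.sram = sram                  # list of stored double data words
        self.read_addr = 0
        self.pending_requests = 0         # buffered external read requests
        self.output_fifo = deque()        # stands in for the small output FIFO

    def clock(self, cycle, rden):
        if rden:
            self.pending_requests += 1
        # One double-word fetch per RCYC, once two or more requests are pending.
        if (cycle == "RCYC" and self.pending_requests >= 2
                and self.read_addr < len(self.sram)):
            hi, lo = self.sram[self.read_addr]        # double-width SRAM_rddata
            self.read_addr += 1
            self.pending_requests -= 2
            self.output_fifo.extend([hi, lo])         # unpacked to single words
        # Present at most one word per cycle to the external destination.
        return self.output_fifo.popleft() if self.output_fifo else None

rp = ReadPathSketch(sram=[(0, 1), (2, 3)])
for cyc in ["WCYC", "RCYC", "WCYC", "RCYC", "WCYC", "RCYC"]:
    print(rp.clock(cyc, rden=True))       # words emerge after a short latency
```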
In the network communications context of
According to this embodiment of the invention, therefore, the combination of alternating read and write clock cycles (RCYC, WCYC) with write buffer 42 having two double-data-word entries enables asynchronous writes to FIFO memory 40 from external sources, while permitting scheduled writes to the single port SRAM array 45 within FIFO memory 40.
The foregoing construction of a FIFO memory according to the preferred embodiment of the invention is useful for all applications of FIFO buffers in electronic systems, by providing a memory that externally appears as a dual port memory, but which can be implemented by way of conventional single port RAM cells. In the network communications context, in which the data passing through FIFO memory 40 is, or is to be, arranged in packets, FIFO memory 40 preferably includes additional control functionality to manage these packets. This control functionality for packet management, according to the preferred embodiment of the invention, will now be described relative to the state diagram of
In general, packet management logic is provided to ensure the coherence of packets that are read from FIFO memory 40. In this regard, it is important for control logic 47 to comprehend the number of packets that are stored within FIFO memory 40, and to comprehend the start and end of these stored packets. In addition, FIFO memory 40 according to this embodiment of the invention has the capability of handling “cut through” packets, which are packets of a size greater than a configurable threshold. For packets of a size greater than this threshold, the reading out of the beginning of the packet from FIFO memory 40 is permitted prior to the writing in of the end of the packet. As such, the “cut through” packet will not have both a start and end within FIFO memory 40 at any given time.
The state diagram of
These, and other, state variables operate in connection with control logic 47 to manage the packets being written into and read from FIFO memory 40.
Referring now to the state diagram of
Control logic 47 remains in one_pkt_rdy state 74 when the start of a next packet is written into FIFO memory 40 (write_sop), during which time the writing_pkt state variable is set. Upon completion of the write of this next packet to FIFO memory 40 (write_eop), control logic 47 makes a transition to pkts_rdy state 75, clearing the writing_pkt state variable and incrementing the #pkt state variable to the new count (#pkt=2). The pkts_rdy state 75 indicates that more than one packet is stored within FIFO memory 40.
On the other hand, from one_pkt_rdy state 74, if this single packet is being read (read_sop), control logic 47 makes a transition to tx_last_pkt state 76, setting the sending_pkt state variable. As evident from its name, tx_last_pkt state 76 refers to the state in which the only or last remaining packet is in the process of being read from FIFO memory 40. Upon completion of the read of this last or only packet (read_eop, in combination with the confirming condition of #pkt−1=0), control logic 47 makes a transition to pkt_brake state 80, which will be described below. On the other hand, control logic 47 remains in tx_last_pkt state 76 if the start of a new packet is being written (write_sop), setting state variable writing_pkt. If the end of this next packet is written before the last packet is read out (write_eop, in combination with the confirming condition of #pkt+1 being greater than or equal to 2), control logic 47 makes a transition to pkts_rdy state 75, incrementing state variable #pkt and clearing state variable writing_pkt.
This operation of FIFO memory 40 continues in this manner, with control logic 47 managing the possibility of zero, one, or more than one packet being stored. To handle “cut-through” packets, control logic 47 enters cut_thru state 78 from first_sop state 72, in response to state variable pass_cut_thru_Th indicating, in the true condition, that the size of the packet for which an SOP has been written (state 72) is greater than a preselected threshold size. In connection with this transition, state variable sending_pkt is set to indicate that this packet is readable from FIFO memory 40. Both of the state variables writing_pkt and sending_pkt are set in this condition, indicating that the same large packet is both being written and being read simultaneously. In cut_thru state 78, receipt of the end of the jumbo packet (write_eop) by FIFO memory 40 causes control logic 47 to make a transition to tx_last_pkt state 76, incrementing state variable #pkt and clearing the writing_pkt state variable. The last remaining packet in FIFO memory 40 at this point is the remainder of the jumbo packet.
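By way of illustration, the packet bookkeeping described above can be sketched as follows. The method names are hypothetical, and the sketch assumes that a packet's length is known when its start-of-packet is written, whereas the embodiment may instead compare the bytes already written against the cut-through threshold.

```python
# Behavioral sketch of the packet bookkeeping kept by control logic 47: a
# packet count (#pkt) plus writing_pkt / sending_pkt flags, updated on
# start-of-packet and end-of-packet events, with a cut-through allowance for
# packets larger than a configurable threshold.  Names are illustrative only.

class PacketManagerSketch:
    def __init__(self, cut_through_threshold_bytes):
        self.cut_through_threshold = cut_through_threshold_bytes
        self.num_pkts = 0            # packets wholly stored (the #pkt variable)
        self.writing_pkt = False     # a packet is currently being written
        self.sending_pkt = False     # a packet is currently being read out

    def write_sop(self, expected_size_bytes):
        self.writing_pkt = True
        # Cut-through: a sufficiently large packet becomes readable before its
        # end has been written into the FIFO (assumes the size is known at SOP).
        if expected_size_bytes > self.cut_through_threshold:
            self.sending_pkt = True

    def write_eop(self):
        self.writing_pkt = False
        self.num_pkts += 1           # one more complete packet is stored

    def read_sop(self):
        self.sending_pkt = True

    def read_eop(self):
        self.sending_pkt = False
        self.num_pkts -= 1

    def packet_ready(self):
        """A packet may be read if one is wholly stored or cut-through is active."""
        return self.num_pkts > 0 or self.sending_pkt

pm = PacketManagerSketch(cut_through_threshold_bytes=9000)
pm.write_sop(expected_size_bytes=1500)
pm.write_eop()
print(pm.packet_ready(), pm.num_pkts)    # -> True 1
```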
As mentioned above, the implementation of FIFO memory 40 in the network communications context described above relative to
The construction of output FIFO 48 according to this embodiment of the invention is illustrated in
As mentioned above, the downstream external destination of data from FIFO memory 40 asserts and deasserts a read strobe signal on read enable line RDEN to request and stop data access, respectively. According to this embodiment of the invention, and as mentioned above, one function of output FIFO 48 is to effect an immediate stop of output data upon read enable line RDEN going inactive. Referring to
Also in this case, strobe buffer 90 forwards a signal to dual port FIFO 92 in response to the deasserted read strobe on read enable line RDEN. Dual port FIFO 92 then takes line FIFO_rden to an inactive state. To the extent that reads from SRAM array 45 remain scheduled at this time, those scheduled reads continue to be executed, with the results stored in dual-port FIFO 92. In this way, the cessation of external read requests, or read strobes, results in the immediate cessation of output data externally from FIFO memory 40, while permitting the remaining scheduled reads of SRAM array 45 to take place.
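For illustration only, the following sketch models the “read brake” behavior described above: deasserting the read strobe stops the external presentation of data in the same cycle, while array reads that were already scheduled still complete into the small dual-port output FIFO. The names below are hypothetical.

```python
# Behavioral sketch of the "read brake": when the external read strobe (RDEN)
# is deasserted, presentation of data on RDDATA stops immediately, while reads
# of the single-port array that were already scheduled still complete and are
# parked in the small dual-port output FIFO.

from collections import deque

class ReadBrakeSketch:
    def __init__(self):
        self.parked = deque()            # dual-port output FIFO contents

    def sram_read_completes(self, double_word):
        # Already-scheduled array reads always land here, whether or not RDEN
        # is still asserted externally.
        self.parked.extend(double_word)

    def clock(self, rden):
        # Data is presented externally only while the read strobe is asserted.
        if rden and self.parked:
            return self.parked.popleft()     # drives RDDATA with D_VAL active
        return None                          # D_VAL inactive: output braked

rb = ReadBrakeSketch()
rb.sram_read_completes(("w1", "w0"))     # a read scheduled before the brake
print(rb.clock(rden=False))              # -> None  (output stops immediately)
print(rb.clock(rden=True))               # -> 'w1'  (parked data resumes later)
```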
Another situation in which the “read brake” function is useful occurs upon the sending out, or reading, of the end of the last full packet stored in FIFO memory 40. This amounts to a possible underflow condition at FIFO memory 40. Referring back to
Referring back to
In addition, output FIFO 48 includes threshold logic for generating its ready signal RD_RDY to the downstream function, and also to control the flow of data from SRAM array 45 into output FIFO 48. A preferred example of the thresholds according to which this threshold logic operates is schematically illustrated in
According to the preferred embodiment of the invention, therefore, an extremely cost-efficient FIFO memory is provided. The inventive FIFO memory appears, to its external functions, as a dual-port memory, capable of receiving asynchronous writes and reads. However, the FIFO memory according to this invention can be realized by way of conventional single-port memory cells, thus enabling the fabrication of an extremely large FIFO memory with dual-port functionality, well beyond the size of such a dual-port FIFO that can be feasibly implemented and integrated according to modern technology.
The size of the dual-port FIFO constructed according to this invention is especially beneficial in the network communications context, particularly in high data rate networks such as 10 Gbit Ethernet. The buffer function provided by the FIFO according to this invention permits the separation of network nodes by distances on the order of tens of kilometers, enabling the realization of extremely high data rate networks in the Metro context while providing an efficient and simple way to accomplish lossless flow control.
In addition, the dual-port FIFO according to the preferred embodiment of the invention includes packet management functionality, so that packet communications can be readily carried out at network nodes that implement this device.
While the present invention has been described according to its preferred embodiments, it is of course contemplated that modifications of, and alternatives to, these embodiments, such modifications and alternatives obtaining the advantages and benefits of this invention, will be apparent to those of ordinary skill in the art having reference to this specification and its drawings. It is contemplated that such modifications and alternatives are within the scope of this invention as subsequently claimed herein.
This application is a divisional of U.S. patent application Ser. No. 10/601,816, which was filed on Jun. 23, 2003 and which is incorporated by reference herein for all purposes.