The present invention relates generally to computer communication systems and protocols, and, more particularly, to methods and systems for tracking and re-ordering TCP segments in a high-speed, limited-memory, dedicated TCP hardware device.
TCP/IP is a protocol system—a collection of protocols, rules, and requirements that enable computer network communications. At its core, TCP/IP provides one of several universally-accepted structures for enabling information or data to be transferred and understood (e.g., packaged and unpackaged) between different computers that communicate over a network, such as a local area network (LAN), a wide area network (WAN), or a public network, such as the Internet.
The “IP” part of the TCP/IP protocol stands for “Internet Protocol” and is used to ensure that information or data is addressed, delivered, and routed to the appropriate entity, network, or computer system. In contrast, “TCP,” which stands for “Transmission Control Protocol,” ensures that the actual content of the information or data that is transmitted is received completely and accurately. To ensure such reliability, TCP uses extensive error control and flow control techniques. The reliability provided by TCP, however, comes at a cost—increased network traffic and slower delivery speeds—especially when contrasted with less reliable but faster protocols, such as UDP (“User Datagram Protocol”).
A typical network 100 is illustrated in FIG. 1.
It is helpful to understand that the TCP/IP protocol defines discrete functions that are to be performed by compliant systems at different “layers” of the TCP/IP model. As shown in FIG. 2, the TCP/IP model comprises four such layers: a network access layer 210, an internet layer 220, a transport layer 230, and an application layer 240.
According to TCP/IP protocol, each layer plays its own role in the communications process. For example, out-going data from the source machine is packaged first at the application layer 240, and then it is passed down the stack for additional packaging at the transport layer 230, the internet layer 220, and then finally the network access layer 210 of the source machine before it is transmitted to the destination machine. Each layer adds its own header (and/or trailer) information to the data package received from the previous higher layer that will be readable and understood by the corresponding layer of the destination machine. Thus, in-coming data received by a destination machine is unpackaged in the reverse direction (from network access layer 210 to application layer 240), with each corresponding header (and/or trailer) being read and removed from the data package by the respective layer prior to being passed up to the next layer.
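For illustration, this layered packaging can be sketched in C, with each layer prepending its own header to the package received from the layer above. The following is a minimal sketch only; the header layouts are abbreviated stand-ins, not the actual wire formats defined by the relevant protocols.

    #include <stdint.h>
    #include <string.h>

    /* Abbreviated header stand-ins; real wire formats have more fields. */
    struct tcp_hdr { uint16_t src_port, dst_port; uint32_t seq, ack; };
    struct ip_hdr  { uint32_t src_addr, dst_addr; uint8_t protocol; };
    struct eth_hdr { uint8_t dst_mac[6], src_mac[6]; uint16_t ethertype; };

    /* Out-going data: each layer prepends its own header; the destination
     * machine reads and strips the headers in the reverse order on the
     * way back up the stack. */
    size_t encapsulate(uint8_t *frame, const uint8_t *payload, size_t len)
    {
        struct eth_hdr eth = { .ethertype = 0x0800 };   /* IPv4      */
        struct ip_hdr  ip  = { .protocol  = 6 };        /* 6 == TCP  */
        struct tcp_hdr tcp = { .src_port  = 49152, .dst_port = 80 };
        size_t off = 0;

        memcpy(frame + off, &eth, sizeof eth); off += sizeof eth; /* layer 210 */
        memcpy(frame + off, &ip,  sizeof ip);  off += sizeof ip;  /* layer 220 */
        memcpy(frame + off, &tcp, sizeof tcp); off += sizeof tcp; /* layer 230 */
        memcpy(frame + off, payload, len);     off += len;        /* layer 240 */
        return off;
    }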
The process 300 of encapsulating data at each successive layer is illustrated briefly in FIG. 3.
It should be understood that the amount of data that needs to be transmitted between machines often exceeds the amount of space that is feasible, efficient, or permitted by universally-accepted protocols for a single frame or segment. Thus, data to be transmitted and received will typically be divided into a plurality of frames (at the IP layer) and into a plurality of segments (at the TCP layer). TCP protocols provide for the sending and receipt of variable-length segments of information enclosed in datagrams. TCP protocols provide for the proper handling (transmission, receipt, acknowledgement, and retransmission) of segments associated with a given communication.
At its lowest level, computer communications of data packages or packets of data are assumed to be unreliable. For example, packets of data may be lost or destroyed due to transmission errors, hardware failure or power interruption, network congestion, and many other factors. Thus, the TCP protocols provide a system in which to handle the transmission and receipt of data packets in such an unreliable environment. For example, based on TCP protocol, a destination machine is adapted to receive and properly order segments, regardless of the order in which they are received, regardless of delays in receipt, and regardless of receipt of duplicate data. This is achieved by assigning sequence numbers (left edge and right edge) to each segment transmitted and received. The destination machine further acknowledges correctly received data with an acknowledgment (“ACK”) or a selective acknowledgment (“SACK”) back to the source machine. An ACK is a positive acknowledgment of data up through a particular sequence number. By protocol, an ACK of a particular sequence number means that all data up to but not including the sequence number ACKed has been received. In contrast, a SACK, which is an optional TCP protocol that not all systems are required to use, is a positive acknowledgement of data up through a particular sequence number, as well as a positive acknowledgment of up to 3-4 “regions” of non-contiguous segments of data (as designated by their respective sequence number ranges). From a SACK, a source machine can determine which segments of data have been lost or not yet received by the destination machine. The destination machine also advertises its “local” offer window size (i.e., a “remote” offer window size from the perspective of the source machine), which is the amount of data (in bytes) that the destination machine is able to accept from the source machine (and that the source machine can send) prior to receipt of (i.e., without having to wait for) any ACKs or SACKs back from the destination machine. Correspondingly, based on TCP protocols, a source machine is adapted to transmit segments of data to a destination machine up to the offer window size advertised by the destination machine. Further, the source machine is adapted to retransmit any segment(s) of data that have not been ACKed or SACKed by the destination machine. Other features and aspects of TCP protocols will be understood by those skilled in the art and will be explained in greater detail only as necessary to understand and appreciate the present invention. Such protocols are described in greater detail in a number of publicly-available RFCs, including RFCs 793, 2988, 1323, and 2018, which are incorporated herein by reference in their entirety.
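The cumulative ACK and SACK semantics described above can be summarized in a short C sketch. The structure below is illustrative (it mirrors, but is not, the RFC 2018 option encoding), and sequence number wraparound handling is omitted for clarity.

    #include <stdint.h>
    #include <stdbool.h>

    /* An ACK of sequence number `ack` acknowledges every byte up to but
     * NOT including `ack`.  A SACK additionally reports up to 3-4 regions
     * of non-contiguous data, each as a [left_edge, right_edge) range. */
    struct sack_block { uint32_t left_edge, right_edge; };

    struct ack_info {
        uint32_t          ack;      /* cumulative acknowledgment point */
        int               n_sack;   /* number of SACK regions reported */
        struct sack_block sack[4];
    };

    /* From the source machine's perspective: has the byte at `seq` been
     * acknowledged?  If not, it is a candidate for retransmission. */
    bool byte_acked(const struct ack_info *a, uint32_t seq)
    {
        if (seq < a->ack)
            return true;                        /* covered by the ACK       */
        for (int i = 0; i < a->n_sack; i++)
            if (seq >= a->sack[i].left_edge && seq < a->sack[i].right_edge)
                return true;                    /* covered by a SACK region */
        return false;
    }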
The act of formatting and processing TCP communications at the segment level is generally handled by computer hardware and software at each end of a particular communication. Typically, software accessed by the central processing unit (CPU) of the sender and the receiver, respectively, manages the bulk of TCP processing in accordance with industry-accepted TCP protocols. However, as the demand for the transfer of greater amounts of information at faster speeds has increased and as available bandwidth for transferring data has increased, CPUs have been forced to devote more processing time and power to the handling of TCP tasks—at the expense of other processes the CPU could be handling. “TCP Offload Engines” or TOEs, as they are often called, have been developed to relieve CPUs of handling TCP communications and tasks. TOEs are typically implemented as network adapter cards or as components on a network adapter card, which free up CPUs in the same system to handle other computing and processing tasks, which, in turn, speeds up the entire network. In other words, TCP tasks are “off-loaded” from the CPU to the TOE to improve the efficiency and speed of the network that employs such TOEs.
Conventional TOEs use a combination of hardware and software to handle TCP tasks. For example, TOE network adapter cards have software and memory installed thereon for processing TCP tasks. TOE application specific integrated circuits (ASICs) are also used for improved performance; however, ASICs typically handle TCP tasks using firmware/software installed on the chip and by relying upon and making use of readily-available external memory. Using such firmware and external memory necessarily limits the number of connections that can be handled simultaneously and imposes processing speed limitations due to transfer rates between separate components. Using state machines designed into the ASIC and relying upon the limited memory capability that can be integrated directly into an ASIC improves speed, but raises a number of additional TCP task management hurdles and complications if a large number of simultaneous connections are going to be managed efficiently and with superior speed characteristics.
For these and many other reasons, there is a need for systems and methods for improving TCP processing capabilities and speed, whether implemented in a TOE or a CPU environment.
There is a need for systems and methods of improving the speed of TCP communications, without sacrificing the reliability provided by TCP.
There is a need for systems and methods that take advantage of state machine efficiency for handling TCP tasks but in a way that remains compliant and compatible with conventional TCP systems and protocols.
There is a need for systems and methods that enable state machines implemented on one or more computer chips to handle on the order of 1,000s to 10,000s of simultaneous TCP connections at processing speeds exceeding 10 Gbit/s.
There is a need for a system using a hardware TOE device that is adapted to support the Selective ACK (SACK) option of the TCP protocol so that a source machine is able to cut back or minimize unnecessary retransmission. In other words, there is a need for a system in which the source machine retransmits only the missing segments, thereby avoiding or minimizing heavy network traffic.
There is yet a further need for a system or device having a hardware-based SACK tracking mechanism that is able to track and sort data segments at high speeds—within a few clock cycles.
There is also a need for a system in which the destination machine provides network convergence by limiting the total amount of data segments that the source machine can inject into the network when the destination machine is in “exception processing” mode, in which it needs to reorder incoming data segments before it hands off data to the application layer.
For these and many other reasons, there is a general need for a method of processing and reordering out-of-order TCP segments by a high-speed TCP receiving device having limited on-chip memory, wherein in-order TCP segments received from a TCP sending device are forwarded on to an appropriate application in communication with the TCP receiving device, comprising (i) storing a first out-of-order TCP segment in the limited on-chip memory of the high-speed TCP receiving device, the first out-of-order TCP segment defining a SACK region, (ii) determining the gap between a last-received in-order TCP segment and the SACK region, (iii) for each later-received out-of-order TCP segment that is contiguous with but non-cumulative with the SACK region, (a) storing said later-received out-of-order TCP segment in the limited on-chip memory of the high-speed TCP receiving device; and (b) expanding the SACK region to include said later-received out-of-order TCP segment, and (iv) when the gap between the last received in-order TCP segment and the SACK region is filled, forwarding each out-of-order TCP segment included within the SACK region on to the appropriate application.
There is also a need for a TCP offload engine for use in processing TCP segments in a high-speed data communications network, the TCP offload engine having an architecture integrated into a single computer chip, comprising: (i) a TCP connection processor for receiving incoming TCP segments, the TCP connection processor adapted to forward in-order TCP segments to an appropriate application in communication with the TCP offload engine, each in-order TCP segment having a sequence number, (ii) a memory component for storing contiguous but non-cumulative out-of-order TCP segments forwarded by the TCP connection processor, the out-of-order TCP segments defining a SACK region, wherein the SACK region is defined between a left edge and a right edge sequence number, and (iii) a database in communication with the TCP connection processor, the database storing the sequence number of the last-received in-order TCP segment and storing the left edge and right edge sequence numbers of the SACK region, wherein the SACK region is fed back to the TCP connection processor when the left edge of the SACK region matches up with the sequence number of the last received in-order TCP segment.
The present invention meets one or more of the above-referenced needs as described herein in greater detail.
The present invention relates generally to computer communication systems and protocols, and, more particularly, to methods and systems for high speed TCP communications using improved TCP Offload Engine (TOE) techniques and configurations. Briefly described, aspects of the present invention include the following.
In a first aspect of the present invention, a method of processing and reordering out-of-order TCP segments by a high-speed TCP receiving device having limited on-chip memory, wherein in-order TCP segments received from a TCP sending device are forwarded on to an appropriate application in communication with the TCP receiving device, comprises (i) storing a first out-of-order TCP segment in the limited on-chip memory of the high-speed TCP receiving device, the first out-of-order TCP segment defining a SACK region, (ii) determining the gap between a last-received in-order TCP segment and the SACK region, (iii) for each later-received out-of-order TCP segment that is contiguous with but non-cumulative with the SACK region, (a) storing said later-received out-of-order TCP segment in the limited on-chip memory of the high-speed TCP receiving device; and (b) expanding the SACK region to include said later-received out-of-order TCP segment, and (iv) when the gap between the last received in-order TCP segment and the SACK region is filled, forwarding each out-of-order TCP segment included within the SACK region on to the appropriate application.
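For illustration, the per-connection state implied by this method can be sketched in C as follows. The field names are hypothetical, not taken from this disclosure, and wraparound-safe sequence arithmetic is omitted.

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative per-connection reorder state held in the limited
     * on-chip memory; names are hypothetical. */
    struct reorder_state {
        uint32_t rcv_next;      /* sequence number expected next, i.e.,
                                   one past the last in-order byte       */
        uint32_t sack_left;     /* left edge of the single SACK region   */
        uint32_t sack_right;    /* right edge (plus one) of the region   */
        bool     out_of_order;  /* a SACK region is being held on-chip   */
        bool     back_pressure; /* local offer window is being closed    */
    };

    /* Step (ii): the gap that in-order data must fill before the stored
     * SACK region can be forwarded to the application in step (iv).     */
    static inline uint32_t reorder_gap(const struct reorder_state *s)
    {
        return s->out_of_order ? s->sack_left - s->rcv_next : 0;
    }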
In further features of the first aspect, the method further comprises discarding any out-of-order TCP segment that is merely cumulative with the SACK region, discarding any out-of-order TCP segment that is noncontiguous with the SACK region, and discarding any zero-payload TCP segments.
In other features, the method further comprises periodically sending a selective acknowledgment (SACK) back to the TCP sending device for the SACK region and periodically sending an acknowledgment (ACK) back to the TCP sending device for the last-received in-order TCP segment.
Generally, the gap between the last received in-order TCP segment and the SACK region is closed by receipt of an additional in-order TCP segment.
In another feature, the TCP segments of the SACK region are re-ordered using a connection link list chain.
Preferably, in various additional features, the SACK region is defined between a left edge and a right edge sequence number, and the later-received out-of-order TCP segment causes an update to the right edge sequence number, or an update to the left edge sequence number, or an update to both the left edge and right edge sequence numbers.
Preferably, during processing of out-of-order TCP segments by the TCP receiving device, the size of a local offer window of the TCP receiving device advertised to the TCP sending device is closed by an amount equivalent to the size of in-order TCP segments received thereafter.
Also preferably, after the step of forwarding each out-of-order TCP segment included within the SACK region on to the appropriate application, the size of the local offer window of the TCP receiving device advertised to the TCP sending device is returned to its default value.
In yet a further feature, a new TCP segment received during the step of forwarding each out-of-order TCP segment included within the SACK region on to the appropriate application is treated as a new first out-of-order TCP segment of a new SACK region.
In a second aspect of the present invention, a TCP offload engine for use in processing TCP segments in a high-speed data communications network, the TCP offload engine having an architecture integrated into a single computer chip, comprises: (i) a TCP connection processor for receiving incoming TCP segments, the TCP connection processor adapted to forward in-order TCP segments to an appropriate application in communication with the TCP offload engine, each in-order TCP segment having a sequence number, (ii) a memory component for storing contiguous but non-cumulative out-of-order TCP segments forwarded by the TCP connection processor, the out-of-order TCP segments defining a SACK region, wherein the SACK region is defined between a left edge and a right edge sequence number, and (iii) a database in communication with the TCP connection processor, the database storing the sequence number of the last-received in-order TCP segment and storing the left edge and right edge sequence numbers of the SACK region, wherein the SACK region is fed back to the TCP connection processor when the left edge of the SACK region matches up with the sequence number of the last received in-order TCP segment.
Preferably, the TCP connection processor sends acknowledgements for in-order TCP segments and sends selective acknowledgements for the SACK region to a TCP sending device from which the TCP segments are sent.
In a feature of the second aspect, the TCP offload engine further comprises an input buffer for receiving incoming TCP segments and pacing the TCP segments provided to the TCP connection processor.
Preferably, the memory component comprises a memory manager, a memory database, and a connection link list table.
In another feature, the TCP offload engine interfaces with a TCP microengine for processing of out-of-order TCP segments.
The present invention also encompasses computer-readable media having computer-executable instructions for performing methods of the present invention, and computer networks, state machines, and other hardware and software systems that implement the methods of the present invention.
The above features as well as additional features and aspects of the present invention are disclosed herein and will become apparent from the following description of preferred embodiments of the present invention.
Further features and benefits of the present invention will be apparent from a detailed description of preferred embodiments thereof taken in conjunction with the following drawings, wherein similar elements are referred to with similar reference numbers, and wherein:
In conventional TCP software systems accessed by a CPU, or in a conventional TOE device using firmware to perform out-of-order sorting, it is easy but relatively slow to manage the receipt of out-of-order segments and reorder the same prior to passing such data on to the relevant application. For example, it generally costs several hundred clock cycles to perform sorting of a segment in an out-of-order chain. In other words, a conventional system is only capable of running at a processing speed of approximately 1 gigabit per second (Gbit/s) when out-of-order sorting is enabled.
In contrast, the system of the present invention performs sorting directly in hardware, and the hardware uses messages to notify the microengine when it starts or ends a resorting process. The hardware requests that the microengine send the entire sorted data chain back to the hardware for resorting, without requiring firmware to perform such sorting. With this type of arrangement, the system of the present invention is capable of processing 10 Gbit per second or more.
In a first aspect of the present invention, a TCP receiver 400 portion of a high-speed TOE device that is adapted to receive and manage TCP segments received by a destination machine is illustrated in simplified block format in FIG. 4.
Preferably, the receiver 400 is configured to: (i) detect out-of-order segments; (ii) link reordered out-of-order segments in a connection-based link list chain; (iii) drop all zero-payload segments without chaining; (iv) capture and link reordered out-of-order non-zero-payload segments that belong to the “first” or “current” transmit SACK range only; (v) drop all zero-payload segments to minimize memory storage per connection; (vi) provide network convergence before the connection has fully recovered from out-of-order exception processing; and (vii) provide minimal memory usage for each TCP connection.
Thus, with reference still to FIG. 4, the receiver 400 includes an input buffer 410, a TCP connection processor 420, a segment data memory manager 430, a database 440, a link list table 445, and a microengine 490, and its operation proceeds as follows.
When the connection processor 420 receives a “first” out-of-order segment, the connection processor 420 first determines whether the out-of-order data segment has a sequence range that is within the current local offer window size. If so, then the out-of-order flag and local offer window back pressure flag variables 456, 458 are both activated. The TCP connection processor 420 sends an “out of order” message to the microengine 490. The microengine 490 then causes the data segment to be sent to the segment data memory manager 430, which stores the segment in database 440 and starts a link list chain in link list table 445. This chain represents a “first” or “current” SACK region. This region may be expanded, but no new SACK regions will be stored in memory, as discussed hereinafter. The left edge and right edge (plus one) sequence numbers of the out-of-order segment are also stored in their respective variable locations 452, 454.
If the out-of-order data segment has a sequence range that is beyond the current local offer window size, it is merely dropped or discarded. As will also be apparent, the offer window advertised by the receiver 420 will continue to slide (i.e., stay the same size) in conventional manner as long as segments are received and processed in-order. Once an out-of-order segment is received, however, the offer window will begin to close to ensure that the receiver 420 does not receive more segments than it can handle with its limited memory and forward on to the relevant application in-order.
Further, if the data segment has a zero-payload, it is also dropped. Each of these measures ensures that the limited memory available to the receiver 420 is used in an efficient manner.
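A minimal sketch of this admission logic for a “first” out-of-order segment, assuming the illustrative reorder_state sketched earlier, follows. Here store_and_start_chain() is a hypothetical stand-in for the database 440 write and the start of a chain in link list table 445, not an actual interface of the device.

    #include <stdint.h>
    #include <stdbool.h>

    struct reorder_state {                 /* as sketched earlier */
        uint32_t rcv_next, sack_left, sack_right;
        bool out_of_order, back_pressure;
    };

    /* Hypothetical stand-in for the memory-side work (database write
     * plus start of a new link list chain). */
    static void store_and_start_chain(uint32_t seq, uint32_t len)
    {
        (void)seq; (void)len;              /* memory-side work elided */
    }

    void on_first_out_of_order(struct reorder_state *s, uint32_t seq,
                               uint32_t len, uint32_t window_right_edge)
    {
        if (len == 0 || seq + len > window_right_edge)
            return;                        /* zero payload or beyond the
                                              local offer window: drop  */
        s->out_of_order  = true;           /* flag 456                  */
        s->back_pressure = true;           /* flag 458                  */
        s->sack_left     = seq;            /* variable 452              */
        s->sack_right    = seq + len;      /* variable 454 (edge + 1)   */
        store_and_start_chain(seq, len);
    }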
All in-order data segments received continue to be handled in the same manner as the first in-order data segment. Each in-order data segment is passed on to the application and the ACK sequence number is updated.
Any further out-of-order data segments are compared to the first or current SACK region. If an out-of-order segment is not contiguous with the current SACK region (i.e., a gap would remain between the sequence numbers of the new out-of-order segment and the sequence numbers of the current SACK region), or if it does not expand either the left edge or the right edge of the current chain, it is discarded. If the next out-of-order segment is contiguous with and expands the left edge of the current SACK region, the segment is stored in database 440, the SACK left edge variable 452 is updated, and the new segment is chained to the “head” of the current SACK region chain in the table 445. If the next out-of-order segment is contiguous with and expands the right edge of the current SACK region, the segment is stored in database 440, the SACK right edge variable 454 is updated, and the new segment is chained to the “tail” of the current SACK region chain in the table 445. This occurs unless adding such a segment to the chain would cause the offer window size to be exceeded; such a scenario should not arise unless the source machine sends data in excess of the offer window size, which is not permitted under TCP protocol. If the next out-of-order segment is contiguous with and expands both the right and the left edges of the current SACK region, the segment is stored in database 440, both the SACK left edge and right edge variables 452, 454 are updated, and the new segment is chained to the “head” of the current SACK region chain in the table 445.
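These comparison rules can be expressed as a small classifier, sketched below in C. The SACK region is modeled as a half-open byte range [left, right), where “contiguous” means the new segment overlaps or exactly abuts the region so that no gap would remain; sequence wraparound handling and the memory and link list writes are omitted.

    #include <stdint.h>
    #include <stdbool.h>

    struct sack_region { uint32_t left, right; };   /* [left, right) */

    enum sack_action {
        SACK_DROP,          /* non-contiguous, merely cumulative, or empty */
        SACK_EXPAND_LEFT,   /* chain to head of the current region chain   */
        SACK_EXPAND_RIGHT,  /* chain to tail of the current region chain   */
        SACK_EXPAND_BOTH    /* update both edges; chain to head            */
    };

    enum sack_action classify(const struct sack_region *r,
                              uint32_t seq, uint32_t len)
    {
        uint32_t end = seq + len;
        if (len == 0 || end < r->left || seq > r->right)
            return SACK_DROP;            /* zero payload, or a gap remains */
        bool grows_left  = seq < r->left;
        bool grows_right = end > r->right;
        if (grows_left && grows_right) return SACK_EXPAND_BOTH;
        if (grows_left)                return SACK_EXPAND_LEFT;
        if (grows_right)               return SACK_EXPAND_RIGHT;
        return SACK_DROP;                /* merely cumulative: discard     */
    }

    void on_later_out_of_order(struct sack_region *r,
                               uint32_t seq, uint32_t len)
    {
        switch (classify(r, seq, len)) {
        case SACK_DROP:         return;
        case SACK_EXPAND_LEFT:  r->left = seq;                       break;
        case SACK_EXPAND_RIGHT: r->right = seq + len;                break;
        case SACK_EXPAND_BOTH:  r->left = seq; r->right = seq + len; break;
        }
        /* store the segment in database 440 and update the chain in
         * link list table 445 here (omitted in this sketch). */
    }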
When all segments prior to the current SACK region have been received by the receiver 420, the out-of-order flag 456 is deactivated, which triggers a SACK region feedback process. During the SACK region feedback process, an “end of sorting” message is sent from the TCP connection processor 420 to the microengine 490, which then commands the memory manager 430 to transfer all data back to the input buffer 410 for processing again. More specifically, segments from the SACK region are retrieved in-order from the database 440, based on the proper sequence arrangement dictated by the link list table 445, and are fed back to the receiver 420 along re-ordered segment data feedback path 414. Each now-in-order segment is then passed on to the application in conventional manner by the receiver 420, and the ACK sequence number is updated for each segment so processed. During the feedback process, the offer window back pressure flag 458 remains active to prevent segment volume from overwhelming the receiver 420 before it has caught up with the feedback of the current SACK region, as will be explained in greater detail hereinafter. Once the feedback process is complete, and assuming a new SACK region was not created during the feedback process, the offer window back pressure flag is deactivated and the offer window size returns to its original value.
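The feedback pass can be sketched as a walk of the stored chain, as follows. Here deliver_to_application() and send_ack() are hypothetical stand-ins for the hand-off along feedback path 414 and the conventional ACK update.

    #include <stdint.h>
    #include <stdio.h>

    struct seg { uint32_t seq, len; struct seg *next; };  /* chain entry */

    /* Hypothetical stand-ins for the in-order hand-off and ACK update. */
    static void deliver_to_application(const struct seg *s)
    {
        printf("deliver [%u, %u)\n", s->seq, s->seq + s->len);
    }
    static void send_ack(uint32_t ack_seq) { printf("ACK %u\n", ack_seq); }

    /* Replays the SACK region chain, head to tail, once the gap before
     * it has been filled; returns the new cumulative ACK point.  The
     * back pressure flag stays active until this pass completes.       */
    uint32_t feed_back_sack_region(const struct seg *head, uint32_t rcv_next)
    {
        for (const struct seg *s = head; s != NULL; s = s->next) {
            deliver_to_application(s);   /* via feedback path 414       */
            rcv_next = s->seq + s->len;  /* advance the ACK point       */
            send_ack(rcv_next);
        }
        return rcv_next;
    }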
The above process will be more readily apparent with reference to several specific examples disclosed in a variety of ways through the remaining figures.
We will now explain what happens as each segment is received by the TCP receiver of the present invention. In this example, segments 1-3 are received in-order and are processed in conventional manner. At time 5-A, segment 10 is received out-of-order. Segment 10 data is stored in DDRAM, and a SACK region starting with segment 10 is started. Segments 4 and 5 are then received and, since they are the expected segments to follow segment 3, they are in-order and are processed normally. At time 5-B, segment 11 is received out-of-order. Segment 11 data is also stored in DDRAM, and the SACK region is updated to include segment 11 after segment 10 (i.e., the link list table is updated and segment 11 is attached to the tail of the existing chain). Segment 6 is then received in-order and processed normally. At time 5-C, segment 9 is received out-of-order. Even though segment 9 precedes the current chain comprised of segments 10 and 11, segment 9 is contiguous with the existing chain; thus, segment 9 data is also stored in DDRAM, and the SACK region is updated to include segment 9 ahead of segment 10 (i.e., the link list table is updated and segment 9 is attached to the head of the existing chain). Segment 7 is then received in-order and processed normally. Segments 12-14 are then received out-of-order when compared with the last in-order segment 7, and are treated like segment 11. Segments 12-14 are stored in DDRAM, and the SACK region is updated to include segments 12, 13, and 14 after segment 11 (i.e., the link list table is sequentially updated and segments 12-14 are sequentially attached to the tail of the existing SACK region chain). At time 5-D, segment 8 is received in-order. It is processed normally. The receiver then recognizes that the SACK region currently stored in DDRAM follows the last in-order segment (i.e., segment 8) received. The receiver initiates the feedback process and requests feedback of the segments, in-order, from DDRAM starting with segment 9. Before segments 9-14 have been completely processed by the receiver and forwarded to the relevant application, segments 15-18 are received at time 5-E. Segments 15-18 are considered to be out-of-order since segment 14 has not yet been fully processed as of time 5-E. Segments 15-18 are stored in DDRAM and treated as the new or current SACK region that is stored as a link list chain, since the previous SACK region chain of segments 9-14 was already “released” by the system when the feedback process was initiated.
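The following self-contained C sketch replays this arrival order, with each segment compressed to one sequence unit for readability. Feedback here is treated as instantaneous, so the late-arriving segments 15-18 of the example (which arrive while the real device is still replaying segments 9-14) are not modeled.

    #include <stdio.h>

    int main(void)
    {
        int arrivals[] = { 1, 2, 3, 10, 4, 5, 11, 6, 9, 7, 12, 13, 14, 8 };
        int expect = 1;              /* next in-order segment expected    */
        int left = 0, right = 0;     /* SACK region [left, right]; 0=none */

        for (unsigned i = 0; i < sizeof arrivals / sizeof *arrivals; i++) {
            int s = arrivals[i];
            if (s == expect) {                      /* in-order: pass on  */
                expect++;
                if (left != 0 && expect == left) {  /* gap now closed     */
                    printf("gap closed: feed back segments %d-%d\n",
                           left, right);
                    expect = right + 1;             /* region consumed    */
                    left = right = 0;
                }
            } else if (left == 0) {
                left = right = s;                   /* first out-of-order */
            } else if (s == left - 1) {
                left = s;                           /* chain to head      */
            } else if (s == right + 1) {
                right = s;                          /* chain to tail      */
            }                                       /* else: dropped      */
            printf("seg %2d -> expect %2d, SACK region [%d, %d]\n",
                   s, expect, left, right);
        }
        return 0;
    }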
On the right side of the chart/table 700 of FIG. 7 is a table 750, each row of which shows how the system would handle a particular received segment.
At the lower left side of the chart/table 700 are a plurality of potential segments that could be received. The impact of each such segment is shown by its effect on the data in each column of table 750 in the corresponding row. It is assumed that in-order segments 704 and out-of-order SACK region 706 have already been received by the system and that only that particular segment is then received by the system. For example, if segment 734 (which includes non-cumulative data at the left edge of the SACK region and some cumulative data) were to be received by the system, it would be processed as shown in row 764 of table 750. As shown, if segments 734, 736, or 738 were to be received by the system, they would be handled in the same manner—the left edge sequence number would be updated, the segment data would be stored in memory, and it would be appended to the head of the current SACK region in the link list. If segment 742, 744, or 746 were to be received by the system (again, assuming only blocks 704 and 706 had been previously received), they would be handled as shown in rows 766 of table 750—the right edge sequence number would be updated, the segment data would be stored in memory, and it would be appended to the tail of the current SACK region in the link list. If segment 782 were to be received (again, assuming only blocks 704 and 706 had been previously received), it would merely be dropped or discarded since it is cumulative with the current SACK region 706. Segment 784 would be handled in the same manner as segment 782 since it provides no additional information (and even less information than segment 782) that is not already contained in SACK region 706. If segment 748 were to be received (again, assuming only blocks 704 and 706 had been previously received), as shown in row 768 of table 750, both the right edge and left edge sequence numbers would be updated, the segment data would be stored in memory, and it would be appended to the head of the current SACK region in the link list, even though it contains some data that is cumulative with the current SACK region 706. If segment 786 or 788 were to be received (again, assuming only blocks 704 and 706 had been previously received), they would simply be dropped because they are not contiguous with the current SACK region 706. Segments 790 illustrate zero-payload segments received out-of-order. Such segments are merely dropped or discarded to avoid tying up processing time of the TCP receiver and limited memory space. Finally, once segment (or group of segments) 792 is received (again, assuming only blocks 704 and 706 had been previously received), such segment is processed as an in-order segment and the feedback process is started to retrieve SACK region 706 from memory. As shown in row 770 of table 750, the out-of-order flag is deactivated and the segments of the current SACK region 706 are forwarded in-order to the receive processor to be handled as in-order segments.
Turning now to FIG. 10, the behavior over time of the out-of-order flag 1010 and the local offer window back pressure flag 1020 during exception processing is illustrated.
Time block 1030 shows that the feedback process is still underway, which causes the local offer window flag 1020 to remain activated. At time t3, while the previous SACK region is still being processed through feedback, a new out-of-order segment is received. Even though this segment may be in-order immediately after the previous SACK region, it is treated as out-of-order because the feedback process has not yet completed. This starts a new, current SACK region and causes the out-of-order flag 1010 to reactivate. At time t4, all segments from the original out-of-order chain have finished the feedback process. The current SACK region then begins its own feedback process. The out-of-order flag 1010 deactivates, but the back pressure window flag 1020 remains activated because of the on-going feedback process, as indicated by block 1030. Finally, at time t5, the feedback process is complete, as shown by block 1030. The back pressure window flag 1020 is deactivated and the local offer window returns to its normal advertised size.
It should be apparent to those skilled in the art that this process will converge, as the local offer window is closed and the remote side does not have any new window available to transmit new data segments. Normally, the loop back path or feedback process is significantly faster than the receipt and processing of new data received from the source machine at a physical input port. Use of the back pressure flag 1020 to cause the offer window to close, however, ensures that the system will converge and that the receiver will not be overloaded with incoming segments before it can process out-of-order segments in the single SACK region that is being stored by the system.
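This convergence argument can be made concrete with a small sketch of the advertised window computation while back pressure is active. The names and the exact closing rule below are illustrative assumptions consistent with the description above, not the device's actual implementation.

    #include <stdint.h>
    #include <stdbool.h>

    /* While the back pressure flag is active, the advertised local offer
     * window shrinks by the bytes of in-order data accepted since the
     * out-of-order exception began, bounding what the source machine can
     * inject until the feedback process completes. */
    uint32_t advertised_window(uint32_t default_window,
                               uint32_t in_order_bytes_since_exception,
                               bool back_pressure)
    {
        if (!back_pressure)
            return default_window;       /* normal sliding offer window */
        if (in_order_bytes_since_exception >= default_window)
            return 0;                    /* window fully closed         */
        return default_window - in_order_bytes_since_exception;
    }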
The above process is further illustrated by the example shown in FIG. 11.
The graph 1100 of FIG. 11 illustrates this convergence behavior over time.
In-order segments 1-8 are handled in the same manner as was described in association with FIG. 5.
In view of the foregoing detailed description of preferred embodiments of the present invention, it will readily be understood by those persons skilled in the art that the present invention is susceptible to broad utility and application. While various aspects have been described in the context of a preferred embodiment, additional aspects, features, and methodologies of the present invention will be readily discernible therefrom. Many embodiments and adaptations of the present invention other than those herein described, as well as many variations, modifications, and equivalent arrangements and methodologies, will be apparent from or reasonably suggested by the present invention and the foregoing description thereof, without departing from the substance or scope of the present invention. Furthermore, any sequence(s) and/or temporal order of steps of various processes described and claimed herein are those considered to be the best mode contemplated for carrying out the present invention. It should also be understood that, although steps of various processes may be shown and described as being in a preferred sequence or temporal order, the steps of any such processes are not limited to being carried out in any particular sequence or order, absent a specific indication of such to achieve a particular intended result. In most cases, the steps of such processes may be carried out in a variety of different sequences and orders, while still falling within the scope of the present invention. In addition, some steps may be carried out simultaneously. Accordingly, while the present invention has been described herein in detail in relation to preferred embodiments, it is to be understood that this disclosure is only illustrative and exemplary of the present invention and is made merely for purposes of providing a full and enabling disclosure of the invention. The foregoing disclosure is not intended, nor is it to be construed, to limit the present invention or otherwise to exclude any other embodiments, adaptations, variations, modifications, and equivalent arrangements, the present invention being limited only by the claims appended hereto and the equivalents thereof.
This application claims the benefit under 35 U.S.C. § 119(e) of U.S. provisional patent application No. 60/583,310, entitled “TOE METHODS AND SYSTEMS,” filed Jun. 28, 2004, which is incorporated herein in its entirety by reference.