The present invention relates to information technology systems that receive and process electronic messages. The present invention more particularly relates to restoring an arrival order of a plurality of packets after receipt and processing of the plurality of packets by an information technology system.
Electronic messages, such as those transmitted over the Internet and conforming to the Transmission Control Protocol and the Internet Protocol, or “TCP/IP”, are often separated into data packets. In the prior art, a plurality of packets may be derived from a same originating electronic message and may be transmitted over the Internet as a data flow to a receiving network address. Each data packet may include a header, wherein the header of each data packet identifies the data flow that comprises the plurality of packets generated from the same originating electronic message.
A receiving computer will receive the data packets of a same data flow in an arrival order and then often attempt to process the associated plurality of data packets in the arrival order. In particular, in the prior art, when a receiving computer forwards on a plurality of packets of a same data flow, the receiving computer will typically be programmed to transmit the data packets of this data flow in the same order in which the data packets were received. As packets may be processed at different rates within the receiving computer, prior art methods require that memory resources be committed to collecting the packets of a data flow in a dedicated queue, and then transmitting the assembled data packets from each separate queue only after each and every packet of the same data flow has been written into that dedicated memory queue. There is, therefore, a long felt need to reduce the memory resources used by a receiving computer in processing data packets and forwarding data packets to another network element.
The header of a data packet, such as a data packet conforming to the Internet Protocol, may further specify a data type of a payload of the packet, a packet number, a total number of packets comprising a data flow, and a packet sender's network address and an intended receiver's network address.
In particular, a data packet conforming to the Internet Protocol message format (hereafter, “IP packet”) may include (a.) 4 bits that specify the Internet Protocol version to which the IP packet conforms, e.g., version 4 or version 6 of the Internet Protocol; (b.) 4 bits that specify the length of the header; (c.) 8 bits that identify the Quality of Service (or “QoS”), i.e., the priority at which the packet should be processed; (d.) 16 bits that specify the length of the comprising packet in bytes; (e.) 13 bits that contain a fragment offset, i.e., a field that identifies to which fragment the comprising packet is attached; (f.) 8 bits that identify a communications protocol to which the comprising packet conforms, e.g., the Transmission Control Protocol, the User Datagram Protocol, or the Internet Control Message Protocol; (g.) 32 bits that contain a source Internet Protocol address, or “IP address”; and (h.) 32 bits that contain an intended destination IP address.
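The header fields enumerated above can be illustrated with a short parsing sketch. The following hypothetical Python example unpacks the fixed 20-byte portion of an IPv4 header per RFC 791; the function name and dictionary keys are illustrative and are not part of the disclosed system.

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte portion of an IPv4 header (RFC 791).

    Illustrative sketch only; the letters in the comments refer to
    items (a.) through (h.) in the description above.
    """
    if len(raw) < 20:
        raise ValueError("an IPv4 header is at least 20 bytes")
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBHII", raw[:20])
    return {
        "version": ver_ihl >> 4,                 # (a.) 4-bit IP version
        "header_bytes": (ver_ihl & 0x0F) * 4,    # (b.) 4-bit header length, in 32-bit words
        "qos": tos,                              # (c.) 8-bit Quality of Service field
        "total_length": total_len,               # (d.) 16-bit packet length in bytes
        "fragment_offset": flags_frag & 0x1FFF,  # (e.) 13-bit fragment offset
        "protocol": proto,                       # (f.) 8-bit protocol (6=TCP, 17=UDP, 1=ICMP)
        "src_ip": src,                           # (g.) 32-bit source IP address
        "dst_ip": dst,                           # (h.) 32-bit destination IP address
    }
```

A receiving computer would apply such a decoding step to every arriving packet before associating the packet with its data flow.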
A receiving computer is thus provided information within each IP packet that is useful in processing and associating the plurality of packets comprised within a same data flow. In the processing of a data flow, a first number of received packets may be analyzed and processed by a slower process than the remaining packets. This transition from a high latency path to a lower latency path may occur because of determinations made by the receiving computer after the first number of received packets are analyzed and processed. There may thus be only one transition in the latency period in processing the plurality of packets of a data flow, yet this latency may cause the processed data packets of a same data flow to be organized within the receiving computer out of the arrival order of the data packets. The prior art teaches that a receiving computer must devote computational resources to restoring the processed data packets of a same data flow into the arrival order of the received data packets. Yet the prior art fails to best reduce the amount of memory that must be applied to restore the processed packets into the arrival order.
It is an object of the present invention to provide a method to transmit a plurality of packets of a same data flow by a network element in an order in which the network element received the data packets.
It is a further optional object of the present invention to provide a system that is configured to reduce the computational resources applied to transmit a plurality of packets of a same data flow in the same order that the plurality of packets were received.
Additional objects and advantages of the present invention will be set forth in the description that follows, and in part will be obvious from the description, or may be learned by practice of the present invention. The objects and advantages of the present invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
Towards this object and other objects that will be made obvious in light of this disclosure, a first version of the present invention provides a system for restoring the arrival order of a plurality of packets after receipt of the packets and prior to a retransmission of the plurality of packets.
The invented system is configured to process a first number of packets through a high latency path, and then process all remaining packets through a lower latency path. The received packets are stored after processing in a queue memory until either (a.) all of the packets directed to the high latency path are fully processed through the high latency path, or (b.) a time period of packet processing has expired. The time period is specified to avoid indefinitely storing the packets when a packet is dropped by the high latency path.
The packets stored in the queue are transmitted from the invented system in the order in which the packets were received by the invented system, and any additional data packets of the data flow are thereafter retransmitted without storage in the queue memory.
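As a hypothetical sketch of this ordering discipline, the queue memory can be modeled as holding each processed packet under its order-of-arrival marker and releasing all held packets in marker order once the high latency path drains or the guard timer expires. The class name, method names and structure below are illustrative, not the disclosed hardware design.

```python
import time

class ReorderQueue:
    """Hold processed packets of one data flow, then release them in
    arrival order.  Illustrative sketch only."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self._start = time.monotonic()
        self._held = {}  # order-of-arrival marker -> processed packet

    def store(self, arrival_marker: int, packet: bytes) -> None:
        self._held[arrival_marker] = packet

    def ready_to_flush(self, slow_path_done: bool) -> bool:
        # Flush when every high-latency-path packet has finished, or when
        # the guard timer fires (so a dropped packet cannot hang the queue).
        return slow_path_done or (time.monotonic() - self._start) >= self.timeout_s

    def flush(self) -> list:
        # Emit held packets sorted by arrival marker, restoring the
        # arrival order of the data flow.
        packets = [pkt for _, pkt in sorted(self._held.items())]
        self._held.clear()
        return packets
```

Once flushed, such a queue could be reassigned to another data flow, with later packets of the original flow bypassing it entirely.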
In certain alternate embodiments of the method of the present invention, a buffer memory resource of a network computer that is communicatively coupled with an electronics communications network includes one or more of the following aspects: (a.) determining a maximum packet latency T of a processing of a packet; (b.) determining a maximum data rate G of the network computer; (c.) assigning a memory buffer for temporarily storing data packets of a capacity in the range of 0.8 to 1.2 of the result of T times G; (d.) determining a maximum rate R of a single data flow; (e.) determining a quantity Q of resource queues; and/or (f.) assigning a memory buffer for temporarily storing data packets of a capacity in the range of 0.8 to 1.2 of the result of T times Q times R.
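The two sizing rules of aspects (a.)-(c.) and (d.)-(f.) amount to multiplying a worst-case latency by a data rate and scaling by a design factor. A minimal sketch follows, with hypothetical function names and all quantities assumed to be in mutually consistent units (e.g., seconds and bits per second):

```python
def aggregate_buffer_capacity(t_max: float, g_rate: float,
                              design_factor: float = 1.0) -> float:
    """Aspects (a.)-(c.): capacity ~= F * T * G, where T is the maximum
    packet latency, G is the maximum data rate of the network computer,
    and F is a design factor within the disclosed 0.8-1.2 range."""
    if not 0.8 <= design_factor <= 1.2:
        raise ValueError("design factor outside the disclosed 0.8-1.2 range")
    return design_factor * t_max * g_rate

def per_flow_buffer_capacity(t_max: float, q_queues: int, r_rate: float,
                             design_factor: float = 1.0) -> float:
    """Aspects (d.)-(f.): capacity ~= F * T * Q * R, where Q is the
    quantity of resource queues and R is the maximum rate of a single
    data flow."""
    if not 0.8 <= design_factor <= 1.2:
        raise ValueError("design factor outside the disclosed 0.8-1.2 range")
    return design_factor * t_max * q_queues * r_rate
```

For example, a 0.5 second maximum latency at an 8 Gbit/s aggregate rate would call for roughly a 4 Gbit buffer before applying the design factor.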
The foregoing and other objects, features and advantages will be apparent from the following description of the preferred embodiment of the invention as illustrated in the accompanying drawings.
All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference.
United States Patent Application Publication No. 20030229839 (Inventors: Wang, Xiaolin, et al.; published Dec. 11, 2003) entitled “Method of and apparatus for protecting against and correcting errors in data packet flow streams in closed ring sequential address generators and the like and in other data pack flow paths, without data flow stream interruption”; United States Patent Application Publication No. 20040141510 (Inventors: Blanc, Alain, et al.; published Jul. 22, 2004) entitled “CAM based system and method for re-sequencing data packets”; United States Patent Application Publication No. 20060159104 (Inventors: Nemirovsky, Mario, et al.; published Jul. 20, 2006) entitled “Queueing system for processors in packet routing operation”; United States Patent Application Publication No. 20060153197 (Inventors: Nemirovsky, Mario, et al.; published Jul. 13, 2006) entitled “Queueing system for processors in packet routing operations”; United States Patent Application Publication No. 20060036705 (Inventors: Musoll, Enrique, et al.; published Feb. 16, 2006) entitled “Method and apparatus for overflowing data packets to a software-controlled memory when they do not fit into a hardware-controlled memory”; United States Patent Application Publication No. 20080050118 (Inventors: Haran, Onn, et al.; published Feb. 28, 2008) entitled “Methods and Systems for Bandwidths Doubling in an Ethernet Passive Optical Network”; and United States Patent Application Publication No. 20060064508 (Inventors: Panwar, Ramesh, et al.; published Mar. 23, 2006) entitled “Method and system to store and retrieve message packet data in a communications network” are incorporated herein by reference in their entirety and for all purposes.
In addition, U.S. Pat. No. 7,360,217 (Inventors: Melvin, et al.; issued Apr. 15, 2008) entitled “Multi-threaded packet processing engine for stateful packet processing”; U.S. Pat. No. 7,039,851 (Inventor: Wang, et al.; issued May 2, 2006) entitled “Method of and apparatus for correcting errors in data packet flow streams as in closed ring sequential address generators and the like without data flow stream interruption”; U.S. Pat. No. 6,859,824 (Inventors: Yamamoto, et al.; issued Feb. 22, 2005) entitled “Storage system connected to a data network with data integrity”; and U.S. Pat. No. 6,456,782 (Inventors: Kubota, et al.; issued Sep. 24, 2002) entitled “Data processing device and method for the same” are incorporated herein by reference in their entirety and for all purposes.
These, and further features of the invention, may be better understood with reference to the accompanying specification and drawings depicting the preferred embodiment, in which:
In describing the preferred embodiments, certain terminology will be utilized for the sake of clarity. Such terminology is intended to encompass the recited embodiment, as well as all technical equivalents, which operate in a similar manner for a similar purpose to achieve a similar result.
Referring now generally to the Figures and particularly to
A plurality of message servers 8 transmit electronic messages through the network 4 that are meant for delivery to elements of the internal network 6. These messages may be or comprise Internet Protocol packets P.1-P.X that conform to the Internet Protocol version four or version six.
Referring now generally to the Figures and particularly to
The plurality of Internet Protocol packets P.1-P.X, or “IP packets” P.1-P.X, comprise a data flow transmitted from one or more message servers 8 of the network 4. A control logic 12 directs the operations of a packet sequencer 14, a fast path logic 16, a slow path logic 18, a plurality of packet queues 20, an egress interface 22 and a system memory 13. The egress interface 22 bi-directionally communicatively couples the network computer with the internal network 6. The control logic 12 may be or comprise an application-specific integrated circuit, a microcontroller and/or a microprocessor programmed or configured to direct the elements 10-24 to process the IP packets P.1-P.X in accordance with the method of the present invention. The system memory 13 contains system software that directs and enables the control logic 12 to program and/or configure the fast path logic 16, the slow path logic 18 and other elements 10, 14, 20 & 22 to process the IP packets P.1-P.X in accordance with the method of the present invention.
The fast path logic 16 and/or the slow path logic 18 may be or comprise random access memory, programmable logic devices, and/or firmware. The control logic 12 may direct each received packet P.1-P.X from the network interface 10 to either the fast path logic 16 or the sequencer 14. The sequencer 14 adds order of arrival markers to each IP packet P.1-P.X directed by the control logic 12 to the sequencer 14.
Each IP packet P.1-P.X sent to the queue 20 flows through the sequencer 14 and either the fast path logic 16 or the slow path logic 18. The control logic 12 may direct the fast path logic 16 to process and transfer IP packets P.1-P.X to either the egress interface 22 or the queue 20. The plurality of queues 20 comprises reprogrammable memory circuits, such as random access memory. The control logic 12 may additionally direct the slow path logic 18 to process and transfer IP packets P.1-P.X to the egress interface 22.
Referring now generally to the Figures, and particularly to
When the control logic 12 determines in step 3.4 that a fast path logic 16 is available for IP packets P.1-P.X of the first data flow D, the network computer 2 proceeds from step 3.4 to step 3.6. The network computer 2 determines in step 3.6 whether a memory queue 20 is assigned and available to temporarily store IP packets P.1-P.X of the first data flow D. When the network computer 2 determines in step 3.6 that a memory queue 20 is assigned and available to temporarily store IP packets P.1-P.X of the first data flow D, the network computer 2 proceeds from step 3.6 to step 3.8, wherein the sequencer 14 assigns an order of arrival marker to the IP packet P.1. Proceeding from step 3.8 to step 3.10, the network computer 2 processes the packet P.1 through the fast path logic 16 in step 3.10 and then, in step 3.12, stores the processed packet P.1 within the assigned and available queue 20.
When the network computer 2 determines in step 3.6 that a memory queue 20 is not available to temporarily store IP packets P.1-P.X of the first data flow D, the network computer 2 proceeds from step 3.6 to step 3.14 and processes the packet P.1 through the fast path logic 16. The network computer 2 proceeds from step 3.14 to step 3.16, wherein the IP packet P.1 is transmitted to the egress interface 22. It is understood that the IP packet P.1 may be transmitted by the network computer 2 to the internal network 6 after the transfer of the IP packet P.1 to the egress interface 22.
The network computer 2 proceeds from either step 3.12 or step 3.16 to step 3.18, wherein the network computer 2 determines whether to continue processing IP packets P.1-P.X or to proceed on to other operations of step 3.20.
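The branch taken in steps 3.6 through 3.16 can be sketched as a dispatch function. The callables below are hypothetical stand-ins for the sequencer 14, the fast path logic 16, the queue 20 and the egress interface 22; the function name and parameter names are illustrative only.

```python
from typing import Any, Callable

def handle_fast_path_packet(
    packet: bytes,
    queue_available: bool,
    mark: Callable[[bytes], Any],      # sequencer 14: attach arrival marker
    process: Callable[[Any], Any],     # fast path logic 16
    store: Callable[[Any], None],      # assigned queue 20
    transmit: Callable[[Any], None],   # egress interface 22
) -> None:
    """Sketch of steps 3.6-3.16 of the disclosed flow; illustrative only."""
    if queue_available:
        # Steps 3.8-3.12: mark with an order of arrival marker, process
        # through the fast path, and hold in the assigned queue.
        store(process(mark(packet)))
    else:
        # Steps 3.14-3.16: process through the fast path and transmit
        # to egress without queueing.
        transmit(process(packet))
```

In the queue-available branch the packet waits for the flush described later; otherwise it bypasses the queue entirely.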
Referring now generally to the Figures and particularly to
In step 4.10 a sequence number is added to, or associated with, the IP packet P.1-P.X examined in step 4.2, and in step 4.12 the IP packet P.1-P.X of steps 4.2 and 4.10 is transmitted to, and processed by, the slow path logic 18. In step 4.14 the IP packet is stored in the queue 20 assigned to and available for storage of IP packets P.1-P.X. Optionally, the sequence number assigned in step 4.10 may be stored in the assigned queue 20 or otherwise made available to the control logic 12.
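Steps 4.10 through 4.14 can be sketched as follows, with a plain list standing in for the assigned queue 20 and a callable standing in for the slow path logic 18; all names are hypothetical.

```python
from typing import Callable

def handle_slow_path_packet(packet: bytes, seq_number: int,
                            slow_path: Callable[[bytes], bytes],
                            queue: list) -> int:
    """Sketch of steps 4.10-4.14: associate a sequence number with the
    packet (step 4.10), process it through the slow path (step 4.12),
    and store the result together with its sequence number in the
    assigned queue (step 4.14, per the optional aspect above).
    Returns the next sequence number to assign."""
    processed = slow_path(packet)
    queue.append((seq_number, processed))
    return seq_number + 1
```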
Referring now generally to the Figures and particularly to
Receipt of an IP packet P.1-P.X having a sequence number of N−1 therefore indicates that all IP packets P.1-P.X assigned for processing by the slow path logic 18 have been fully processed and written into the assigned queue 20, and that the queue 20 may now write its contents to the egress interface 22, whereupon the queue 20 may be assigned to store packets of another data flow.
When the sequence number of the IP packet P.1-P.X is determined in step 5.2 to be equal to N−1, the control logic 12 directs the queue 20 in step 5.6 to write its contents to the egress interface 22. In step 5.8, the control logic 12 then directs the queue 20 to be closed to and unavailable for receiving any remaining or additional IP packets P.1-P.X of the data flow D. In other words, after the execution of step 5.8, all remaining IP packets P.1-P.X of the data flow D are transmitted directly from the network interface 10 to the fast path logic 16, and from the fast path logic 16 to the egress interface 22 without mediation by the sequencer 14 or the queue 20.
When the sequence number of the IP packet P.1-P.X is determined in step 5.2 to not be equal to N−1, the control logic 12 compares the time counter TF with a maximum time value V in step 5.10. It is understood that the time value V is set such that a failure to detect completed processing of all IP packets P.1-P.X assigned to the slow logic path, within a time period starting from the first assignment of the queue 20 to the data flow D, indicates that one or more IP packets P.1-P.X assigned for processing by either the slow path logic 18 or the fast path logic 16 have been dropped in processing by the network computer 2. Step 5.10 thereby guards against hanging up the queue 20 and the network computer 2 when an assigned IP packet P.1-P.X is dropped by either the slow path logic 18 or the fast path logic 16.
When the time counter TF is determined by the network computer 2 in step 5.10 to exceed the maximum time value V, the network computer 2 proceeds from step 5.10 to step 5.6 and processes steps 5.6 and 5.8 as discussed above.
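The two flush triggers of steps 5.2 and 5.10 reduce to a single predicate: either the last slow-path sequence number (N−1) has arrived, or the time counter TF has exceeded the maximum time value V. A minimal sketch, with hypothetical names:

```python
def should_flush_queue(seq_number: int, n_slow_packets: int,
                       time_counter: float, max_time_v: float) -> bool:
    """Sketch of the decisions of steps 5.2 and 5.10: flush the queue
    when the packet numbered N-1 (the last of N slow-path packets)
    arrives, or when the time counter TF exceeds the maximum value V,
    guarding against a dropped packet hanging the queue."""
    last_packet_arrived = (seq_number == n_slow_packets - 1)  # step 5.2
    timed_out = time_counter > max_time_v                     # step 5.10
    return last_packet_arrived or timed_out
```

Either outcome leads to the queue writing its contents to the egress interface and being closed to the data flow.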
The network computer 2 proceeds from step 5.8 or step 5.10 to return to step 3.0.
Referring now generally to the Figures and particularly to
In optional step 6.8 the memory allocation capacity NQ may be modified by multiplication with a design factor F. The design factor F is preferably approximately equal to the unity value of one and within the range of values 0.95 to 1.05. Alternatively, the design factor F may be a value selected from the value range of 0.8 to 1.20.
Referring now generally to the Figures and particularly to
In optional step 7.8 the memory allocation capacity NQ may be modified by multiplication with a design factor F. The design factor F is preferably approximately equal to the unity value of one and within the range of values 0.95 to 1.05. Alternatively, the design factor F may be a value selected from the value range of 0.8 to 1.20.
Referring now generally to the Figures and particularly to
A memory capacity allocation NQ is calculated in step 8.10 as equal to the product of multiplying the maximum time latency T with the quantity Q of resource queues and with the maximum rate R of a single data flow.
In optional step 8.10 the memory allocation capacity NQ may be modified by multiplication with a design factor F. The design factor F is preferably approximately equal to the unity value of one and within the range of values 0.95 to 1.05. Alternatively, the design factor F may be a value selected from the value range of 0.8 to 1.20.
The foregoing disclosures and statements are illustrative only of the Present Invention, and are not intended to limit or define the scope of the Present Invention. Although the examples given include many specificities, they are intended as illustrative of only certain possible embodiments of the Present Invention. The examples given should only be interpreted as illustrations of some of the preferred embodiments of the Present Invention, and the full scope of the Present Invention should be determined by the appended claims and their legal equivalents. Those skilled in the art will appreciate that various adaptations and modifications of the just-described preferred embodiments can be configured without departing from the scope and spirit of the Present Invention. Therefore, it is to be understood that the Present Invention may be practiced other than as specifically described herein. The scope of the Present Invention as disclosed and claimed should, therefore, be determined with reference to the knowledge of one skilled in the art and in light of the disclosures presented above.