DYNAMICALLY ASSIGNING PACKET FLOWS

Information

  • Patent Application
    20160344634
  • Date Filed
    May 24, 2016
  • Date Published
    November 24, 2016
Abstract
In general, in one aspect, the disclosure describes a method that includes accessing data of an egress packet belonging to a flow and storing data associating the flow with at least one queue based on a source of the data of the egress packet. The method also includes accessing an ingress packet belonging to the flow, performing a lookup of the at least one queue associated with the flow, and enqueueing data of the ingress packet to the at least one queue associated with the flow.
Description
BACKGROUND

Networks enable computers and other devices to communicate. For example, networks can carry data representing video, audio, e-mail, and so forth. Typically, data sent across a network is carried by smaller messages known as packets. By analogy, a packet is much like an envelope you drop in a mailbox. A packet typically includes a “payload” and a “header”. The packet's “payload” is analogous to the letter inside the envelope. The packet's “header” is much like the information written on the envelope itself. The header can include information to help network devices handle the packet appropriately.


A number of network protocols cooperate to handle the complexity of network communication. For example, a transport protocol known as Transmission Control Protocol (TCP) provides “connection” services that enable remote applications to communicate. TCP provides applications with simple mechanisms for establishing a connection and transferring data across a network. Behind the scenes, TCP transparently handles a variety of communication issues such as data retransmission, adapting to network traffic congestion, and so forth.


To provide these services, TCP operates on packets known as segments. Generally, a TCP segment travels across a network within (“encapsulated” by) a larger packet such as an Internet Protocol (IP) datagram. Frequently, an IP datagram is further encapsulated by an even larger packet such as an Ethernet frame. The payload of a TCP segment carries a portion of a stream of data sent across a network by an application. A receiver can restore the original stream of data by reassembling the received segments. To permit reassembly and acknowledgment (ACK) of received data back to the sender, TCP associates a sequence number with each payload byte.
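By way of illustration only, the encapsulation described above can be sketched with simplified C header structures. These are not the authoritative protocol definitions: they omit options, ignore network byte order and alignment, and abbreviate the TCP flag fields.

#include <stdint.h>

/* Simplified layer 2-4 headers. Real code must handle network
 * (big-endian) byte order, header options, and alignment. */
struct eth_hdr {                /* Ethernet frame header */
    uint8_t  dst[6], src[6];
    uint16_t ethertype;         /* 0x0800 indicates an IPv4 datagram */
};

struct ipv4_hdr {               /* IP datagram header (sans options) */
    uint8_t  ver_ihl, tos;
    uint16_t total_len, id, frag_off;
    uint8_t  ttl, protocol;     /* 6 indicates a TCP segment */
    uint16_t checksum;
    uint32_t src_addr, dst_addr;
};

struct tcp_hdr {                /* TCP segment header (sans options) */
    uint16_t src_port, dst_port;
    uint32_t seq;               /* sequence number of first payload byte */
    uint32_t ack;               /* next byte expected from the peer */
    uint16_t off_flags, window; /* data offset and flags, abbreviated */
    uint16_t checksum, urgent;
};

An Ethernet frame carrying a TCP/IP packet lays these out back to back: eth_hdr, then ipv4_hdr, then tcp_hdr, then the payload bytes numbered by seq.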


Many computer systems and other devices feature host processors (e.g., general purpose Central Processing Units (CPUs)) that handle a wide variety of computing tasks. Often these tasks include handling network traffic such as TCP/IP connections. The increases in network traffic and connection speeds have placed growing demands on host processor resources. To at least partially alleviate this burden, some have developed TCP Off-load Engines (TOEs) dedicated to off-loading TCP protocol operations from the host processor(s).





BRIEF DESCRIPTION OF THE DRAWINGS


FIGS. 1A-1C illustrate assignment of packet flows.



FIG. 2 is a diagram of a network interface controller.



FIGS. 3 and 4 are flow-charts of packet receive and transmit operations.



FIG. 5 is a diagram of a computer system.





DETAILED DESCRIPTION

As described above, increases in network traffic and connection speeds have increased the burden of packet processing on host systems. In short, more packets need to be processed in less time. Fortunately, processor speeds have continued to increase, partially absorbing these increased demands. Improvements in the speed of memory, however, have generally failed to keep pace. Each memory operation performed during packet processing represents a potential delay as a processor waits for the memory operation to complete. For example, in Transmission Control Protocol (TCP), the state of each connection is stored in a block of data known as a TCP control block (TCB). Many TCP operations require access to a connection's TCB. Frequent memory accesses to retrieve TCBs can substantially degrade system performance. One way to improve system performance is to keep TCB and other connection related data in a processor cache that stores a quickly accessible copy of data. In a multi-processor system, however, the TCB of a connection may, potentially, be accessed by different processors. Efforts to maintain consistency in the TCB data (e.g., cache invalidation and locking) while the different agents vie for access may undermine the efficiency of caching.
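To make this concrete, the sketch below shows a hypothetical, heavily abbreviated TCB in C; a real TCB carries far more state (timers, retransmission queues, window scaling, and so forth). The point is only that this per-connection block is touched on nearly every TCP operation.

#include <stdint.h>

/* Hypothetical, abbreviated per-connection state ("TCB"). */
struct tcb {
    uint32_t local_ip, remote_ip;
    uint16_t local_port, remote_port;
    uint32_t snd_nxt;         /* next sequence number to send */
    uint32_t rcv_nxt;         /* next sequence number expected */
    uint32_t cwnd, ssthresh;  /* congestion control state */
};

/* If every packet of a given connection is handled by the same
 * processor, that processor's cache tends to keep the connection's
 * struct tcb warm, and no cross-processor locking or cache
 * invalidation traffic is needed to keep the block consistent. */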



FIG. 1A shows a system that delivers received packets belonging to the same flow to the same destination. This increases the likelihood that flow-related data for a given flow will remain in cache.


In greater detail, the system of FIG. 1A features multiple processors 104a-104n that share access to a network interface controller 100 (a.k.a. network adaptor). The controller 100 provides access to communications media (e.g., a cable and/or wireless radio). The controller 100 handles transmission of egress packets out to the network via the communications media and, in the other direction, handles ingress packets received from the network.


The processors 104a-104n exchange data with the controller 100 via queues 112a, 112b, 114a, 114b, 116a, 116b. For example, in FIG. 1A, each processor 104a-104n has an associated queue pair 102a-102n that features a transmit queue (Tx) and a receive queue (Rx) pair. For instance, to transmit packet data out of the host, processor 104a can enqueue the packet data in transmit queue 112a in queue pair 102a associated with the processor 104a. The enqueued data is subsequently transferred to the controller 100 for transmission. Similarly, the controller 100 delivers received packet data by enqueuing packet data in a receive queue, e.g., 112b.
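A queue pair can be pictured as two descriptor rings in host memory, one per direction. The following C sketch is a minimal, hypothetical layout; the names and sizes are illustrative, not those of any particular controller.

#include <stdint.h>

#define RING_ENTRIES 256

/* Hypothetical descriptor: locates a packet buffer in host memory. */
struct pkt_desc {
    uint64_t buf_addr;   /* DMA address of the packet buffer */
    uint16_t buf_len;    /* length of the buffered packet data */
    uint16_t flags;      /* e.g., a "done" bit set by the hardware */
};

/* A ring of descriptors with producer/consumer indices. */
struct ring {
    struct pkt_desc desc[RING_ENTRIES];
    uint16_t head;       /* consumer index */
    uint16_t tail;       /* producer index */
};

/* One transmit ring and one receive ring form a queue pair. */
struct queue_pair {
    struct ring tx;      /* host produces, controller consumes */
    struct ring rx;      /* controller produces, host consumes */
};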


As indicated above, packets often form part of a packet flow. For example, a series of Asynchronous Transfer Mode (ATM) cells may travel within an ATM virtual circuit. Similarly, a collection of TCP segments may travel within a TCP connection. A given flow can be identified by a collection of information in a packet's header(s). For example, the flow of a TCP/IP packet can be identified by a combination of, at least, the packet's IP source and destination addresses, source and destination ports, and a protocol identifier (a.k.a. a TCP/IP tuple). Likewise, for an IPv6 or ATM packet, the flow may be identified by a flow identifier field.
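For instance, a controller might reduce the TCP/IP tuple to a small flow identifier by hashing it, along the lines of the sketch below. The FNV-1a hash here is purely illustrative; hardware commonly uses a Toeplitz or CRC-based hash instead.

#include <stdint.h>

/* The TCP/IP tuple identifying a flow. Zero the struct before
 * filling it so that padding bytes hash consistently. */
struct flow_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
};

/* Illustrative FNV-1a hash over the tuple bytes; masked to the
 * flow-table size, the result serves as the flow identifier. */
static uint32_t flow_hash(const struct flow_tuple *t)
{
    const uint8_t *p = (const uint8_t *)t;
    uint32_t h = 2166136261u;
    for (unsigned i = 0; i < sizeof *t; i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}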


As shown, to determine where to enqueue a received packet, the controller 100 accesses data 110 that associates a packet flow (arbitrarily labeled “flow 1” and “flow 2”) with a destination (e.g., a processor, queue pair, and/or queue). For example, as shown in FIG. 1A, after receiving a packet 104, the controller 100 can identify a flow identifier for the packet 104 (e.g., by hashing the TCP/IP tuple). The controller 100 can use the flow identifier to look up, in data 110, a destination for packets in the flow. As shown, the packet 104 belongs to flow “2”, which is associated with queue pair 102b. Based on this lookup, the controller 100 enqueues the packet 104 to the receive queue 114b in the queue pair 102b, for example, by performing a Direct Memory Access (DMA) of the packet 104 into a memory 106 location in the queue specified by a driver operating on processor 104b.
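A minimal sketch of this receive-side lookup, assuming a direct-indexed flow table (collision handling and table management are glossed over, and the names are hypothetical):

#include <stdint.h>

#define FLOW_TABLE_SIZE 1024   /* power of two, so a mask can index it */

/* Maps a flow identifier to a destination receive queue. */
struct flow_entry {
    uint8_t valid;
    uint8_t rx_queue;          /* index of the destination queue pair */
};

static struct flow_entry flow_table[FLOW_TABLE_SIZE];

/* Returns the receive queue for an ingress packet's flow, or a
 * default queue if the flow has no entry yet. */
static uint8_t lookup_rx_queue(uint32_t flow_id, uint8_t default_queue)
{
    const struct flow_entry *e = &flow_table[flow_id & (FLOW_TABLE_SIZE - 1)];
    return e->valid ? e->rx_queue : default_queue;
}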


The data 110 used to identify where to deliver received packets can be set by a driver operating on the processors 104a-104n. For example, the processors 104a-104n can send configuration messages to the controller 100 indicating the destinations for different flows. These configuration messages, however, can consume significant bandwidth between the processors 104a-104n and the controller 100. Additionally, these configuration messages represent an on-going traffic burden as connections are created and destroyed, and as flows are redirected to different destinations.



FIG. 1B depicts a technique that enables the controller 100 to learn how to direct ingress packets by identifying the sources of egress packets. For example, as shown in FIG. 1B, processor 104n enqueues egress packet data in the transmit queue 116a associated with the processor 104n. As shown, the network interface controller 100 receives the packet data, for example, after receiving a packet descriptor identifying the location of the packet data in memory 106. The descriptor or other data can identify the source (e.g., a transmit queue, queue pair, and/or processor) of the egress packet data. In the case shown, the egress packet data belongs to flow “3” and has a source of queue pair 102n. Thus, the controller 100 updates its data 110 to direct ingress packets that are part of flow “3” to the receive queue 116b of the same queue pair 102n. This updating may include modifying previously existing data for an on-going flow or adding a new entry for a flow that is just starting. As shown in FIG. 1C, a subsequently received ingress packet 108 belonging to flow “3” is routed to the same queue pair 102n used in transferring the egress packet data for flow “3” to the controller 100.
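Continuing the hypothetical flow table sketched above (its definitions are repeated here so the sketch stands alone), the transmit-side learning step amounts to writing the egress packet's source back into the table:

#include <stdint.h>

#define FLOW_TABLE_SIZE 1024

struct flow_entry { uint8_t valid; uint8_t rx_queue; };
static struct flow_entry flow_table[FLOW_TABLE_SIZE];

/* Transmit path: the descriptor for egress packet data identifies its
 * source queue pair. Recording that source steers later ingress
 * packets of the same flow back to that queue pair's receive queue. */
static void learn_from_egress(uint32_t flow_id, uint8_t src_queue_pair)
{
    struct flow_entry *e = &flow_table[flow_id & (FLOW_TABLE_SIZE - 1)];
    e->rx_queue = src_queue_pair;   /* new entry, or update of an old one */
    e->valid = 1;
}

Note that no host-to-controller configuration message is involved: the mapping is refreshed as a side effect of ordinary transmission.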


The technique illustrated above can greatly reduce and/or eliminate the amount of run-time configuration performed, decreasing bus traffic that may otherwise be used for configuration messages. Additionally, the technique quickly adapts to a changing environment. For example, if a TCP connection is assigned to a different processor and/or queue, this technique can begin routing packets to the new destination immediately after a packet is sent from the new source.


The system shown in FIGS. 1A-1C is merely an example, and a wide variety of variations and implementations can feature the techniques described above. For example, FIGS. 1A-1C depict a single queue pair 102a-102n associated with each processor 104a-104n. However, in other implementations a processor 104 may have multiple associated queue pairs. For example, a processor 104 can implement a policy for assigning flows to many different transmit queues based on a variety of criteria (e.g., priority, flow, Virtual Local Area Network (VLAN) identifier, and so forth). Since the controller 100 mirrors the directing of ingress packets based on the host source of egress packets, the controller 100 can correctly deliver ingress packets in accordance with a given policy being implemented by a processor without explicit programming of the policy. This permits the policies being used to be easily and instantly altered without controller modification.


Additionally, though each queue shown in FIGS. 1A-1C was exclusively associated with a single processor, a given queue need not be. For example, a queue pair may service multiple processors.



FIG. 2 illustrates a sample network interface controller 100 implementing techniques described above. In this illustration, the solid line denotes the transmit (Tx) path traveled by egress packet data and the dashed line denotes the receive (Rx) path traveled by ingress packet data.


As shown, the controller 100 features a physical layer device (PHY) 200 that translates between the signals of a physical communications medium (e.g., electrical signals of a cable or radio signals of a wireless connection) and digital bits. The PHY 200 is coupled to a media access controller (MAC) 202 that performs layer 2 operations such as encapsulation/de-encapsulation of TCP/IP packets within Ethernet frames and computation of checksums to verify correct transmission. The MAC 202 is coupled to a classification engine 204 (e.g., an Application-Specific Integrated Circuit (ASIC) and/or a programmable processor). The classification engine 204 can perform the tasks described above. Namely, for ingress packets, the engine 204 can match a packet to a flow and forward the packet to the associated destination queue. For egress packet data, the engine 204 can identify the flow of the out-bound data, identify the source of the packet (e.g., the transmit queue, queue pair, and/or processor), and update its flow/destination mapping so that subsequently received packets in the flow are delivered based on that source.


As shown in FIG. 2, the controller 100 features a receive queue distributor 208. The distributor 208 can DMA ingress packet data to the receive queue in memory identified by the classification engine 204. For example, the controller 100 may receive pointers to packet descriptors in memory from a controller driver operating on one or more of the processors. The packet descriptors, in turn, reference entries in the different receive queues 112b, 114b, 116b the controller 100 can use to enqueue the ingress packet data. After accessing a packet descriptor for the desired receive queue 112b, 114b, 116b, the controller 100 can use Direct Memory Access (DMA) to enqueue the received ingress packet data. These descriptors are recycled by the driver for reuse after dequeueing of the data by processors 104a-104n.
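The following sketch models this receive-side delivery. It is a software approximation only: in hardware the copy would be a DMA transaction, and the done flag would be a bit the driver clears when recycling the descriptor. All names are hypothetical.

#include <stdint.h>
#include <string.h>

/* Hypothetical receive descriptor prepared by the driver. */
struct rx_desc {
    uint64_t buf_addr;   /* host buffer the controller may write into */
    uint16_t buf_len;    /* capacity of that buffer */
    uint16_t pkt_len;    /* filled in by the controller */
    uint16_t done;       /* set once packet data has been written */
};

/* Enqueue ingress packet data at the tail of the chosen receive ring.
 * Returns 0 on success, -1 if the ring is full or the buffer is small. */
static int rx_enqueue(struct rx_desc *ring, uint16_t *tail, uint16_t size,
                      const void *pkt, uint16_t len)
{
    struct rx_desc *d = &ring[*tail];
    if (d->done || len > d->buf_len)
        return -1;
    memcpy((void *)(uintptr_t)d->buf_addr, pkt, len);  /* stands in for DMA */
    d->pkt_len = len;
    d->done = 1;         /* driver dequeues, then recycles the descriptor */
    *tail = (*tail + 1) % size;
    return 0;
}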


As shown, the controller 100 also features a transmit queue multiplexer 206 that dequeues entries of egress packet data from the different transmit queues. The multiplexer 206 can access packet descriptors, identified by driver software, that identify the next packet to retrieve from a transmit queue. Based on the descriptor, the multiplexer 206 can perform a DMA of the enqueued egress packet data to the controller 100 for subsequent transmission to the network (e.g., via the MAC 202 and PHY 200). Instead of relying on packet descriptors, the multiplexer 206 can independently consume transmit queue entries, for example, by performing a round-robin among the transmit queues and/or implementing a priority scheme.
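As one example of this alternative, a round-robin transmit multiplexer might look like the sketch below, where tx_dequeue is an assumed helper that returns the next entry of one transmit queue, or NULL if that queue is empty:

#include <stddef.h>

#define NUM_TX_QUEUES 4

struct pkt_desc;                             /* opaque packet handle */
struct pkt_desc *tx_dequeue(unsigned queue); /* assumed to exist */

/* Round-robin service of the transmit queues: each call resumes after
 * the queue served last time, so no transmit queue is starved. */
struct pkt_desc *tx_mux_next(void)
{
    static unsigned next;
    for (unsigned i = 0; i < NUM_TX_QUEUES; i++) {
        unsigned q = (next + i) % NUM_TX_QUEUES;
        struct pkt_desc *d = tx_dequeue(q);
        if (d != NULL) {
            next = (q + 1) % NUM_TX_QUEUES;
            return d;
        }
    }
    return NULL;   /* every transmit queue is currently empty */
}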


Again, the controller implementation shown in FIG. 2 is merely an example. Other controllers can feature different designs and components.



FIG. 3 illustrates a sample transmit process implemented by a controller to handle egress packets. As shown, the controller determines 302 the flow that egress packet data received 300 from the host belongs to. Based on the determined flow and the source of the egress packet data, the process may store 304 data identifying a destination for received ingress packets in the flow. The process also transmits 306 the egress packet.



FIG. 4 illustrates a sample receive process implemented by a controller to handle ingress packets. In the process, the controller determines 310 the flow associated with an ingress packet received 308 over a communications network. The process performs a lookup 312 of the flow to determine the destination associated with the flow and enqueues 314 the received ingress packet in the determined destination queue.



FIG. 5 depicts a computer system that can implement the techniques described above. As shown, the system features multiple processors 104a-104n. The processors 104a-104n may be Central Processing Units (CPUs), a collection of programmable processor cores integrated within the same die, and so forth. The processors 104a-104n are coupled to a chipset 130. The chipset 130 provides access to memory 132 (e.g., Random Access Memory (RAM)) and at least one network interface controller 100, for example, by providing an Input/Output (I/O) controller hub. The chipset 130 may also feature other circuitry such as graphics circuitry.


The system shown in FIG. 5 is merely exemplary and a wide variety of variations are possible. For example, instead of being a separate component, the controller 100 may be integrated into the chipset 130 or a processor 104.


While the above describes specific examples, the techniques may be implemented in a variety of architectures, including processors and network devices having designs other than those shown. The term packet can apply to IP (Internet Protocol) datagrams, TCP (Transmission Control Protocol) segments, ATM (Asynchronous Transfer Mode) cells, and Ethernet frames, among other protocol data units. Additionally, the above often referred to packet data rather than simply a packet; this reflects that a controller, or other component, may remove and/or add data to a packet as the packet data travels along the Rx or Tx path.


The term circuitry as used herein includes hardwired circuitry, digital circuitry, analog circuitry, programmable circuitry, and so forth. The programmable circuitry may operate on executable instructions disposed on an article of manufacture. For example, the instructions may be disposed on a Read-Only-Memory (ROM) such as a Programmable Read-Only-Memory (PROM) or other medium such as a Compact Disk (CD) and other volatile or non-volatile storage.


Other embodiments are within the scope of the following claims.

Claims
  • 1. A method, comprising: accessing data of an egress packet belonging to a flow; storing data associating the flow with at least one queue based on a source of the data of the egress packet; accessing an ingress packet belonging to the flow; performing a lookup of the at least one queue associated with the flow; and enqueueing data of the ingress packet to the at least one queue associated with the flow.
  • 2. The method of claim 1, wherein the ingress packet comprises a Transmission Control Protocol/Internet Protocol (TCP/IP) packet and wherein the flow comprises a TCP/IP connection.
  • 3. The method of claim 1, wherein the queue comprises one of multiple queues associated with multiple, respective, processors.
  • 4. The method of claim 3, wherein the processors comprise processor cores, at least some of the cores being integrated on the same die.
  • 5. The method of claim 3, wherein the at least one queue comprises a part of one of multiple queue pairs, individual queue pairs including a transmit queue and a receive queue.
  • 6. The method of claim 5, wherein some, but not all, of the multiple queue pairs are associated with the same processor.
  • 8. The method of claim 1, wherein the source comprises at least one selected from the following group: a processor, a queue pair, and a queue.
  • 9. A network interface controller, comprising: at least one interface to a communications medium; at least one media access controller communicatively coupled to the at least one interface; circuitry to: store data associating a flow of an egress packet with at least one queue based on a source of the egress packet; perform a lookup in the stored data for at least one queue associated with a flow of an ingress packet; and enqueue data of the ingress packet to the at least one queue associated with the flow identified by the lookup.
  • 10. The controller of claim 9, wherein the circuitry comprises circuitry to identify a source queue of the egress packet data.
  • 11. The controller of claim 9, wherein the data associating flows with queues comprises data identifying a flow based, at least in part, on a combination of Internet Protocol source and destination addresses and source and destination ports.
  • 12. The controller of claim 9, wherein the at least one queue comprises a queue in a queue pair including a transmit queue and a receive queue.
  • 13. The controller of claim 12, wherein a one of the queue pairs is exclusively associated with one of a set of multiple processors.
  • 14. The controller of claim 9, wherein the source comprises at least one selected from the following group: a processor, a queue pair, and a queue.
  • 15. A system comprising: at least one processor; memory, coupled to the at least one processor; and a network interface controller, comprising: at least one interface to a communications medium; at least one media access controller communicatively coupled to the at least one interface; circuitry to: store data associating a flow of an egress packet with at least one queue based on a source of the egress packet; perform a lookup in the stored data for at least one queue associated with a flow of an ingress packet; and enqueue data of the ingress packet to the at least one queue associated with the flow identified by the lookup.
  • 16. The system of claim 15, wherein the circuitry comprises circuitry to identify a source queue of the egress packet.
  • 17. The system of claim 15, wherein the data associating flows with queues comprises data identifying a flow based, at least in part, on a combination of Internet Protocol source and destination addresses and source and destination ports.
  • 18. The system of claim 15, wherein the at least one queue comprises a queue in a queue pair including a transmit queue and a receive queue.
  • 19. The system of claim 18, wherein a one of the queue pairs is exclusively associated with one of a set of multiple processors.
  • 20. The system of claim 15, wherein the source comprises at least one selected from the following group: a processor, a queue pair, and a queue.
  • 21. An article of manufacture, comprising executable instructions to: access data of an egress packet belonging to a flow; store data associating the flow with at least one queue based on a source of the egress packet data; access data of an ingress packet belonging to the flow; perform a lookup of the at least one queue associated with the flow; and enqueue data of the ingress packet to the at least one queue associated with the flow.
  • 22. The article of claim 21, wherein the ingress packet comprises a Transmission Control Protocol/Internet Protocol (TCP/IP) packet and wherein the flow comprises a TCP/IP connection.
  • 23. The article of claim 21, wherein the at least one queue comprises one of multiple queues associated with multiple, respective, processors.
  • 24. The article of claim 23, wherein the at least one queue comprises a part of one of multiple queue pairs, individual queue pairs including a transmit queue and a receive queue.
  • 25. The article of claim 21, wherein the source comprises at least one selected from the following group: a processor, a queue pair, and a queue.
Continuations (4)
  • Parent 14032499 (filed Sep. 2013, US); Child 15163420 (US)
  • Parent 13079989 (filed Apr. 2011, US); Child 14032499 (US)
  • Parent 12587045 (filed Oct. 2009, US); Child 13079989 (US)
  • Parent 10957001 (filed Sep. 2004, US); Child 12587045 (US)