1. Field of the Invention
The present invention relates generally to switches and electronic communication. More specifically, the present invention relates to the transfer of data over switch fabrics.
2. Description of the Related Art
Diverse protocols have been used to transport digital data over switch fabrics. A protocol is generally defined by the sequence of packet exchanges used to transfer a message or data from source to destination and by the feedback and configurable parameters used to ensure its goals are met. Transport protocols have the goals of reliability, maximizing throughput, minimizing latency, and adhering to ordering requirements, among others. Design of a transport protocol requires an artful set of compromises among these often competing goals.
In one aspect of the invention, a method of transferring data over a switch fabric with at least one switch with an embedded network class endpoint device is provided. A push vs. pull threshold is initialized. A device transmit driver receives a command to transfer a message. If the message length is less than the push vs. pull threshold, the message is pushed. If the message length is greater than the push vs. pull threshold, the message is pulled. Congestion at various message destinations is measured. The push vs. pull threshold is adjusted according to the measured congestion.
In another manifestation of the invention, an apparatus is provided. The apparatus comprises a switch. At least one network class device endpoint is embedded in the switch.
In another manifestation of the invention, a method of transferring data over a switch fabric with at least one switch with an embedded network class endpoint device is provided. At a device transmit driver, a transfer command is received to transfer a message. If the message length is less than a threshold, the message is pushed. If the message length is greater than the threshold, the message is pulled.
In another manifestation of the invention, a method of transferring data over a fabric switch is provided. A device transmit driver receives a command to transfer a message. If the message length is less than a threshold, the message is pushed. If the message length is greater than the threshold, the message is pulled.
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
These and other features of the present invention will be described in more detail below in the detailed description of the invention and in conjunction with the following figures.
The present invention will now be described in detail with reference to a few preferred embodiments thereof as illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without some or all of these specific details. In other instances, well known process steps and/or structures have not been described in detail in order to not unnecessarily obscure the present invention.
While the multiple protocols in use differ greatly in many respects, most have at least this property in common: that they push data from source to destination. In a push protocol, the sending of a message is initiated by the source. In a pull protocol, data transfer is initiated by the destination. When fabrics support both push and pull transfers, it is the norm to allow applications to choose whether to use push or pull semantics.
Pull protocols have been avoided primarily because at least two passes and sometimes three passes across the fabric are required in order to communicate. First a message has to be sent to request the data from the remote node and then the node has to send the data back across the fabric. A load/store fabric provides the simplest examples of pushes (writes) and pulls (reads). However, simple processor PIO reads and writes are primitive operations that don't rise to the level of a protocol. Nevertheless, even at the primitive level, reads are avoided wherever possible because of the higher latency of the read and because the processing thread containing the read blocks until it completes.
The necessity for at least a fabric round trip is a disadvantage that can't be overcome when the fabric diameter is high. However, there are compensating advantages for the use of a pull protocol that may compel its use over a small fabric diameter, such as one sufficient to interconnect a rack or a small number of racks of servers.
Given the ubiquity of push protocols at the fabric transport level, any protocol that successfully uses pull mechanisms to provide a balance of high throughput, low latency, and resiliency must be considered innovative.
Sending messages at the source's convenience leads to one of the fundamental issues with a push protocol: Push messages and data may arrive at the destination when the destination isn't ready to receive them. An edge fabric node may receive messages or data from multiple sources concurrently at an aggregate rate faster than it can absorb them. Congestion caused by these factors can spread backwards in the network causing significant delays.
Depending on fabric topology, contention, resulting in congestion, can arise at intermediate stages as well, due to faults or to the aggregation of multiple data flows through a common nexus. When a fabric has contention “hot spots” or faults, it is useful to be able to route around the faults and hot spots with minimal or no software intervention and with a rapid reaction time. In current systems, re-routing typically requires software intervention to select alternate routes and update routing tables.
Additional time consuming steps to avoid out of order delivery may be required, as is the case, for example, with Remote Direct Memory Access (RDMA). It is frequently the case that attempts to reroute around congestion are ineffective because the congestion is transient in nature and dissipates or moves to another node before the fabric and its software can react.
Pull protocols can avoid or minimize output port contention by allowing a data destination to regulate the movement of data into its receiving interface, but innovative means must be found to take advantage of this capability. While minimizing output port contention, pull protocols can suffer from source port contention. A pull protocol should therefore include means to minimize or regulate source port contention as well. An embodiment of the invention provides a pull protocol in which the data movement traffic it generates consists of unordered streams. This allows us to route those streams dynamically on a packet by packet basis, without software intervention, to meet criteria necessary for the fabric to be non-blocking.
A necessary but in itself insufficient condition for a multiple-stage switch fabric to be non-blocking is that it have at least constant bisection bandwidth between stages or switch ranks. If a multi-stage switch fabric has constant bisection bandwidth, then it can be strictly non-blocking only to the extent that the traffic load is equally divided among the redundant paths between adjacent switch ranks. Certain fabric topologies, such as Torus fabrics of various dimensions, contain redundant paths but are inherently blocking because of the oversubscription of links between switches. There is great benefit in being able to reroute dynamically so as to avoid congested links in these topologies.
Statically routed fabrics often fall far short of achieving load balance but preserve ordering and are simple to implement. Dynamically routed fabrics incur various amounts of overhead, cost, complexity, and delay in order to reroute traffic and handle ordering issues caused by the rerouting. Dynamic routing is typically used on local and wide area networks and at the boundaries between the two, but, because of cost and complexity, not on a switch fabric acting as a backplane for something of the scale of a rack of servers.
A pull protocol that not only takes full advantage of the inherent congestion avoidance potential of pull protocols but also allows dynamic routing on a packet by packet basis without software intervention would be a significant innovation.
Any switch fabric intended to be used to support clustering of compute nodes should include means to allow the TCP/IP stack to be bypassed to both reduce software latency and to eliminate the latency and processor overhead of copying transmit and receive data between intermediate buffers. It has become the norm to do this by implementing support for RDMA in conjunction with, for example, the use of the OpenFabrics Enterprise Distribution (OFED) software stack.
RDMA adds cost and complexity to fabric interface components for implementing memory registration tables, among other things. These tables could be more economically located in the memories of attached servers. However, then the latency of reading these tables, at least once and in some cases two or more times per message, would add to the latency of communications. An RDMA mechanism that uses a pull protocol can overlap the latency of reading buffer registration tables, sometimes called a BTT for Buffer Tag Table (or Memory Region table), with the remote reads of the pull protocol, masking this latency and allowing such tables to be located in host/server memory without a performance penalty.
Embodiments of the invention provide several ways in which pull techniques can be used to achieve high switch fabric performance at low cost. In various embodiments, these methods have been synthesized into a complete fabric data transfer protocol and DMA engine. An embodiment is provided by describing the protocol, and its implementation, in which the transmit driver accepts both push and pull commands from higher level software but chooses to use pull methods to execute a push command on a transfer by transfer basis to optimize performance or in reaction to congestion feedback.
In designing a messaging system to use a mix of push and pull methods to transfer data/messages between peer compute nodes attached to a switch fabric, the messaging system must support popular Application Programming Interfaces, APIs, which most often employ a push communication paradigm. In order to obtain the benefits of a pull protocol, for avoiding congestion and having sufficient real time reroutable traffic to achieve the non-blocking condition, a method is required to transform push commands received by driver software via one of these APIs into pull data transfers. Furthermore, the driver for the messaging mechanism must, on a transfer command by transfer command basis, decide whether to use push semantics or pull semantics for the transfer and to do so in such a way that sufficient pull traffic is generated to allow loading on redundant paths to be balanced.
The problem of allowing pushes to be transformed into pulls is solved in the following manner. First, a relatively large message transmit descriptor size is employed. The preferred embodiment uses a 128 byte descriptor that can contain message parameters and routing information (source and destination IDs, in the case of ExpressFabric) plus either a complete short message of 116 bytes or a set of 10 pointers and associated transfer lengths that can be used as the gather list of a pull command. A descriptor formatted to contain a 116B message is called a short packet push descriptor. A descriptor formatted to contain a gather list is called a pull request descriptor.
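By way of illustration, the dual use of a single 128B transmit descriptor may be sketched in C as follows. The field names, ordering, and packing are illustrative assumptions only and do not reproduce the actual descriptor formats, which are defined in the descriptor format subsections below.

#include <stdint.h>

/* Illustrative sketch of a 128B transmit descriptor that carries either a
 * complete short message or a pull-command gather list. The actual field
 * layout and packing differ; see the descriptor format subsections. */
struct gather_entry {
    uint64_t src_addr;   /* pointer into source host memory               */
    uint32_t length;     /* transfer length for this pointer, in bytes    */
};

struct tx_descriptor_sketch {
    uint32_t control;         /* D-Type, TC, flags (assumed placement)    */
    uint16_t dst_domain;      /* destination Domain ID                    */
    uint16_t dst_global_rid;  /* destination Global RID                   */
    union {
        uint8_t short_msg[116];          /* short packet push descriptor  */
        struct {
            uint32_t total_len;          /* total bytes to be pulled      */
            struct gather_entry sg[10];  /* pull request gather list      */
        } pull;
    } body;
};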
When our device's transmit driver receives a transfer command from an API or a higher layer protocol, it makes a decision to use push or pull semantics based primarily on the transfer length of the command. If 116 bytes or less are to be transferred, the short packet push transfer method is used. If the transfer length is somewhat longer than 116 bytes but less than a threshold that is typically 1 KB or less, the data is sent as a sequence of short packet pushes. If the transfer length exceeds the threshold, the pull transfer method is used. In the preferred embodiment, up to 640K bytes can be moved via a single pull command. Transfers too large for a single pull command are done with multiple pull commands in sequence.
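As a sketch only, this selection between push and pull semantics can be expressed as the following C routine; the constant and function names are illustrative and assume the 116-byte short packet limit and 640 KB per-pull maximum given above.

#include <stddef.h>

#define SHORT_PKT_MAX   116u          /* bytes that fit in one push descriptor */
#define PULL_MAX        (640u * 1024u)/* max bytes moved by a single pull      */

enum xfer_method { XFER_SHORT_PUSH, XFER_SEGMENTED_PUSH, XFER_PULL };

/* Choose push or pull semantics for one transfer command.
 * push_pull_threshold is the configurable boundary (typically <= 1 KB). */
static enum xfer_method choose_method(size_t len, size_t push_pull_threshold)
{
    if (len <= SHORT_PKT_MAX)
        return XFER_SHORT_PUSH;       /* one short packet push           */
    if (len < push_pull_threshold)
        return XFER_SEGMENTED_PUSH;   /* sequence of short packet pushes */
    return XFER_PULL;                 /* one or more pull commands       */
}

/* Transfers larger than PULL_MAX are issued as multiple pull commands. */
static size_t pulls_needed(size_t len)
{
    return (len + PULL_MAX - 1) / PULL_MAX;
}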
In analyzing protocol efficiency, we found, unsurprisingly, that use of pull commands was more efficient than use of the short packet push for transfers greater than a certain amount. However, the goal of low latency competes with the goal of high efficiency, which in turn leads to higher throughput. In many applications, but not all, low latency is critical. Thus we made the threshold for choosing push vs. pull configurable and have the ability to adapt the threshold to fabric conditions and application priority. Where low latency is deemed important, the initial threshold is set to a relatively high value of 512 bytes or perhaps even 1K bytes. This will minimize latency only if the resulting high percentage of push traffic doesn't cause congestion. In our messaging process, each transfer command receives an acknowledgement via a Transmit Completion Queue vendor defined message, abbreviated TxCQ VDM, that contains a coarse congestion indication from the destination of the transfer it acknowledges. If the driver sees congestion at a destination when processing the TxCQ VDM, it can lower the push/pull threshold to increase the relative fraction of pull traffic. This has two desirable effects:
If low latency is not deemed to be critically important, then the push vs. pull threshold can be set at the transfer length where push and pull have equal protocol efficiency (defined as the number of bytes of payload delivered divided by the total number of bytes transferred). In reaction to congestion feedback the threshold can be reduced to the message length that can be embedded in a single descriptor.
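One possible adaptation rule is sketched in C below, under the assumption that lowering the threshold shifts traffic toward pull; the step sizes and bounds are design choices for illustration, not values taken from the embodiment.

#include <stdbool.h>
#include <stddef.h>

#define THRESH_MIN 116u    /* smallest useful threshold: one descriptor's payload */
#define THRESH_MAX 1024u   /* latency-optimized upper bound (assumed)             */

/* Adjust the push vs. pull threshold using the coarse congestion indication
 * carried in a transmit completion (TxCQ VDM). Lowering the threshold
 * converts more transfers into pulls, which the destination can pace. */
static size_t adapt_threshold(size_t threshold, bool dest_congested)
{
    if (dest_congested) {
        threshold /= 2;                 /* shift traffic toward pull          */
        if (threshold < THRESH_MIN)
            threshold = THRESH_MIN;
    } else {
        threshold += 64;                /* slowly restore push to favor latency */
        if (threshold > THRESH_MAX)
            threshold = THRESH_MAX;
    }
    return threshold;
}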
In order to transmit a message, its descriptor is created and added onto the tail of a transmit queue by the driver software. Eventually the transmit engine reads the queue and obtains the descriptor. In a conventional device, the transmit engine must next read server/host memory again at an address contained in the descriptor. Only when this read completes can it forward the message into the fabric. With current technology, that second read of host memory adds at least 200 ns to the transfer latency and more when there is contention for the use of memory inside the attached server/host. In a transmit engine in an embodiment of the invention, that second read isn't required, eliminating that component of the latency when the push mode is used and compensating in part for the additional pass(es) through the fabric needed when the pull mode is used.
In the pull mode, the pull request descriptor is forwarded to the destination and buffered there in a work request queue for the DMA engine at the destination node. When the message bubbles to the top of its queue it may be selected for execution. In the course of its execution, what we call a remote read request message is sent by the destination DMA engine back to the source node. An optional latency reducing step can be taken by the transmit engine when it forwards the pull request descriptor message: it can also send a read request for the data to be pulled. If this is done, then the data requested by the pull request can be waiting in the switch when the remote read request message arrives at the switch. This can reduce the overall transfer latency by the round trip latency for a read of host/server memory by the switch containing the DMA engine.
Any prefetched data must be capable of being buffered in the switch. Since only a limited amount of memory is available for this purpose, prefetch must be used judiciously. Prefetch is only used when buffer space is available to be reserved for this use and only for packets whose length is greater than the push vs. pull threshold and less than a second threshold. That second threshold must be consistent with the amount of buffer memory available, the maximum number of outstanding pull request messages allowed for which prefetch might be beneficial, and the perception that the importance of low latency diminishes with increasing message length. In the preferred embodiment, this threshold can range from 117B up to 4 KB.
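A minimal sketch of the prefetch eligibility test, with illustrative names and assuming the thresholds and free-buffer accounting described above, is:

#include <stdbool.h>
#include <stddef.h>

/* Decide whether the transmit engine should speculatively read (prefetch)
 * the data named in a pull request descriptor. Names are illustrative. */
static bool prefetch_eligible(size_t msg_len,
                              size_t push_pull_threshold,
                              size_t prefetch_max,       /* 117B..4KB range   */
                              size_t buffer_bytes_free)  /* switch buffer RAM */
{
    return msg_len > push_pull_threshold &&
           msg_len <= prefetch_max &&
           buffer_bytes_free >= msg_len;  /* reserve buffer before prefetching */
}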
Capella is the name given to an embodiment of the invention. With Capella, the paradigm for host to host communications on a PCIe switch fabric shifts from the conventional non-transparent bridge based memory window model to one of Network Interface Cards (NICs) embedded in the switch that tunnel data through ExpressFabric™ and implement RDMA over PCI Express (PCIe). Each 16-lane module, called a station in the architecture of an embodiment of the invention, includes a physical DMA messaging engine shared by all the ports in the module. Its single physical Direct Memory Access (DMA) function is enumerated and managed by the management processor. Virtual DMA functions are spawned from this physical function and assigned to the local host ports using the same Configuration Space Registers (CSR) redirection mechanism that enables ExpressIOV™.
The messaging engine interprets descriptors given to it via transmit descriptor queues (TxQs). Descriptors can define NIC mode operations or RDMA mode operations. For a NIC mode descriptor, the messaging engine transmits messages pointed to by transmit descriptor queues, TxQs, and stores received messages into buffers described by a receive descriptor ring or receive descriptor queue (RxQ). It thus emulates the operation of an Ethernet NIC and accordingly is used with a standard TCP/IP protocol stack. For RDMA mode, which requires prior connection setup to associate destination/application buffer pointers with connection parameters, the destination write address is obtained by a lookup in a Buffer Tag Table (BTT) at the destination, indexed by the Buffer Tag that is sent to the destination in the Work Request Vendor Defined Message (WR VDM). RDMA layers in both the hardware and the drivers implement RDMA over PCIe with reliability and security, as standardized in the industry for other fabrics. The PLX RDMA driver sits at the bottom of the OFED protocol stack.
RDMA provides low latency after the connection setup overhead has been paid and eliminates the software copy overhead by transferring directly from source application buffer to destination application buffer. The RDMA Layer subsection describes how RDMA protocol is tunneled through the fabric.
The DMA functionality is presented to hosts as a number of DMA virtual functions (VFs) that show up as networking class endpoints in the hosts' PCIe hierarchy. In addition to the host port DMA VFs, a single DMA VF is provisioned for use by the MCPU; this MCPU DMA VF is documented in a separate subsection.
Each host DMA VF includes a single TxCQ (transmit completion queue), a single RxQ (receive queue/receive descriptor ring), multiple RxCQs (receive completion queues), multiple TxQs (transmit queues/transmit descriptor rings), and MSI-X interrupt vectors of three types: Vector 0, the general/error interrupt; Vector 1, the TxCQ interrupt with time and count moderation; and Vectors 2 and above, the RxCQ interrupts with time and count moderation. One vector per RxCQ is configured in the VF. In other embodiments, multiple RxCQs can share a vector.
Each DMA VF appears to the host as an R-NIC (RDMA capable NIC) or network class endpoint embedded in the switch. Each VF has a synthetic configuration space created by the MCPU via CSR redirection and a set of directly accessible memory mapped registers mapped via the BAR0 of its synthetic configuration space header. Some DMA parameters not visible to hosts are configured in the GEP of the station. An address trap may be used to map the BARs (Base Address Registers) of the DMA VF engine.
The number of DMA functions in a station is configured via the DMA Function Configuration registers in the Per Station Register block in the GEP's BAR0 memory mapped space. The VF to Port Assignment register is in the same block. The latter register contains a port index field. When this register is written, the specified block of VFs is configured to the port identified in the port index field. While this register structure provides a great deal of flexibility in VF assignment, only those VF configurations described in Table 1 have been verified.
For HPC applications, a 4 VF configuration concentrates a port's DMA resources in the minimum number of VFs: one per port, with four x4 ports in the station. For I/O virtualization applications, a 64 VF configuration provides a VF for each of up to 64 VMs running in the RCs above the up to 4 host ports in the station. Table 1 shows the number of queues, connections, and interrupt vectors available to be divided among the DMA VFs in each of the two supported VF configurations.
The DMA VF configuration is established after enumeration by the MCPU but prior to host boot, allowing the host to enumerate its VFs in the standard fashion. In systems where individual backplane/fabric slots may contain either a host or an I/O adapter, the configuration should allocate VFs for the downstream ports to allow the future hot plug of a host in those slots. Some systems may include I/O adapters or GPUs that can make use of the DMA VFs in the downstream port to which the adapter is attached.
The DMA transmit engine may be modeled as a set of transmit queues (TxQs) for each VF, a single transmit completion queue (TxCQ) that receives completions to messages sent from all of the VF's TxQs, a Message Pusher state machine that tries to empty the TxQs by reading them so that the messages and descriptors in them may be forwarded across the fabric, a TxQ Arbiter that prioritizes the reading of the TxQs by the Message Pusher, a DMA doorbell mechanism that tracks TxQ depth, and a set of Tx Congestion avoidance mechanisms that shape traffic generated by the Transmit Engine.
TxQs are Capella's equivalent of transmit descriptor rings. Each Transmit Queue, TxQ, is a circular buffer consisting of objects sized and aligned on 128B boundaries. There are 512 TxQs in a station, mapped to ports and VFs per Table 1 as a function of the DMA VF configuration of the station. TxQs are a power of two in size, from 2^9 to 2^12 objects, aligned on a multiple of their size. Objects in a queue are either pull descriptors or short packet push message descriptors. Each queue is individually configurable as to depth. TxQs are managed via indexed access to the following registers defined in their VF's BAR0 memory mapped register space.
The driver enqueues a packet by writing it into host memory at the queue's base address plus TXQ_TAIL times the descriptor size, where TXQ_TAIL is the tail pointer of the queue maintained by the driver software. TXQ_TAIL is incremented after each enqueuing of a packet to point to the next entry to be queued. Sometime after writing to the host memory, the driver does an indexed write to the TXQ_TAIL register array to point to the last object placed in that queue. The switch compares its internal TXQ_HEAD values to the TXQ_TAIL values in the array to determine the depth of each queue. The write to a TXQ_TAIL serves as a DMA doorbell, triggering the DMA engine to read the queue and transmit the work request message associated with the entry at its tail. TXQ_TAIL is one of the driver updated queue indices described in the table below.
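The enqueue and doorbell sequence may be sketched in C as follows; the structure and register mapping are illustrative assumptions, and a production driver would also add a queue-full check against TXQ_HEAD and a write barrier between the descriptor write and the doorbell.

#include <stdint.h>
#include <string.h>

#define DESC_SIZE 128u   /* configured descriptor/object size */

struct txq {
    uint8_t           *base;          /* queue base address in host memory */
    uint32_t           tail;          /* driver-maintained TXQ_TAIL        */
    uint32_t           nentries;      /* queue depth, a power of two       */
    volatile uint32_t *txq_tail_reg;  /* mapped TXQ_TAIL doorbell register */
};

/* Enqueue one descriptor and ring the doorbell. */
static void txq_post(struct txq *q, const void *desc)
{
    memcpy(q->base + (size_t)q->tail * DESC_SIZE, desc, DESC_SIZE);
    q->tail = (q->tail + 1) & (q->nentries - 1);   /* circular buffer      */
    *q->txq_tail_reg = q->tail;   /* doorbell: the switch compares this
                                     TXQ_TAIL to its TXQ_HEAD to determine
                                     queue depth and start transmission    */
}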
All of the objects in a TxQ must be 128B in size and aligned on 128B boundaries, providing a good fit to the cache line size and RCBs of server class processors.
In the above Table 3, 1000h is the location for TXQ 0's TXQ_TAIL, 1008h is the location for TXQ 1's TXQ_TAIL and so on. Similarly, 1004h is the location for RXCQ 0's RXCQ_HEAD, 100Ch is the location for RXCQ 1's RXCQ_HEAD and so on.
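The indexed layout quoted above can be summarized with a pair of helper macros; the 8-byte stride follows from the example addresses, while the macro names themselves are illustrative.

/* Offsets inferred from the example above (illustrative macro names):
 *   TXQ n's TXQ_TAIL   at 1000h + 8*n
 *   RXCQ n's RXCQ_HEAD at 1004h + 8*n            */
#define TXQ_TAIL_OFFSET(n)   (0x1000u + 8u * (n))
#define RXCQ_HEAD_OFFSET(n)  (0x1004u + 8u * (n))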
Message pusher is the name given to the mechanism that reads work requests from the TxQs, changes the resulting read completions into ID-routed Vendor Defined Messages, adds the optional ECRC, if enabled, and then forwards the resulting work request vendor defined messages (WR VDMs) to their destinations. The Message Pusher reads the TxQs nominated by the DMA scheduler.
The DMAC maintains a head pointer for each TxQ. These are accessible to software via indexed access of the TxQ Management Register Block defined in Table 2. The Message Pusher reads a single aligned message/descriptor object at a time from the TxQ selected by a scheduling mechanism that considers fairness, priority, and traffic shaping to avoid creating congestion. When a PCIe read completion containing the TxQ message/descriptor object returns from the host/RC, the descriptor is morphed into one of the ID-routed Vendor Defined Message formats defined in the Host to Host DMA Descriptor Formats subsection for transmission. The term “object” is used for the contents of a TxQ because an entry can be either a complete short message or a descriptor of a long message to be pulled by the destination. In either case, the object is reformed into a VDM and sent to the destination. The transfer defined in a pull descriptor is executed by the destination's DMAC, which reads the message from the source memory using pointers in the descriptor. Short packet messages are written directly into a receive buffer in the destination host's memory by the destination DMA without need to read source memory.
The TxQ arbiter selects the next TxQ from which a descriptor will be read and executed from among those queues that have backlog and are eligible to compete for service. The arbiter's policies are based upon QoS principles and interact with traffic shaping/congestion avoidance mechanisms documented below.
Each of the up to 512 TxQs in a station can be classified as high, medium, or low priority via the TxQ Control Register in its VF's BAR0 memory mapped register space, shown in the table below. Arbitration among these classes is by strict priority, with ties broken by round robin.
The descriptors in a TxQ contain a traffic class (TC) label that will be used on all the Express Fabric traffic generated to execute the work request. The TC label in the descriptor should be consistent with the priority class of its TxQ. The TCs that the VF's driver is permitted to use are specified by the MCPU in a capability structure in the synthetic CSR space of the DMA VF. The fabric also classifies traffic as low, medium, or high priority but, depending on link width, separates it into 4 egress queues based on TC. There is always at least one high priority TC queue and one best efforts (low priority) queue. The remaining egress queues provide multiple medium priority TC queues with weighted arbitration among them. The arbitration guarantees a configurable minimum bandwidth to each queue and is work conserving.
Medium and low priority TxQs are eligible to compete for service only if their port hasn't consumed its bandwidth allocation, which is metered by a leaky bucket mechanism. High priority queues are excluded from this restriction based on the assumption and driver-enforced policy that there is only a small amount of high priority traffic.
The priority of a TxQ is configured by an indexed write to the TxQ Control Register in its VF's BAR0 memory mapped register space via the TXQ_Priority field of the register. The TxQ that is affected by such a write is the one pointed to by the QUEUE_INDEX field of the register.
A TxQ must first be enabled by its TXQ Enable bit. It then can be paused/continued by toggling its TXQ Pause bit.
Each TxQ's leaky bucket is given a fractional link bandwidth share via the TxQ_Min_Fraction field of the TxQ Control Register. A value of 1 in this field guarantees a TxQ at least 1/256 of its port's link BW. Every TxQ should be configured to have at least this minimum BW in order to prevent starvation.
Each port is permitted a limited number of outstanding DMA work requests. A counter for each port is incremented when a descriptor is read from a TxQ and decremented when a TxCQ VDM for the resulting work request is returned. If the count is above a configurable threshold, the port's VFs are ineligible to compete for service. Thus, the threshold and count mechanism functions as an end to end flow control.
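A sketch of this per-port accounting, with illustrative names, is shown below; the hardware maintains these counts itself, so the C form serves only to make the flow control rule concrete.

#include <stdbool.h>
#include <stdint.h>

/* Per-port outstanding work request accounting used as end to end flow
 * control. Names and structure are illustrative. */
struct port_wr_state {
    uint32_t outstanding;  /* WRs issued, not yet acknowledged by a TxCQ VDM */
    uint32_t threshold;    /* configurable per-port limit                    */
};

static bool port_may_issue(const struct port_wr_state *p)
{
    return p->outstanding < p->threshold;  /* else VFs ineligible for service */
}

static void on_wr_issued(struct port_wr_state *p) { p->outstanding++; }
static void on_txcq_vdm(struct port_wr_state *p)  { p->outstanding--; }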
This mechanism is controlled by the registers described in the table below. These registers are in the BAR0 space of each station's GEP and are accessible to the management software only. Note the “Port Index” field used to select the registers of one of the ports in the station for access and the “TxQ Index” field used to select an individual TxQ of the port. A single threshold limit is supported for each port, but status can be reported on an individual queue basis.
To avoid deadlock, it's necessary that the values configured into the Work Request Thresholds not exceed the values defined below.
A VF arbiter serves eligible VFs with backlog using a round-robin policy. After the VF is selected, priority arbitration is performed among its TxQs. Ties are resolved by round-robin among TxQs of the same priority level.
Completion messages are written by the DMAC into completion queues at source and destination nodes to signal message delivery or report an uncorrectable error, a security violation, or other failure. They are used to support error free and in-order delivery guarantees. Interrupts associated with completion queues are moderated on both a number of packets and a time basis.
A completion message is returned for each descriptor/message sent from a TxQ. Received transmit completion message payloads are enqueued in a single TxCQ for the VF in host memory. The transmit driver in the host dequeues the completion messages. If a completion message isn't received, the driver eventually notices. For a NIC mode transfer, the driver policy is to report the error and let the stack recover. For an RDMA message, the driver has options: it can retry the original request, or it can break the connection, forcing the application to initiate recovery; this choice depends on the type of RDMA operation attempted and the error code received.
Transmit completion messages are also used to flow control the message system. Each source DMA VF maintains a single TxCQ into which completion messages returned to it from any and all destinations and traffic classes are written. A TxCQ VDM is returned by the destination DMAC for every WR VDM it executes to allow the source to maintain its counts of outstanding work request messages and to allow the driver to free the associated transmit buffer and TxQ entry. Each transmit engine limits the number of open work request messages it has in total. Once the global limit has been reached, receipt of a transmit completion queue message, TxCQ VDM, is required before the port can send another WR VDM. Limiting the number of completion messages outstanding at the source provides a guarantee that a TxCQ won't be overrun and, equally importantly, that fabric queues can't saturate. It also reduces the source injection rate when the destination node's BW is being shared with other sources.
The contents and structure of the TxCQ VDM and queue entry are defined in the Transmit Completion Message subsection. TxCQs are managed using the following registers in the VF's BAR0 memory mapped space.
The DMA engine reads host memory for a number of purposes: to fetch a descriptor from a TxQ using the TC configured for the queue in the TxQ Control Register, to complete a remote read request using the TC of the associated VDM, to fetch buffers from the RxQ using the TC specified in the Local Read Traffic Class Register, and to read the BTT when executing an RDMA transfer, again using the TC specified in the Local Read Traffic Class Register. The Local Read Traffic Class Register appears in the GEP's BAR0 memory mapped register space and is defined in the table below.
The DMA destination engine receives and executes WR VDMs from other nodes. It may be modeled as a set of work request queues for incoming WR VDMs, a work request execution engine, a work request arbiter that feeds WR VDMs to the execution engine to be executed, a NIC Mode Receive Queue and Receive Descriptor Cache, and various scoreboards for managing open work requests and outstanding read requests (not visible at this level).
When a work request arrives at the destination DMA, the starting addresses in internal switch buffer memory of its header and payload are stored in a Work Request Queue. There are a total of 20 Work Request Queues per station. Four of the queues are dedicated to the MCPU x1 port. The remaining 16 queues are for the 4 potential host ports with each port getting four queues regardless of port configuration.
The queues are divided by traffic class per port. However, due to a bug in the initial silicon, all DMA TCs will be mapped into a single work request queue in each destination port. The Destination DMA controller will decode the Traffic Class at the interface and direct the data to the appropriate queue. Decoding the TC at the input is necessary to support the WRQ allocation based on port configuration. Work requests must be executed in order per TC. The queue structure will enforce the ordering (the source DMA controller and fabric routing rules ensure the work requests will arrive at the destination DMA controller in order).
Before a work request is processed, it must pass a number of checks designed to ensure that once execution of the work request is started, it will be able to complete. If any of these checks fail, a TxCQ VDM containing a Condition Code indicating the reason for the failure is generated and returned to the source. Table 27 RxCQ and TxCQ Completion Codes shows the failure conditions that are reported via the TxCQ.
Each work request queue, WRQ, will be assigned either a high, medium0, medium1, or low priority level and arbitrated on a fixed priority basis. Higher priority queues will always win over lower priority queues except when a low priority queue is below its minimum guaranteed bandwidth allocation. Packets from different ingress ports that target the same egress queue are subject to port arbitration. Port arbitration uses a round robin policy in which all ingress ports have the same weight.
Each VF's single receive queue, RxQ, is a circular buffer of 64-bit pointers. Each pointer points to a 4 KB page into which received messages, other than tagged RDMA pull messages, are written. A VF's RxQ is configured via the following registers in the VF's BAR0 memory mapped register space.
The NIC mode receive descriptor cache occupies a 1024×64 on-chip RAM. At startup, descriptors are prefetched to load 16 descriptors for each of the single RxQ's of the up to 64 VFs. Subsequently, whenever 8 descriptors have been consumed from a VF's cache, a read of 8 more descriptors is initiated.
A receive completion queue entry may be written upon execution of a received WR VDM. The RxCQ entry points to the buffer where the message was stored and conveys upper layer protocol information from the message header to the driver. Each DMA VF maintains multiple receive completion queues, RxCQs, selected by a hash of the Source Global ID (GID) and an RxCQ_hint field in the WR VDM in NIC mode, to support a proprietary Receive Side Scaling mechanism, RSS, which divides the receive processing workload over multiple CPU cores in the host.
The exact hash masks an XOR involving the source GID and the RxCQ_hint with MASK = 2^(RxCQ Enable[3:0]) − 1, which picks up just enough of the low 1, 2, 3, ... 8 bits of the XOR result to encode the number of enabled RxCQs. The RxCQ and GID or source ID may be used for load balancing.
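The selection can be approximated by the following C sketch; the exact hardware hash is not reproduced here, so a plain XOR of the source GID and the RxCQ_hint stands in for it, while the mask follows the definition above.

#include <stdint.h>

/* Select a receive completion queue for a NIC mode message. rxcq_enable is
 * the log2 of the number of enabled RxCQs, so the mask keeps 1..8 low bits
 * of the hash result. The XOR below is an illustrative stand-in for the
 * actual hardware hash of the source GID and RxCQ_hint. */
static unsigned rxcq_select(uint16_t src_gid, uint8_t rxcq_hint,
                            unsigned rxcq_enable)
{
    unsigned mask = (1u << rxcq_enable) - 1;  /* 2^(RxCQ Enable) - 1        */
    return ((unsigned)src_gid ^ rxcq_hint) & mask;  /* spreads flows across
                                                       cores for RSS        */
}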
In RDMA mode, an RxCQ is by default only written in case of an error. Writing of the RxCQ for a successful transfer is disabled by assertion of the NoRxCQ flag in the descriptor and message header. The RxCQ to be used for RDMA is specified in the Buffer Tag Table entry for cases where the NoRxCQ flag in the message header isn't asserted. The local completion queue writes are simply posted writes using circular pointers. The receive completion message write payloads are 20B in length and aligned on 32B boundaries. Receive completion messages and further protocol details are in the Receive Completion Message subsection.
A VF may use a maximum of from 4 to 64 RxCQs, per the VF configuration. The software may enable fewer than the maximum number of available RxCQs, but the number enabled must be a power of two. As an example, if a VF can have a maximum of 64 RxCQs, software can enable 1, 2, 4, 8, 16, 32, or 64 RxCQs. RxCQs are managed via indexed access to the following registers in the VF's BAR0 memory mapped register space.
In order to manage the link bandwidth utilization of the host port by message data pulled from a remote host, limitations are placed on the number of outstanding pull protocol remote read requests. A limit is also placed on the fraction of the link bandwidth that the remote reads are allowed to consume. This mechanism is managed via the registers defined in Table 1 below. A limit is placed on the total number of remote read requests an entire port is allowed to have outstanding. Limits are also placed on the number of outstanding remote reads for each individual work request. Separate limits are used for this depending upon whether the port is considered to be busy. The intention is that a higher limit will be configured for use when the port isn't busy than when it is.
DMA interrupts are associated with TxCQ writes, RxCQ writes and DMA error events. An interrupt will be asserted following a completion queue write (which follows completion of the associated data transfer) if the IntNow field is set in the work request descriptor and the interrupt isn't masked. If the IntNow field is zero, then the interrupt moderation logic determines whether an interrupt is sent. Error event interrupts are not moderated.
Two fields in the TxCQ and RxCQ low base address registers described earlier define the interrupt moderation policy:
Interrupt moderation count defines the number of completion queue writes that have to occur before causing an interrupt. If the field is zero, an interrupt is generated for every completion queue write. Interrupt moderation timeout is the amount of time to wait before generating an interrupt for completion queue writes. The paired count and timer values are reset after each interrupt assertion based on either value.
The two moderation policies work together. For example, if the moderation count is 16, the timeout is set to 2 μs, and the time elapsed between the 5th and 6th completions exceeds 2 μs, an interrupt will be generated due to the interrupt moderation timeout. Likewise, using the same moderation setup, if 16 writes to the completion queues happen without exceeding the time limit between any two of them, an interrupt will be generated due to the count moderation policy.
The interrupt moderation fields are 4 bits wide each and specify a power of 2. So an entry of 2 in the count field specifies a moderation count of 4. If either field is zero, then there is no moderation policy for that function.
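A sketch of the combined count and time moderation, assuming microsecond timestamps and illustrative structure names, is shown below; the device's timer handling differs in detail.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative moderation state for one completion queue. The 4-bit count
 * and timeout fields each encode a power of two; a zero count field means
 * interrupt on every write, a zero timeout field disables the timer. */
struct cq_moderation {
    uint8_t  count_log2;     /* e.g. 2 -> moderation count of 4            */
    uint8_t  timeout_log2;   /* assumed here to encode microseconds        */
    uint32_t pending;        /* completion writes since the last interrupt */
    uint64_t last_write_us;  /* time of the most recent completion write   */
};

/* Called for each completion queue write; returns true if an interrupt
 * should be asserted because the count limit has been reached. */
static bool cq_moderate_on_write(struct cq_moderation *m, uint64_t now_us)
{
    uint32_t limit = m->count_log2 ? (1u << m->count_log2) : 1;
    m->pending++;
    m->last_write_us = now_us;
    if (m->pending >= limit) {
        m->pending = 0;            /* count and timer are both reset */
        return true;
    }
    return false;
}

/* Called when the moderation timer is checked; returns true if pending
 * completions have waited longer than the timeout. */
static bool cq_moderate_on_timer(struct cq_moderation *m, uint64_t now_us)
{
    uint64_t timeout = m->timeout_log2 ? (1ull << m->timeout_log2) : 0;
    if (m->pending && timeout && now_us - m->last_write_us >= timeout) {
        m->pending = 0;
        return true;
    }
    return false;
}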
DMA VF Interrupts are controlled by the following registers in the VF's BAR0 memory mapped register space. The QUEUE_INDEX applies to writes to the RxCQ Interrupt Control array.
For all DMA VF configurations:
Software can enable as many MSI-X vectors as needed for handling RxCQ interrupts (a power of 2 vectors). For example, in a system that has 4 CPU cores, it may be enough to have just 4 MSI-X vectors, one per core, for handling receive interrupts. In this case, software can enable 2 + 4 = 6 MSI-X vectors and assign MSI-X vectors 2-5 to each core using the CPU affinity masks provided by operating systems. The RXCQ_VECTOR register (868h), described below, allows mapping of an RxCQ to a specific MSI-X vector.
The table below shows the device specific interrupt masks for the DMA VF interrupts.
MSI-X capability structures are implemented in the synthetic configuration space of each DMA VF. The MSI-X vectors and the PBA array pointed to by those capability structures are located in the VF's BAR0 space, as defined by the table below. While the following definition defines 258 MSI-X vectors, the number of vectors and entries in the PBA array are as per the DMA configuration mode: only 6 vectors per VF for mode 6 and only 66 vectors per VF for mode 2. The MSI-X Capability structure will show the correct number of MSI-X vectors supported per VF based on the DMA configuration mode.
These registers are in the VF's BAR0 memory mapped space. The first part of the table below shows configuration space registers that are memory mapped for direct access by the host. The remainder of the table details some device specific registers that didn't fit in prior subsections.
Here the basic NIC and RDMA mode write and read operations are described by means of ladder diagrams. Descriptor and message formats are documented in subsequent subsections.
Short packet push (SPP) transfers are used to push messages or message segments less than or equal to 116B in length across the fabric embedded in a work request vendor defined message (WR VDM). Longer messages may be segmented into multiple SPPs. Spreadsheet calculation of protocol efficiency shows a clear benefit for pushing messages up to 232B in payload length. Potential congestion from an excess of push traffic argues against doing this for longer messages except when low latency is judged to be critical. Driver software chooses the length boundary between use of push or pull semantics on a packet by packet basis and can adapt the threshold in reaction to congestion feedback in TxCQ messages. A pull completion message may include congestion feedback.
A ladder diagram for the short packet push transfer is shown in
The ladder diagram assumes an Rx Q descriptor has been prefetched and is already present in the switch when the SPP WR VDM arrives and bubbles to the top of the incoming work request queue.
A ladder diagram for the NIC mode write pull transfer is shown in
The pull transfer WR VDM is a gather list with up to 10 pointers and associated lengths, as specified in the Pull Mode Descriptors subsection. For each pointer in the gather list, the DMA engine sends an initial remote read request of up to 64B to align to the nearest 64B boundary. From this 64B boundary, all subsequent remote reads generated by the same work request will be 64 byte aligned. Reads will not cross a 4 KB boundary. If and when the read address is already 64 byte aligned and greater than or equal to 512B from a 4 KB boundary, the maximum read request size of 512B will be issued. In NIC mode, pointers may start and end on arbitrary byte boundaries.
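Those sizing rules can be captured in a short C sketch; this expresses only the rules as stated above (align to 64B first, never cross a 4 KB boundary, issue 512B reads when possible) and is not the DMA engine's implementation.

#include <stddef.h>
#include <stdint.h>

/* Compute the size of the next remote read request for one gather pointer. */
static size_t next_read_size(uint64_t addr, size_t remaining)
{
    size_t to_64b = (64 - (addr & 63)) & 63;   /* 0 if already 64B aligned   */
    size_t to_4k  = 4096 - (addr & 4095);      /* bytes left in the 4 KB page */
    size_t n;

    if (to_64b)
        n = to_64b;         /* short read up to the next 64B boundary        */
    else
        n = 512;            /* aligned: request the 512B maximum             */

    if (n > to_4k)
        n = to_4k;          /* never cross a 4 KB boundary                   */
    if (n > remaining)
        n = remaining;      /* never read past the end of this pointer       */
    return n;
}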
Partial completions to the remote read requests are combined into a single completion at the source switch. The destination DMAC then receives a single completion to each 512B or smaller remote read request. Each such completion is written into destination memory at the address specified in the next entry of the target VF's Rx Q but at an offset within the receive buffer of up to 511B. The offset used is the offset of the pull transfer pointer's starting address from the nearest 512B boundary. This offset is passed to the receive driver in the RxCQ message. When the very last completion has been written, the destination DMA engine then sends the optional ZBR, if enabled, and writes to the RxCQ, if enabled. After the last ACK for the data writes and the completion to the ZBR have been received, the DMA engine sends a TxCQ VDM back to the source DMA. The source DMA engine then writes the TxCQ message from the VDM onto the source VF's TxQ.
Transmit and receive interrupts follow their respective completion queue writes, if not masked off or inhibited by the interrupt moderation logic.
In PCIe, receipt of the DLLP ACK for the writes of read completion data into destination memory signals that the component above the switch, the RC in this usage model, has received the writes without error. If the last write is followed by a 0-byte read (ZBR) of the last address written, then the receipt of the completion for this read signals that the writes (which don't use relaxed ordering) have been pushed through to memory. The ACK and the optional zero byte read are used in our host to host protocol to guarantee delivery not just to the destination DMAC but to the RC and, if ZBR is used, to the target memory in the RC.
As shown in the ladder of
The receive completion queue write, on the other hand, doesn't need to wait for the ACK because the PCIe DLL protocol ensures that if the data writes don't complete successfully, the completion queue write won't be allowed to move forward. Where the delivery guarantee isn't needed, there is some advantage to returning the TxCQ VDM at the same time that the receive completion queue is written but as yet, no mechanism has been specified for making this optional.
If the WR is an RDMA (untagged) short packet push, then the short message (up to 108B for a 128B descriptor) is written directly to the destination. For the longer pull transfer, the bytes used for a short packet push message in the WR VDM and descriptor are replaced by a gather list of up to 10 pointers to the message in the source host's memory. For RDMA transfers, each pointer in the gather list, except for the first and last, must be an integral multiple of 4 KB in length, up to 64 KB. The first pointer may start anywhere but must end on a 4 KB boundary. The last pointer must start on a 4 KB boundary but may end anywhere. An RDMA operation represents one application message, and so the message data represented by the pointers in an RDMA Write WR is contiguous in the application's virtual address space. It may be scattered in the physical/bus address space, and so each pointer in the physical/bus address list will be page aligned as per the system page size.
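The alignment rules for RDMA gather lists can be expressed by the following illustrative check; the single-pointer case and the exact length limits for the first and last pointers are assumptions beyond what is stated above.

#include <stdbool.h>
#include <stdint.h>

struct gather_ptr { uint64_t addr; uint32_t len; };

/* Illustrative validity check of an RDMA gather list per the rules above:
 * interior pointers are a multiple of 4 KB in length (up to 64 KB), the
 * first pointer must end on a 4 KB boundary, and the last pointer must
 * start on one. A single-pointer list is assumed unconstrained here. */
static bool rdma_gather_list_valid(const struct gather_ptr *sg, unsigned n)
{
    for (unsigned i = 0; i < n; i++) {
        bool first = (i == 0), last = (i == n - 1);
        if (sg[i].len == 0)
            return false;
        if (!first && !last &&
            ((sg[i].len & 0xFFF) || sg[i].len > 64 * 1024))
            return false;        /* interior: 4 KB multiple, <= 64 KB      */
        if (first && !last && ((sg[i].addr + sg[i].len) & 0xFFF))
            return false;        /* first pointer must end on 4 KB         */
        if (last && !first && (sg[i].addr & 0xFFF))
            return false;        /* last pointer must start on 4 KB        */
    }
    return true;
}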
If, as shown in the figure, the WR VDM contains a pull request, then the destination DMA VF sends potentially many 512B remote read request VDMs back to the source node using the physical address pointers contained in the original WR, as well as shorter read requests to deal with alignment and 4 KB boundaries. Partial completions to the 512B remote read requests are combined at the source node, the one from which data is being pulled, and are sent across the fabric as single standard PCIe 512B completion TLPs. When these completions reach the destination node, their payloads are written to destination host memory.
For NIC mode, the switch maintains a cache of receive buffer pointers prefetched from each VF's receive queue (RxQ) and simply uses the next buffer in the FIFO cache for the target VF. For the RDMA transfer shown in the figure, the destination buffer is found by indexing the VF's Buffer Tag Table (BTT) with the Buffer Tag in the WR VDM. The read of the BTT is initiated at the same time as the remote read request and thus its latency is masked by that of the remote read. In some cases, two reads of host memory are required to resolve the address—one to get the security parameters and the starting address of a linked list and a second that indexes into the linked list to get destination page addresses.
For the transfer to be allowed to complete, the following fields in both the WR VDM and the BTT entry must match:
In addition, the SEQ in the WR VDM must match the expected SEQ stored in an RDMA Connection Table in the switch. The read of the local BTT is overlapped with the remote read of the data and thus its latency is masked. If any of the security checks fail, any data already read or requested is dropped, no further reads are initiated, and the transfer is completed with a completion code indicating security check failure. The RDMA connection is then broken so no further transfers are accepted in the same connection.
After the data transfer is complete, both source and destination hosts are notified via writes into completion queues. The write to the RxCQ is enabled by a flag in the descriptor and WR VDM and by default is omitted in RDMA. Additional RDMA protocol details are in the RDMA Layer subsection.
A separate DMA function is implemented for use by the MCPU and configured/controlled via the following registers in the GEP BAR0 per station memory mapped space.
The following table summarizes the differences between MCPU DMA and a host port DMA as implemented in the current version of hardware (these differences may be eliminated in a future version):
MCPU DMA Registers are present in each station of a chip (as part of the station registers). In cases where the x1 management port is used, the Station 0 MCPU DMA registers should be used to control the MCPU DMA. For in-band management (any host port serving as MCPU), the station that contains the management port also has the valid set of MCPU DMA registers for controlling the MCPU DMA.
The following types of objects may be placed in a TxQ:
The three short packet push and pull descriptor formats are treated exactly the same by the hardware and differ only in how the software processes their contents. As will be shown shortly, for RDMA, the first two DWs of the short packet payload portion of the descriptor and message generated from it contain RDMA parameters used for security check and to look up the destination application buffer based on a buffer tag.
The RDMA Read Request Descriptor is the basis for an RDMA Read Request VDM, which is a DMA engine to DMA engine message used to convert an RDMA read request into a set of RDMA write-like data transfers.
Packets, descriptors, and Vendor Defined Messages that carry them across the fabric share the common header fields defined in the following subsections. As noted, some of these fields appear in both descriptors and the VDMs created from the descriptors and others only in the VDMs.
This is the Global RID of the destination host's DMA VF.
This field appears in the VDMs only and is filled in by the hardware to identify the source DMA VF.
This field defines the length in DWs of the payload of the Vendor Defined Message that will be created from the descriptor that contains it. For a short packet push, this field, together with “Last DW BE” indirectly defines the length of the message portion of the short packet push VDM and requires that the VDM payload be truncated at the end of the DW that contains the last byte of message.
LastDW BE appears only in NIC and RDMA short packet push messages but not in their descriptors. It identifies which leading bytes of the last DW of the message are valid based on the lowest two bits of the encapsulated packet's length. (This isn't covered by the PCIe Payload Length because it resolves only down to the DW.)
The cases are:
This is the DomainID (independent bus number space) of the destination.
When the Destination Domain differs from the source's Domain, then the DMAC adds an Interdomain Routing Prefix to the fabric VDM generated from the descriptor.
The TC field of the VDM defines the fabric Traffic Class of the work request VDM. The TC field of the work request message header is inserted into the TLP by the DMAC from the field of the same name in the descriptor.
D-Type stands for descriptor type, where the “D-” is used to differentiate it from the PCIe packet “type”. A TxQ may contain any of the object types listed in the table below. An invalid type is defined to provide robustness against some software errors that might lead to unintended transmissions. D-Type is a 4-bit wide field.
The DMAC will not process an invalid or reserved object other than to report its receipt as an error.
TxQ Index is a zero based TxQ entry number. It can be calculated as the offset from the TxQ Base Address at which the descriptor is located in the TxQ, divided by the configured descriptor size of 64B or 128B. It doesn't appear in descriptors but is inserted into the resulting VDM by the DMAC. It is passed to the destination in the descriptor/short packet and returned to the source software in the transmit completion message to facilitate identification of the object to which the completion message refers.
TxQ ID is the zero based number of the TxQ from which the work request originated. It doesn't appear in descriptors but is inserted into the resulting VDM by the DMAC. It is passed to the destination in the descriptor/short packet message and returned to the source software in the transmit completion message to facilitate processing of the TxCQ message.
The TxQ ID has the following uses:
SEQ is a sequence number passed to the destination in the descriptor/short packet message, returned to the source driver in the Tx Completion Message, and passed to the Rx Driver in the Rx Completion Queue entry. A sequence number can be maintained by each source {TC, VF} for each destination VF to which it sends packets. A sequence number can be maintained by each destination VF for each source {TC, VF} from which it receives packets. The hardware's only role in sequence number processing is to convey the SEQ between source and destination as described. The software is charged with generating and checking SEQ so as to prevent out of order delivery and to replay transmissions as necessary to guarantee delivery in order and without error. A SEQ number is optional for most descriptor types, except for RDMA descriptors that have the SEQ_CHK flag set.
This 6-bit field identifies the VPF of which the source of the packet is a member. It will be checked at the receiver and the WR will be rejected if the receiver is not also a member of the same VPF. The VPFID is inserted into WR VDMs at the transmitting node.
The override VPFID inserted by the Tx HW if OE is set.
Override enable for the VPFID. If this bit is set, then the Rx VLAN filtering is done based on the override VPFID field rather than the VPFID field inserted in the descriptor by the Tx driver.
P_Choice is used by the Tx driver to indicate its choice of path for the routing of the ordered WR VDM that will be created from the descriptor.
ULP (Upper Layer Protocol) Flags is an opaque field conveyed from source to destination in all work request message packets and descriptors. ULP Flags provide protocol tunneling support. PLX provided software components use the following conventions for the ULP Flags field:
The 16-bit RDMA Buffer Tag provides a table ID and a table index used with the RDMA Starting Buffer Offset to obtain a destination address for an RDMA transfer.
The RDMA Security Key is an ostensibly random 16-bit number that is used to authenticate an RDMA transaction. The Security Key in a source descriptor must match the value stored at the Buffer Tag in the RDMA Buffer Tag Table in order for the transfer to be completed normally. A completion code indicating a security violation is entered into the completion messages sent to both source and destination VF in the event of a mismatch.
The 16-bit RxConnID identifies an RDMA connection or queue pair. The receiving node of a host to host RDMA VDM work request message uses the RxConnID to enforce ordering, through sequence number checking, and to force termination of a connection upon error. When the EnSeqChk flag is set in a Work Request (WR), the RxConnID is used by hardware to validate the SEQ number field in the WR for the connection associated with the RxConnID.
The RDMA Starting Buffer Offset specifies the byte offset into the buffer defined via the RDMA Buffer Tag at which the transfer will start. This field contains a 64-bit value from which the Virtual Base Address field of the BTT entry is subtracted to define the offset into the buffer. This is the virtual address of the first byte of the RDMA message given by the RDMA application as per RDMA specifications. When the Virtual Base Address field in the BTT is made zero, this RDMA Starting Buffer Offset can denote the absolute offset, within the destination buffer, of the first byte of transfer in the current WR.
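Expressed as a one-line C helper, under the assumption that both quantities are plain 64-bit values, the offset resolution reads:

#include <stdint.h>

/* Offset of the first byte within the registered buffer: the Starting
 * Buffer Offset carries the application's virtual address, and the BTT
 * entry's Virtual Base Address is subtracted from it. With a zero base,
 * the field is used directly as an absolute offset. */
static uint64_t rdma_buffer_offset(uint64_t starting_buffer_offset,
                                   uint64_t btt_virtual_base)
{
    return starting_buffer_offset - btt_virtual_base;
}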
ZBR stands for Zero Byte Read. If this bit is a ONE, then a zero byte read of the last address written is performed by the Rx DMAC prior to returning a TxCQ message indicating success or failure of the transfer.
The following tables define the formats of the defined TxQ object types, which include the short packet and several descriptors. In any TxQ, objects are sized/padded to a configured value of 64 or 128 bytes and aligned on 64 or 128 byte boundaries per the same configuration. The DMA will read a single 64B or 128B object at a time from a TxQ.
If this bit is set, a completion message won't be written to the designated RxCQ and no interrupt will be asserted on receipt of the message, independent of the state of the interrupt moderation counts on any of the RxCQs.
If this bit is set in the descriptor, and NoRxCQ is clear, then an interrupt will be asserted on the designated RxCQ at the destination immediately upon delivery of the associated message, independent of the interrupt moderation state. The assertion of this interrupt will reset the moderation counters.
This 8-bit field seeds the hashing and masking operation that determines the RxCQ and interrupt used to signal receipt of the associated NIC mode message. RxCQ Hint isn't used for RDMA transfers. For RDMA, the RxCQ to be used is designated in the BTT entry.
This flag in an RDMA work request causes the referenced Buffer Tag to be invalidated upon completion of the transfer.
This flag in an RDMA work request signals the receive DMA to check the SEQ number and to perform an RxCQ write independent of the RDMA verb and the NoRxCQ flag.
The PATH parameter is used to choose among alternate paths for routing of WR and TxCQ VDMs via the DLUT.
Setting of the RO parameter in a descriptor allows the WR VDM created from the descriptor to be routed as an unordered packet. If RO is set, then the WR VDM marks the PCIe header as RO per the PCIe specification by setting ATTR[2:1] to 2′b01.
Descriptors are defined as little endian. The NIC mode short packet push descriptor is shown in the table below.
The bulk of the NIC Mode short packet descriptor is the short packet itself. This descriptor is morphed into a VDM with data that is sent to the {Destination Domain, Destination Global RID}, aka GID, where the payload is written into a NIC mode receive buffer and then the receiver is notified via a write to a receive completion queue, RxCQ. With 128B descriptors, up to 116 byte messages may be sent this way; with 64B descriptors the length is limited to 52 bytes. The VDM used to send the short packet through the fabric is defined in Table 19 NIC Mode Short Packet VDM.
The CTRL short packet is identical to the NIC Mode Short Packet, except for the D-Type code. CTRL packets are used for Tx driver to Rx driver control messaging.
Pull mode descriptors contain a gather list of source pointers. A “Total Transfer Length (Bytes)” field has been added for the convenience of the hardware in tracking the total amount in bytes of work requests outstanding. The 128B pull mode descriptor is shown in the table above and the 64B pull mode descriptor in the table below. These descriptors can be used in both NIC and RDMA modes with the RDMA information being reserved in NIC mode.
The User Defined Pull Descriptor follows the above format through the first 2 DWs. Its contents from DW2 through DW31 are user definable. The Tx engine will convert and transmit the entire descriptor RCB as a VDM.
While the provision of a separate length field for each pointer implies a more general buffer structure, this generation of hardware assumes the following regarding pointer length and alignment:
An example pull descriptor VDM is shown in Table 22 Pull Descriptor VDM with only 3 Pointers. The maximum pull descriptor message that can be supported with a 128-byte descriptor contains 10 pointers, which is the maximum length. If the entire message can be described with fewer pointers, then the unneeded pointers and their lengths are dropped; an example of this is shown in Table. The maximum pull descriptor supported with a 64B descriptor includes only 3 pointers. (64B descriptors aren't supported in Capella 2 but are documented here for completeness.)
The above descriptor formats are used for pull mode transfers of any length. In NIC mode (also encoded in the Type field), the RDMA fields, i.e. the security key and starting offset, are reserved. Unused pointers and lengths in a descriptor are don't cares.
The descriptor size is fixed at 64B or 128B as configured for the TxQ independent of the number of pointers actually used. For improved protocol efficiency, pointers and length fields not used are omitted from the vendor defined fabric messages that convey the pull descriptors to the destination node.
The following subsections define the PCIe Vendor Defined Message TLPs used in the host to host messaging. For each TxQ object defined in the previous subsection there is a definition of the fabric message into which it is morphed. The Vendor Defined Messages (VDM) are encoded as Type 0, which specifies an Unsupported Request response, rather than silent discard, when received by a device that does not support the message, as shown in
The PCIe Message Code in the VDM identifies the message type as vendor defined Type 0. The table below defines the meaning of the PLX Message Code that is inserted in the otherwise unused TAG field of the header. The table includes all the message codes defined to date. In the cases where a VDM is derived from a descriptor, the descriptor type and name are listed in the table.
The NIC mode short packet push VDM is derived from Table 15 NIC Mode Short Packet Descriptor. NIC mode short packet push VDMs are routed as unordered. Their ATTR fields should be set to 3′b010 to reflect this property (under control of a chicken bit, in this case).
For NIC mode, only the IntNow flag may be used.
The Pull Mode Descriptor VDM is derived from Table 16 128B Pull Mode Descriptor.
The above table shows the maximum pull descriptor message that can be supported with a 128-byte descriptor. It contains 10 pointers. This is the maximum length. If the entire message can be described with fewer pointers, then unneeded pointers and their lengths are dropped. An example of this is shown in Table.
RDMA parameters are reserved in NIC mode.
The above table shows the maximum pull descriptor supported with a 64B descriptor.
The above table illustrates the compaction of the message format by dropping unused Packet Pointers and Length at Pointers fields. Per the NumPtrs field, only 3 pointers were needed. Length fields are rounded up to a full DW, so the 2 bytes that would have been “Length at Pointer 3” become don't cares.
The remote read requests of the pull protocol are sent from destination host to the source host as ID-routed Vendor Defined Messages using the format of Table 23 Remote Read Request VDM. The address in the message is a physical address in the address space of the host that receives the message, which was also the source of the original pull request. In the switch egress port that connects to this host, the VDM is converted to a standard read request using the Address, TAG for Completion, Read Request DW Length, and first and last DW BE fields of the message. The message and read request generated from it are marked RO via the ATTR fields of the headers.
This VDM is to be routed as unordered so the ATTR fields should be set to 3′b010 to reflect its RO property.
The doorbell VDMs, whose structure is defined in the table below are sent by a hardware mechanism that is part of the TWC-H endpoint. Refer to the TWC chapter for details of the doorbell signaling operation.
A completion message is returned to the source host for each completed message (i.e. a short packet push or a pull or an RDMA read request) in the form of an ID-routed TxCQ VDM. The source host expects to receive this completion message and initiates recovery if it doesn't. To detect missing completion messages, the Tx driver maintains a SEQ number for each {source ID, destination ID, TC}. Within each stream, completion messages are required to return in SEQ order. An out of order SEQ in an end to end defined stream indicates a missed/lost completion message and may result in a replay or recovery procedure.
The completion message includes a Condition Code (CC) that indicates either success or the reason for a failed message delivery. CCs are defined in the CCode subsection.
The completion message ultimately written into the sender's Transmit Completion Queue crosses the fabric embedded in bytes 12-15 of an ID routed VDM with 1 DW of payload, as shown in Table. This VDM is differentiated from other VDMs by the PLX MSG field embedded in the PCIe TAG field. When the TxCQ VDM finally reaches its target host's egress, it is transformed into a posted write packet with payload extracted from the VDM and the address obtained from the Completion Queue Tail Pointer of the queue pointed to by the TxQ ID field in the message.
The PCIe definition of an ID routed VDM includes both Requester and Destination ID fields. They are shown in the table above as GRIDs because Global RIDs are used in these fields. Since this is a completion message, the Requester GRID field is filled with the Completer's GRID, which was the Destination GRID of the message to which the completion responds. The Destination GRID of the completion message was the Requester GRID of that original message. It is used to route the completion message back to the original message's source DMA VF TxQ.
The Completer Domain field is filled with the Domain in which the DMAC creating the completion message is located.
The VDM is routed unchanged to the host's egress pipeline, where it is morphed into a Posted Write to the current value of the Tx CQ pointer of the TxQ from which the message being completed was sent, and is then sent out the link to the host. The queue pointer is then incremented by the fixed payload length of 8 bytes and wrapped back to the base address at the limit+1.
The Tx Driver uses the TxQ ID field and TxQ Index field to access its original TxQ entry where it keeps the SEQ that it must check. If the SEQ check passes, the driver frees the buffer containing the original message. If not and if the transfer was RDMA, it initiates error recovery. In NIC mode, dealing with out of order completion is left to the TCP/IP stack. The Tx Driver may use the congestion feedback information to modify its policies so as to mitigate congestion.
After processing a transmit completion queue entry, the driver writes zeros into its Completion Type field to mark it as invalid. When next processing a Transmit Completion Interrupt, it reads and processes entries down the queue until it finds an invalid entry. Since TxCQ interrupts are moderated, it is likely that there are additional valid TxCQ entries in the queue to be processed.
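A minimal sketch of that drain loop in C, assuming a hypothetical entry layout (txcq_entry) and a driver-supplied process_tx_completion() routine; the real field offsets come from the TxCQ entry tables.

```c
#include <stdint.h>

/* Hypothetical TxCQ entry layout for illustration only. */
struct txcq_entry {
    uint8_t  completion_type;      /* zero marks the entry invalid (already processed) */
    uint8_t  completion_code;
    uint8_t  congestion_indicator;
    uint8_t  flags;
    uint16_t txq_id;
    uint16_t txq_index;
};

void process_tx_completion(struct txcq_entry *e);  /* driver specific: SEQ check, free buffer, or recover */

/* On a (moderated) Transmit Completion Interrupt, process entries down the
 * queue until an invalid entry is found, zeroing the Completion Type of each
 * processed entry so it reads as invalid on the next pass. */
static void drain_txcq(struct txcq_entry *ring, unsigned ring_size, unsigned *head)
{
    while (ring[*head].completion_type != 0) {
        process_tx_completion(&ring[*head]);
        ring[*head].completion_type = 0;
        *head = (*head + 1) % ring_size;
    }
}
```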
The software prevents overflow of its Tx completion queues by limiting the number of outstanding/incomplete source descriptors, by proper sizing of the TXCQ based on the number and sizes of TXQs, and by taking into consideration the bandwidth of the link.
For each completed source descriptor and short packet push, a completion message is also written into a completion queue at the receiving host. Completion messages to the receiving host are standard posted writes using one of its VF's RxCQ Pointers, per the PLX-RSS algorithm. Table shows the payload of the Completion Message written into the appropriate RxCQ for each completed source descriptor and short packet push transfer received with the NoRxCQ bit clear. The payload changes in DWs three and four for RDMA vs. NIC mode as indicated by the color highlighting in the table. The “RDMA Buffer Tag” and “Security Key” fields are written with the same data (from the same fields of the original work request VDM) as for an RDMA transfer. The Tx driver sometimes conveys connection information to the Rx driver in these fields when NIC format is used.
In NIC mode, the receive buffer address is located indirectly via the Rx Descriptor Ring Index. This is the offset from the base address of the Rx Descriptor ring from which the buffer address was pulled. Again in NIC mode, one completion queue write is done for each buffer so the transfer length of each completion queue entry contains only the amount in that message's buffer, up to 4K bytes. Software uses the WR_ID and SEQ fields to associate multiple buffers of the same message with each other. The CFLAGS field indicates the start, continuation, and end of a series of buffers containing a single message. It's not necessary that messages that span multiple buffers use contiguous buffers or contiguous RxCQ entries for reporting the filling of those buffers.
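The sketch below illustrates, under assumed field names and CFLAGS bit positions (hypothetical), how an Rx driver might use the Start/Continue/End flags together with WR_ID/SEQ to collect the non-contiguous 4 KB buffers of a single message.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical CFLAGS encoding; actual bit positions come from the RxCQ entry format. */
#define CFLAG_START 0x1
#define CFLAG_CONT  0x2
#define CFLAG_END   0x4

struct nic_rxcq_entry {
    uint16_t wr_id;                /* with SEQ, associates buffers of one message */
    uint8_t  seq;
    uint8_t  cflags;               /* Start / Continue / End                      */
    uint16_t rx_desc_ring_index;   /* locates the 4 KB buffer indirectly          */
    uint16_t buf_len;              /* bytes valid in this buffer, up to 4 KB      */
};

struct msg_ctx {                   /* reassembly state keyed by {wr_id, seq}      */
    uint32_t total_len;
    uint16_t nbufs;
    bool     in_progress;
};

static void on_nic_rxcq_entry(struct msg_ctx *ctx, const struct nic_rxcq_entry *e)
{
    if (e->cflags & CFLAG_START) {
        ctx->total_len = 0;
        ctx->nbufs = 0;
        ctx->in_progress = true;
    }
    ctx->total_len += e->buf_len;
    ctx->nbufs++;
    if (e->cflags & CFLAG_END)
        ctx->in_progress = false;  /* all buffers collected; hand message to the stack */
}
```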
The NIC/CTRL/Send form of the RxCQ entry is also used for CTRL transfers and for RDMA transfers, such as untagged SEND, that don't transfer directly into a pre-registered buffer. The RDMA parameters are always copied from the pull request VDM into the RxCQ entry as shown, because they are valid for some transfers that use the NIC form.
The RDMA pull mode completion queue entry format is shown in the table above. A single entry is created for each received RDMA pull message in which the NoRxCQ flag is de-asserted or for which it is necessary to report an error. It is defined as 32B in length but only the first 20B are valid. The DMAC creates a posted write with a payload length of 20B to place an RDMA pull completion message onto a completion queue. After each such write, the DMAC increments the queue pointer by 32B to preserve RxCQ alignment. Software is required to ignore bytes 21-31 of an RDMA RxCQ entry. An RxCQ may contain both 20B RDMA completion entries and 20B NIC mode completion entries also aligned on 32B boundaries. For tagged RDMA transfers, the destination buffer is defined via the RDMA Buffer Tag and the RDMA Starting Offset. One completion queue write is done for each message so the transfer length field contains the entire byte length received.
The previously undefined fields of completion queue entries and messages are defined here.
This definition applies to both Tx and Rx CQ entries.
The definition of completion codes in Table applies to both Tx and Rx CQ entries. If multiple error/failure conditions obtain, the one with the lowest completion code is reported.
The 3-bit Congestion Indicator field appears in the TxCQ entry and is the basis for end to end flow control. The contents of the field indicate the relative queue depth of the DMA Destination Queue(TC) of the traffic class of the message being acknowledged. The Destination DMA hardware fills in the CI field of the TxCQ message based on the fill level of the work request queue of its port and TC.
The Congestion Indicator field can be used by the driver SW to adjust the rate at which it enqueues messages to the node that returned the feedback.
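For example, a Tx driver could pace each destination with a simple additive-increase/multiplicative-decrease rule keyed off the returned CI; the thresholds and step sizes below are illustrative assumptions, not hardware requirements.

```c
#include <stdint.h>

/* Illustrative per-destination pacing state adjusted from the 3-bit CI. */
struct dest_pacing {
    uint32_t max_outstanding_bytes;   /* cap on enqueued-but-incomplete work */
};

static void on_congestion_indicator(struct dest_pacing *p, uint8_t ci /* 0..7 */)
{
    if (ci >= 6) {                                    /* destination queue deep: back off */
        p->max_outstanding_bytes /= 2;
        if (p->max_outstanding_bytes < 4096)
            p->max_outstanding_bytes = 4096;          /* keep a minimum window */
    } else if (ci <= 1 && p->max_outstanding_bytes < (1u << 24)) {
        p->max_outstanding_bytes += 64 * 1024;        /* lightly loaded: probe upward */
    }
}
```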
Tx Q Index in a TxCQ VDM is a copy of the TxQ Index field in the WR VDM that the TxCQ VDM completes. The Tx Q Index in a TxCQ VDM points to the original TXQ entry that is receiving the completion message.
TxQ ID is the name of the queue at the source from which the original message was sent. The TxQ ID is included in the work request VDM and returned to the sender in the TxCQ VDM. TxQ ID is a 9-bit field.
A field from the source descriptor that is returned in both Tx CQ and Rx CQ entries. It is maintained as a sequence number by the drivers at each end to enforce ordering and implement the delivery guarantee.
This bit indicates to the Rx Driver whether the sender requested a SEQ check. For non-RDMA WRs, software can implement sequence checking as an optional feature using this flag. Such sequence checking may also be accompanied by validating an application stream to maintain the order of operations within a specific application flow.
Destination Domain of Message being Completed
This field identifies the bus number Domain of the source of the completion message, which was the destination of the message being completed.
The CFlags are part of the NIC mode RxCQ message and indicate to the receive driver that the message spans multiple buffers. The Start Flag is asserted in the RxCQ message written for the first buffer. The Continue Flag is asserted for intermediate buffers and the End Flag is asserted for the last buffer of a multiple buffer message. This field helps to collect all the data buffers that result from a single WR by the receiving side software.
This field appears only in the RDMA RxCQ message. The maximum RDMA message length is 10 pointers each with a length of up to 65 KB. The total fits in the 20-bit “Transfer Length of Entire Message” field. The 16 bits of this field are extended with the 4 bits of the following TTL field.
The TTL field provides the upper 4 bits of the Total Transfer Length.
Transfer Length of this Buffer (Bytes)
This field appears only in the NIC form of the RxCQ message. NIC mode buffers are fixed in length at 4 KB each.
This field appears only in the NIC form of the RxCQ message. The DMAC starts writing into the Rx buffer at an offset corresponding to A[8:0] of the remote source address in order to eliminate source-destination misalignment. The offset value informs the Rx driver where the start of the data is in the buffer.
The VPF ID is inserted into the WR by HW at the Tx and delivered to the Rx driver, after HW checking at the Rx, in the RxCQ message.
ULP Flags is an opaque field conveyed from the Tx driver to the Rx driver in all short packet and pull descriptor push messages and is delivered to the Rx driver in the RxCQ message.
This section describes RDMA transactions as exchanges of the VDMs defined in the previous section.
The table below summarizes how the descriptor and VDM formats defined in the previous section are used to implement the RDMA Verbs.
A Solicited Event implies the IntNow flag and an interrupt at the other end. At a minimum, an RxCQ entry should be received so that software can signal the event after that RDMA operation; that is the current implementation.
Hardware buffer tag security checks verify that the security key and source ID in the WR VDM match those in the BTT entry for all RDMA write and RDMA read WRs and for Send with Invalidate. If hardware receives an RDMA Send with Invalidate (with or without SE (solicited event)), hardware reads the buffer tag table and checks the security key and source GRID. If the security checks pass, hardware sets the “Invalidated” bit in the buffer tag table entry after completion of the transfer. The data being transferred is written directly into the tagged buffer at the starting offset in the work request VDM.
If an RDMA transfer references a Buffer Tag Table entry marked “Invalidated”, the work request will be dropped without data transfer and a completion message will be returned with a CC indicating Invalidated BTT entry. There is no case where an RDMA write or RDMA read can cause hardware to invalidate the buffer tag entry—this can only be done via a Send With Invalidate. Other errors such as security violation do not invalidate the buffer tag.
The RDMA protocol has the notion of a stream (connection) between the two members of a transmit-receive queue pair. If there is any problem with messages in the stream, the stream is shut down and the connection is terminated; no subsequent messages in the stream will get through. All traffic in the stream must complete in order. Connection status can't be maintained via the BTT because untagged RDMA transfers don't use a BTT entry.
When SEQ checking is performed only in the Rx driver software, SEQ isn't checked until after the data has been transferred but before upper protocol layers or the application have been informed of its arrival via a completion message. RDMA applications by default don't rely on completion messages but peek into the receive buffer to determine when data has been transferred and thus may receive data out of order unless SEQ checking is performed in the hardware. (Note however that some of the data of a single transfer may be written out of order but it is guaranteed that the last quanta (typically a PCIe maximum payload or the remainder after the last full maximum payload is transferred) will be written last.) HW SEQ checking is provided for a limited number of connections as described in the next subsection.
SEQ checking, in HW or SW, allows out of sequence WR messages, perhaps due to a lost WR message, to be detected. In such an event, the RDMA specification dictates that the associated connection be terminated. We have the option of initiating replay in the Tx driver so that upper layers never see the ordering violation and therefore we don't need to terminate the connection. However, lost packets of any type will be extremely rare so the expedient solution of simply terminating the connection is acceptable.
Our TxCQ VDM is the equivalent of the RDMA Terminate message. Any time that there is an issue with a transfer at the Rx end of the connection, such as remote read time out or a TxCQ message reports a fabric fault, the connection is taken down. The following steps are taken:
As described earlier, the receive DMA engine maintains a SEQ number for up to at least 4K connections per x4 port, shared by the DMA VFs in that port. The receive sequence number RAM is indexed by an RxConnID that is embedded in the low half of the Security Key field. HW sequence checking is enabled/disabled for RDMA transfers per the EnSeqChk flag in the descriptor and work request VDM.
Sequence numbers increment from 01h to FFh and wrap back to 01h; 00h is defined as invalid. The Rx side driver must validate a connection RAM entry by setting its ExpectedSEQ to 01h before any RDMA traffic can be sent, otherwise all traffic will fail the Rx connection check. The Tx driver must do the same thing in its internal SEQ table.
If a sequence check fails, the connection will be terminated and the associated work request will be dropped/rejected with an error completion message. These completion messages are equivalent to the Terminate message described in the RDMA specification. The terminated state is stored/maintained in the SEQ RAM by changing the ExpectedSEQ to zero. No subsequent work requests will be able to use a terminated connection until software sets the expected SEQ to 01h.
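A small C sketch of the sequence arithmetic and terminate-on-mismatch behavior described above; the storage (a per-connection ExpectedSEQ, whether in the SEQ RAM or a driver table) is represented here by a plain byte.

```c
#include <stdint.h>
#include <stdbool.h>

#define SEQ_INVALID 0x00u   /* 00h: unvalidated or terminated connection */

/* Advance a sequence number through 01h..FFh, wrapping back to 01h, never 00h. */
static uint8_t seq_next(uint8_t seq)
{
    return (seq == 0xFFu) ? 0x01u : (uint8_t)(seq + 1u);
}

/* Check a WR's SEQ against the connection's ExpectedSEQ. On mismatch (or on a
 * terminated connection) the connection is terminated by storing 00h, so later
 * WRs are rejected until software re-validates it with 01h. */
static bool seq_check(uint8_t *expected_seq, uint8_t seq_in_wr)
{
    if (*expected_seq == SEQ_INVALID || seq_in_wr != *expected_seq) {
        *expected_seq = SEQ_INVALID;   /* terminate; WR is dropped with an error completion */
        return false;
    }
    *expected_seq = seq_next(*expected_seq);
    return true;
}
```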
If there is no receive buffer available for an untagged Send due to consumption of all entries on the buffer ring, the connection must fail. In order to support this, the Tx driver inserts an RxConnID into the descriptor for an untagged Send. The RDMA Untagged Short Push and Pull Descriptors include the full set of RDMA parameter fields. For an untagged send, the Tx Driver puts the RxConnId in the Security Key just as for tagged transfers. This allows either HW or SW SEQ checking for untagged transfers, signaled via the EnSeqChk flag. In the event of an error, the connection ID is known and so the protocol requirement to terminate the connection can be met.
Memory allocated to an application is visible in multiple address spaces:
When an application allocates memory, it gets a user mode virtual address. It passes this virtual address to the kernel mode driver when it wants to register this memory with the hardware for a Buffer Tag Entry. The driver converts this to a DMA address using system calls, sets up the required page tables in memory, and then allocates/populates the BTT entry for this memory. The BTT index is returned as a KEY (LKEY/RKEY of an RDMA capable NIC) for the memory registration.
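A hedged sketch of that registration flow follows; pin_and_map_pages() and btt_alloc_entry() are placeholder names for the platform's DMA-mapping service and the driver's BTT allocator, not real APIs.

```c
#include <stdint.h>
#include <stddef.h>

struct btt_entry_params {
    uint64_t dma_base;       /* physical base, or pointer to SG list / list of lists */
    uint64_t virtual_base;   /* user virtual base address stored in the BTT entry    */
    uint64_t num_bytes;
    uint16_t security_key;
};

uint64_t pin_and_map_pages(void *user_va, size_t len);       /* placeholder OS DMA mapping     */
uint32_t btt_alloc_entry(const struct btt_entry_params *p);  /* placeholder driver BTT service */

/* Kernel-mode registration: convert the user virtual address to DMA addresses,
 * build any page tables / SG lists, populate a BTT entry, and return its index
 * as the registration key (analogous to an LKEY/RKEY of an RDMA-capable NIC). */
uint32_t register_rdma_buffer(void *user_va, size_t len, uint16_t security_key)
{
    struct btt_entry_params p;

    p.dma_base     = pin_and_map_pages(user_va, len);
    p.virtual_base = (uint64_t)(uintptr_t)user_va;
    p.num_bytes    = len;
    p.security_key = security_key;

    return btt_alloc_entry(&p);
}
```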
A destination buffer may be registered for use as a target of subsequent RDMA transfers by:
The BTT entry is defined by the following table.
The fields of the BTT entry are defined in the following table. The top two fields in this table define how the buffer mode is inferred from the size of the buffer and the MMU page size for the buffer.
Per the table above, buffers are defined in one of three ways:
1. Contiguous Buffer Mode
2. Single Page Buffer Mode
3. List of Lists Buffer Mode
The maximum size of a single buffer is 2^48 bytes, or 65K times 4 GB, far larger than needed to describe any single practical physical memory. A 4 GB buffer spans 1 million 4 KB pages. A single SG list contains pointers to 512 pages. 2K SG Lists are needed to hold 1M page pointers. Thus, the List of SG Lists for a 4 GB buffer requires a physically contiguous region 20 KB in extent. If the page size is higher, this size comes down accordingly. For example, for a page size of 128 MB, a single SG list of 512 entries can cover 64 GB.
If the BType bit of the entry is a ONE, then the buffer's base address is found in the Buffer Pointer field of the entry. In this case, the starting DMA address is calculated as:
DMA Start Address=Buffer pointer+(RDMA_Starting Offset from the WR−Virtual Base address in BTTE).
That the transfer length fits within the buffer is determined by evaluating this inequality:
RDMA_STARTING_OFFSET+Total Transfer Length from WR<=Virtual base address in BTTE+NumBytes in BTTE.
If this check fails, then the transfer is aborted and completion messages are sent indicating the failure. Note the difficulty in resolving the last 16 bytes of TTL without summing the individual Length at Pointer fields.
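The two relations above reduce to the following C sketch for Contiguous Buffer Mode; field names mirror the BTT entry and WR fields, and the helper name is hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

struct btt_contig {              /* relevant BTT entry fields, Contiguous Buffer Mode */
    uint64_t buffer_pointer;     /* physical base of the buffer       */
    uint64_t virtual_base;       /* Virtual Base Address in the BTTE  */
    uint64_t num_bytes;          /* NumBytes in the BTTE              */
};

/* Returns true and the DMA start address if the transfer fits in the buffer;
 * returns false (abort, send failure completions) otherwise. */
static bool contig_dma_start(const struct btt_contig *b,
                             uint64_t rdma_starting_offset,  /* from the WR */
                             uint64_t total_transfer_len,    /* from the WR */
                             uint64_t *dma_start)
{
    if (rdma_starting_offset + total_transfer_len > b->virtual_base + b->num_bytes)
        return false;

    *dma_start = b->buffer_pointer + (rdma_starting_offset - b->virtual_base);
    return true;
}
```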
If the buffer is comprised of a single memory page then the Buffer Pointer of the BTT entry is the physical base address of the first byte of the buffer, just as for Contiguous Buffer mode.
When the buffer extends to more than one page but contains less than (or equal to) 512 pages, then the Buffer Pointer in the BTT entry points to an SG List.
An SG List, as used here, is a 4 KB aligned structure containing up to 512 physical page addresses ordered in accordance with their offset from the start of the buffer. This relationship is illustrated in
The offset from the start of a buffer is given by:
Offset=RDMA Starting Buffer Offset−Virtual Base Address
where the RDMA Starting Buffer Offset is from the WR VDM and the Virtual Base Address is from the BTT entry pointed to by the WR VDM.
Offset divided by the page size gives the Page Number:
Page = Offset >> Log2PageSize
The starting offset within that page is given by:
Start Offset in Page = RDMA Starting Buffer Offset & (PageSize − 1)
where & indicates a bit-wise AND.
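In C, with Log2PageSize taken from the BTT entry (e.g. 12 for 4 KB pages), the page and in-page offset calculations above look like this sketch; the function names are illustrative.

```c
#include <stdint.h>

/* Offset of the transfer from the start of the buffer. */
static inline uint64_t buffer_offset(uint64_t rdma_starting_offset, uint64_t virtual_base)
{
    return rdma_starting_offset - virtual_base;
}

/* Page number: offset divided by the page size. */
static inline uint64_t page_number(uint64_t offset, unsigned log2_page_size)
{
    return offset >> log2_page_size;
}

/* Starting offset within that page: bit-wise AND with (PageSize - 1). */
static inline uint64_t start_offset_in_page(uint64_t rdma_starting_offset, unsigned log2_page_size)
{
    uint64_t page_size = 1ull << log2_page_size;
    return rdma_starting_offset & (page_size - 1);
}
```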
A “small” buffer is one described by a pointer list (SG list) that fits within a single 4 KB page, and can thus span 512 4 KB pages. For a “small” buffer, a second read of host memory, after the BTT read, is required to retrieve the pointer to the memory page in which the transfer starts. Using 4 KB pages, the page number within the list is Starting Offset[20:12]. The DMA reads starting at address = {BufferPointer[63:12], Starting Offset[20:12], 3′b000}, obtaining at least one 8-byte aligned pointer and more according to transfer length and how many pointers it has temporary storage for.
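The address concatenation in that read can be written as the following C sketch for 4 KB pages; it is a literal rendering of {BufferPointer[63:12], Starting Offset[20:12], 3′b000} and assumes nothing beyond that.

```c
#include <stdint.h>

/* "Small" paged buffer, 4 KB pages: the SG list is one 4 KB aligned page of
 * 8-byte pointers, so the address of the first needed page pointer is
 * {BufferPointer[63:12], StartingOffset[20:12], 3'b000}. */
static inline uint64_t sg_pointer_read_addr(uint64_t buffer_pointer, uint64_t starting_offset)
{
    uint64_t list_base  = buffer_pointer & ~0xFFFull;          /* BufferPointer[63:12]   */
    uint64_t page_index = (starting_offset >> 12) & 0x1FFull;  /* StartingOffset[20:12]  */
    return list_base | (page_index << 3);                      /* 8-byte aligned pointer */
}
```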
A Large Paged Buffer requires more than one SG List to hold all of its page pointers. For this case, Buffer Pointer in the BTT entry points to a List of SG Lists. A total of three reads are required to get the starting destination address:
In RDMA, the Security Key and Source ID in the RDMA Buffer Tag Table entry at the table index given by the Buffer Tag in the descriptor message are checked against the corresponding fields in the descriptor message. If these checks are enabled by the EnKeyChk and EnGridChk BTT entry fields, the message is allowed to complete only if each matches and, in addition, the entire transfer length fits within the buffer defined by the table entry and associated pointer lists. For pull protocol messages, these checks are done in HW by the DMAC. For RDMA short packet pushes, the validation information is passed to the software in the receive completion message and the checks are done by the Rx driver.
The table lookup process used to process an RDMA pull VDM at a destination switch is illustrated in
This single BTT read returns the full 32 byte entry defined in Table 30 RDMA Buffer Tag Table Entry Format, illustrated by the red arrow labeled 32-byte Completion in the figure. The source RID and security key of the entry are used by the DMAC to authenticate the access. If the parameters for which checks are enabled by the BTT entry don't match the same parameters in the descriptor, completion messages are sent to both source and destination with a completion code indicating a security violation. In addition, any message data read from the source is discarded and no further read requests for the message data are initiated.
If the parameters do match or the checks aren't enabled, then the process continues to determine the initial destination address for the message. The BTT entry read is followed by zero, one, or two more reads of host memory to get the destination address depending on the size and type of buffer, as defined by the BTT entry.
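A compact sketch of the enable-gated authentication described above (performed by the DMAC for pull transfers, or by the Rx driver for RDMA short packet pushes); structure and field names are illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

struct btt_auth {            /* authentication-related BTT entry fields */
    bool     en_key_chk;     /* EnKeyChk  */
    bool     en_grid_chk;    /* EnGridChk */
    uint16_t security_key;
    uint16_t source_grid;
};

/* Each check applies only if enabled in the BTT entry. Any enabled check that
 * fails is reported as a security violation in the completions sent to both
 * source and destination, and the transfer is not performed. */
static bool btt_authenticate(const struct btt_auth *btt,
                             uint16_t wr_security_key,
                             uint16_t wr_requester_grid)
{
    if (btt->en_key_chk && wr_security_key != btt->security_key)
        return false;
    if (btt->en_grid_chk && wr_requester_grid != btt->source_grid)
        return false;
    return true;   /* continue with the destination-address lookup */
}
```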
RDMA transfers are managed via the following control registers in the VF's BAR0 memory mapped register space and associated data structures.
Support for broadcast and multicast is required in Capella. Broadcast is used in support of networking (Ethernet) routing protocols and other management functions. Broadcast and multicast may also be used by clustering applications for data distribution and synchronization.
Routing protocols typically utilize short messages. Audio and video compression and distribution standards employ packets just under 256 bytes in length because short packets result in lower latency and jitter. However, while a Capella fabric might be at the heart of a video server, the multicast distribution of the video packets is likely to be done out in the Ethernet cloud rather than in the ExpressFabric.
In HPC and instrumentation, multicast may be useful for distribution of data and for synchronization (e.g. announcement of arrival at a barrier). A synchronization message would be very short. Data distribution broadcasts would have application specific lengths but can adapt to length limits.
There are at best limited applications for broadcast/multicast of long messages and so these won't be supported directly. To some extent, BC/MC of messages longer than the short packet push limit may be supported in the driver by segmenting the messages into multiple SPPs sent back to back and reassembled at the receiver.
Standard MC/BC routing of Posted Memory Space requests is required to support dualcast for redundant storage adapters that use shared endpoints.
For Capella-2 we need to extend PCIe MC to support multicast of the ID-routed Vendor Defined Messages used in host to host messaging and to allow broadcast/multicast to multiple Domains.
To support broadcast and multicast of DMA VDMs in the Global ID space, we:
With these provisions, software can create and queue broadcast packets for transmission just like any others. The short MC packets are pushed just like unicast short packets but the multicast destination IDs allow them to be sent to multiple receivers.
Standard PCIe Multicast is unreliable; delivery isn't guaranteed. This fits with IP multicasting which employs UDP streams, which don't require such a guarantee. Therefore Capella will not expect to receive any completions to BC/MC packets as the sender and will not return completion messages to BC/MC VDMs as a receiver. The fabric will treat the BC/MC VDMs as ordered streams (unless the RO bit in the VDM header is set) and thus deliver them in order with exceptions due only to extremely rare packet drops or other unforeseen losses.
When a BC/MC VDM is received, the packet is treated as a short packet push with nothing special for multicast other than to copy the packet to ALL VFs that are members of its MCG, as defined by a register array in the station. The receiving DMAC and the driver can determine that the packet was received via MC by recognition of the MC value in the Destination GRID that appears in the RxCQ message.
Broadcast/multicast messages are first unicast routed, using DLUT-provided route Choices, to a “Domain Broadcast Replication Starting Point” (DBRSP) for a broadcast or multicast confined to the home Domain, or to a “Fabric Broadcast Replication Starting Point” (FBRSP) for a fabric consisting of multiple Domains and a broadcast or multicast intended to reach destinations in multiple Domains.
Inter-Domain broadcast/multicast packets are routed using their Destination Domain of 0FFh to index the DLUT. Intra-Domain broadcast/multicast packets are routed using their Destination BUS of 0FFh to index the DLUT. PATH should be set to zero in BC/MC packets. The BC/MC route Choices toward the replication starting point are found at D-LUT[{1, 0xff}] for inter-Domain BC/MC TLPs and at D-LUT[{0, 0xff}] for intra-Domain BC/MC TLPs. Since DLUT Choice selection is based on the ingress port, all 4 Choices at these indices of the DLUT must be configured sensibly.
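One way to read the indexing rule above is the sketch below, which forms the DLUT index from an inter/intra-Domain select bit concatenated with the 0xFF broadcast Domain or BUS value; this is an illustrative interpretation, not a register-level definition.

```c
#include <stdint.h>
#include <stdbool.h>

#define ROUTE_CHOICE_REPLICATE 0xF   /* returned at/after the replication starting point */

/* D-LUT[{1, 0xFF}] for inter-Domain BC/MC TLPs (Destination Domain = 0xFF),
 * D-LUT[{0, 0xFF}] for intra-Domain BC/MC TLPs (Destination BUS = 0xFF). */
static inline uint16_t bcmc_dlut_index(bool inter_domain)
{
    return (uint16_t)(((inter_domain ? 1u : 0u) << 8) | 0xFFu);
}
```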
Since different DLUT locations are used for inter-Domain and intra-Domain BC/MC transfers, each can have a different broadcast replication starting point. The starting point for a BC/MC TLP that is confined to its home Domain, DBRSP, will typically be at a point on the Domain fabric where connections are made to the inter-Domain switches, if any. The starting point for replication for an Inter-Domain broadcast or multicast, FBRSP, is topology dependent and might be at the edge of the domain or somewhere inside an Inter-Domain switch.
At and beyond the broadcast replication starting point, this DLUT lookup returns a route Choice value of 0xF. This signals the route logic to replicate the packet to multiple destinations.
To facilitate understanding of an embodiment of the invention,
Each switch 105 includes host ports 110 with an embedded NIC 200, fabric ports 115, an upstream port 118, and a downstream port 120. The individual host ports 110 may include PtoP (peer-to-peer) elements. In this example, a shared endpoint 125 is coupled to the downstream port and includes physical functions (PFs) and Virtual Functions (VFs). Individual servers 130 may be coupled to individual host ports. The fabric is scalable in that additional switches can be coupled together via the fabric ports. While two switches are illustrated, it will be understood that an arbitrary number may be coupled together as part of the switch fabric. While a Capella 2 switch is illustrated, it will be understood that embodiments of the present invention are not limited to the Capella 2 switch architecture.
A Management Central Processor Unit (MCPU) 140 is responsible for fabric and I/O management and may include an associated memory having management software (not shown). In one optional embodiment, a semiconductor chip implementation uses a separate control plane 150 and provides an x1 port for this use. Multiple options exist for fabric, control plane, and MCPU redundancy and fail over. The Capella 2 switch supports arbitrary fabric topologies with redundant paths and can implement strictly non-blocking fat tree fabrics that scale from 72×4 ports with nine switch chips to literally thousands of ports.
Information transferred via communications interface 914 may be in the form of signals such as electronic, electromagnetic, optical, or other signals capable of being received by communications interface 914, via a communication link that carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, a radio frequency link, and/or other communication channels. With such a communications interface, it is contemplated that the one or more processors 902 might receive information from a network, or might output information to the network in the course of performing the above-described method steps. Furthermore, method embodiments of the present invention may execute solely upon the processors or may execute over a network such as the Internet in conjunction with remote processors that share a portion of the processing.
The term “non-transient computer readable medium” is used generally to refer to media such as main memory, secondary memory, removable storage, and storage devices, such as hard disks, flash memory, disk drive memory, CD-ROM and other forms of persistent memory and shall not be construed to cover transitory subject matter, such as carrier waves or signals. Examples of computer code include machine code, such as produced by a compiler, and files containing higher level code that are executed by a computer using an interpreter. Computer readable media may also be computer code transmitted by a computer data signal embodied in a carrier wave and representing a sequence of instructions that are executable by a processor.
In other embodiments of the invention, a NIC may be replaced by another type of network class device endpoint such as a host bus adapter or a converged network adapter.
In the specification and claims, physical devices may also be implemented by software.
While this invention has been described in terms of several preferred embodiments, there are alterations, permutations, modifications, and various substitute equivalents, which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and apparatuses of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and various substitute equivalents as fall within the true spirit and scope of the present invention.
Related application data: Parent 14231079, Mar 2014, US; Child 14244634, US.