This disclosure relates generally to networking and, more particularly, to methods, apparatus, and articles of manufacture to improve bandwidth for packet timestamping.
Multi-access edge computing (MEC) is a network architecture concept that enables cloud compute capabilities and an infrastructure technology service environment at the edge of a network, such as a cellular network. Using MEC, data center cloud services and applications can be processed closer to an end user or compute device to improve network operation.
In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. The figures are not to scale. As used herein, connection references (e.g., attached, coupled, connected, and joined) may include intermediate members between the elements referenced by the connection reference and/or relative movement between those elements unless otherwise indicated. As such, connection references do not necessarily infer that two elements are directly connected and/or in fixed relation to each other.
Unless specifically stated otherwise, descriptors such as “first,” “second,” “third,” etc., are used herein without imputing or otherwise indicating any meaning of priority, physical order, arrangement in a list, and/or ordering in any way, but are merely used as labels and/or arbitrary names to distinguish elements for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for identifying those elements distinctly that might, for example, otherwise share a same name.
As used herein, “about” refers to measurements that may not be exact due to measurement device tolerances and/or other real-world imperfections. As used herein, “substantially real time” refers to occurrence in a near instantaneous manner recognizing there may be real-world delays for compute time, transmission, etc. Thus, unless otherwise specified, “substantially real time” refers to real time +/− 1 second.
As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.
As used herein, “processor circuitry” is defined to include (i) one or more special purpose electrical circuits structured to perform specific operation(s) and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors), and/or (ii) one or more general purpose semiconductor-based electrical circuits programmed with instructions to perform specific operations and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors). Examples of processor circuitry include programmed microprocessors, Field Programmable Gate Arrays (FPGAs) that may instantiate instructions, Central Processor Units (CPUs), Graphics Processor Units (GPUs), Digital Signal Processors (DSPs), XPUs, or microcontrollers and integrated circuits such as Application Specific Integrated Circuits (ASICs). For example, an XPU may be implemented by a heterogeneous compute system including multiple types of processor circuitry (e.g., one or more FPGAs, one or more CPUs, one or more GPUs, one or more DSPs, etc., and/or a combination thereof) and application programming interface(s) (API(s)) that may assign compute task(s) to whichever one(s) of the multiple types of the processing circuitry is/are best suited to execute the compute task(s).
Compute, memory, and storage are scarce resources, and generally decrease depending on the edge location (e.g., fewer processing resources being available at consumer endpoint devices, than at a base station, than at a central office). However, the closer that the edge location is to the endpoint (e.g., user equipment (UE)), the more that space and power are often constrained. For example, such processing can consume a disproportionate amount of bandwidth of processing resources closer to the end user or compute device, thereby increasing latency, congestion, and power consumption of the network. Thus, edge computing attempts to reduce the amount of resources needed for network services, through the distribution of more resources which are located closer both geographically and in network access time. In this manner, edge computing attempts to bring the compute resources to the workload data where appropriate or bring the workload data to the compute resources. As used herein, data is information in any form that may be ingested, processed, interpreted, and/or otherwise manipulated by processor circuitry to produce a result. The produced result may itself be data.
The following describes aspects of an edge cloud architecture that covers multiple potential deployments and addresses restrictions that some network operators or service providers may have in their own infrastructures. These include variation of configurations based on the edge location (because edges at a base station level, for instance, may have more constrained performance and capabilities in a multi-tenant scenario); configurations based on the type of compute, memory, storage, fabric, acceleration, or like resources available to edge locations, tiers of locations, or groups of locations; the service, security, and management and orchestration capabilities; and related objectives to achieve usability and performance of end services. These deployments may accomplish processing in network layers that may be considered as “near edge,” “close edge,” “local edge,” “middle edge,” or “far edge” layers, depending on latency, distance, and timing characteristics.
Edge computing is a developing paradigm where computation is performed at or closer to the “edge” of a network, typically through the use of a compute platform (e.g., x86 or ARM compute hardware architecture) implemented at base stations, gateways, network routers, or other devices which are much closer to endpoint devices producing and consuming the data. For example, edge gateway servers may be equipped with pools of memory and storage resources to perform computation in real-time for low latency use-cases (e.g., autonomous driving or video surveillance) for connected client devices. Or as an example, base stations may be augmented with compute and acceleration resources to directly process service workloads for connected user equipment, without further communicating data via backhaul networks. Or as another example, central office network management hardware may be replaced with standardized compute hardware that performs virtualized network functions and offers compute resources for the execution of services and consumer functions for connected devices. Within edge compute networks, there may be scenarios in services in which the compute resource will be “moved” to the data, as well as scenarios in which the data will be “moved” to the compute resource. Or as an example, base station compute, acceleration, and network resources can provide services in order to scale to workload demands on an as-needed basis by activating dormant capacity (subscription, capacity on demand) in order to manage corner cases, emergencies, or to provide longevity for deployed resources over a significantly longer implemented lifecycle.
In contrast to the network architecture of
Depending on the real-time requirements in a communications context, a hierarchical structure of data processing and storage nodes may be defined in an edge compute deployment. For example, such a deployment may include local ultra-low-latency processing, regional storage, and processing as well as remote cloud datacenter-based storage and processing. Key performance indicators (KPIs) may be used to identify where sensor data is best transferred and where it is processed or stored. This typically depends on the ISO layer dependency of the data. For example, lower layer (PHY, MAC, routing, etc.) data typically changes quickly and is better handled locally in order to meet latency requirements. Higher layer data such as Application Layer data is typically less time critical and may be stored and processed in a remote cloud datacenter. At a more generic level, an edge compute system may be described to encompass any number of deployments operating in the edge cloud 110, which provide coordination from client and distributed compute devices.
Examples of latency, resulting from network communication distance and processing time constraints, may range from less than a millisecond (ms) among the endpoint layer 200, to under 5 ms at the edge devices layer 210, to between 10 and 40 ms when communicating with nodes at the network access layer 220. Beyond the edge cloud 110 are core network 230 and cloud data center 240 layers, each with increasing latency (e.g., between 50-60 ms at the core network layer 230, to 100 or more ms at the cloud data center layer 240). As a result, operations at a core network data center 235 or a cloud data center 245, with latencies of at least 50 to 100 ms or more, will not be able to accomplish many time-critical functions of the use cases 205. Each of these latency values is provided for purposes of illustration and contrast; it will be understood that the use of other access network mediums and technologies may further reduce the latencies. In some examples, respective portions of the network may be categorized as “close edge,” “local edge,” “near edge,” “middle edge,” or “far edge” layers, relative to a network source and destination. For instance, from the perspective of the core network data center 235 or a cloud data center 245, a central office or content data network may be considered as being located within a “near edge” layer (“near” to the cloud, having high latency values when communicating with the devices and endpoints of the use cases 205), whereas an access point, base station, on-premise server, or network gateway may be considered as located within a “far edge” layer (“far” from the cloud, having low latency values when communicating with the devices and endpoints of the use cases 205). It will be understood that other categorizations of a particular network layer as constituting a “close,” “local,” “near,” “middle,” or “far” edge may be based on latency, distance, number of network hops, or other measurable characteristics, as measured from a source in any of the network layers 200-240.
The various use cases 205 may access resources under usage pressure from incoming streams, due to multiple services utilizing the edge cloud. To achieve results with low latency, the services executed within the edge cloud 110 balance varying requirements in terms of: (a) Priority (throughput or latency) and Quality of Service (QoS) (e.g., traffic for an autonomous car may have higher priority than a temperature sensor in terms of response time requirement; or, a performance sensitivity/bottleneck may exist at a compute/accelerator, memory, storage, or network resource, depending on the application); (b) Reliability and Resiliency (e.g., some input streams need to be acted upon and the traffic routed with mission-critical reliability, whereas some other input streams may tolerate an occasional failure, depending on the application); and (c) Physical constraints (e.g., power, cooling, and form-factor).
The end-to-end service view for these use cases involves the concept of a service-flow and is associated with a transaction. The transaction details the overall service requirement for the entity consuming the service, as well as the associated services for the resources, workloads, workflows, and business functional and business level requirements. The services executed with the “terms” described may be managed at each layer in a way to assure substantially real time and runtime contractual compliance for the transaction during the lifecycle of the service. When a component in the transaction is missing its agreed-to service level agreement (SLA), the system as a whole (components in the transaction) may provide the ability to (1) understand the impact of the SLA violation, (2) augment other components in the system to resume the overall transaction SLA, and (3) implement steps to remediate.
Thus, with these variations and service features in mind, edge computing within the edge cloud 110 may provide the ability to serve and respond to multiple applications of the use cases 205 (e.g., object tracking, video surveillance, connected cars, etc.) in real-time or near real-time, and meet ultra-low latency requirements for these multiple applications. These advantages enable a whole new class of applications (e.g., virtual network functions (VNFs), FaaS, Edge as a Service (EaaS), standard processes, etc.), which cannot leverage conventional cloud compute due to latency or other limitations.
However, with the advantages of edge computing come the following caveats. The devices located at the edge are often resource constrained and therefore there is pressure on usage of edge resources. Typically, this is addressed through the pooling of memory and storage resources for use by multiple users (tenants) and devices. The edge may be power and cooling constrained and therefore the power usage needs to be accounted for by the applications that are consuming the most power. There may be inherent power-performance tradeoffs in these pooled memory resources, as many of them are likely to use emerging memory technologies, where more power requires greater memory bandwidth. Likewise, improved security of hardware and root of trust trusted functions are also required because edge locations may be unmanned and may even need permissioned access (e.g., when housed in a third-party location). Such issues are magnified in the edge cloud 110 in a multi-tenant, multi-owner, or multi-access setting, where many users request services and applications, especially as network usage dynamically fluctuates and the composition of the multiple stakeholders, use cases, and services changes.
At a more generic level, an edge compute system may be described to encompass any number of deployments at the previously discussed layers operating in the edge cloud 110 (network layers 210-230), which provide coordination from client and distributed compute devices. One or more edge gateway nodes, one or more edge aggregation nodes, and one or more core data centers may be distributed across layers of the network to provide an implementation of the edge compute system by or on behalf of a telecommunication service provider (“telco”, or “TSP”), internet-of-things service provider, cloud service provider (CSP), enterprise entity, or any other number of entities. Various implementations and configurations of the edge compute system may be provided dynamically, such as when orchestrated to meet service objectives.
Consistent with the examples provided herein, a client compute node may be embodied as any type of endpoint component, device, appliance, or other thing capable of communicating as a producer or consumer of data. Further, the label “node” or “device” as used in the edge compute system does not necessarily mean that such node or device operates in a client or agent/minion/follower role; rather, any of the nodes or devices in the edge compute system refer to individual entities, nodes, or subsystems which include discrete or connected hardware or software configurations to facilitate or use the edge cloud 110.
As such, the edge cloud 110 is formed from network components and functional features operated by and within edge gateway nodes, edge aggregation nodes, or other edge compute nodes among network layers 210-230. The edge cloud 110 thus may be embodied as any type of network that provides edge compute and/or storage resources which are proximately located to RAN capable endpoint devices (e.g., mobile compute devices, IoT devices, smart devices, etc.), which are discussed herein. In other words, the edge cloud 110 may be envisioned as an “edge” which connects the endpoint devices and traditional network access points that serve as an ingress point into service provider core networks, including mobile carrier networks (e.g., Global System for Mobile Communications (GSM) networks, Long-Term Evolution (LTE) networks, 5G/6G networks, etc.), while also providing storage and/or compute capabilities. Other types and forms of network access (e.g., Wi-Fi, long-range wireless, wired networks including optical networks) may also be utilized in place of or in combination with such 3GPP carrier networks.
The network components of the edge cloud 110 may be servers, multi-tenant servers, appliance compute devices, and/or any other type of compute devices. For example, the edge cloud 110 may include an appliance compute device that is a self-contained electronic device including a housing, a chassis, a case, or a shell. In some circumstances, the housing may be dimensioned for portability such that it can be carried by a human and/or shipped. Example housings may include materials that form one or more exterior surfaces that partially or fully protect contents of the appliance, in which protection may include weather protection, hazardous environment protection (e.g., EMI, vibration, extreme temperatures), and/or enable submergibility. Example housings may include power circuitry to provide power for stationary and/or portable implementations, such as AC power inputs, DC power inputs, AC/DC or DC/AC converter(s), power regulators, transformers, charging circuitry, batteries, wired inputs and/or wireless power inputs. Example housings and/or surfaces thereof may include or connect to mounting hardware to enable attachment to structures such as buildings, telecommunication structures (e.g., poles, antenna structures, etc.) and/or racks (e.g., server racks, blade mounts, etc.). Example housings and/or surfaces thereof may support one or more sensors (e.g., temperature sensors, vibration sensors, light sensors, acoustic sensors, capacitive sensors, proximity sensors, etc.). One or more such sensors may be contained in, carried by, or otherwise embedded in the surface and/or mounted to the surface of the appliance. Example housings and/or surfaces thereof may support mechanical connectivity, such as propulsion hardware (e.g., wheels, propellers, etc.) and/or articulating hardware (e.g., robot arms, pivotable appendages, etc.). In some circumstances, the sensors may include any type of input devices such as user interface hardware (e.g., buttons, switches, dials, sliders, etc.). In some circumstances, example housings include output devices contained in, carried by, embedded therein and/or attached thereto. Output devices may include displays, touchscreens, lights, light emitting diodes (LEDs), speakers, I/O ports (e.g., universal serial bus (USB)), etc. In some circumstances, edge devices are devices presented in the network for a specific purpose (e.g., a traffic light), but may have processing and/or other capacities that may be utilized for other purposes. Such edge devices may be independent from other networked devices and may be provided with a housing having a form factor suitable for its primary purpose; yet be available for other compute tasks that do not interfere with its primary task. Edge devices include IoT devices. The appliance compute device may include hardware and software components to manage local issues such as device temperature, vibration, resource utilization, updates, power issues, physical and network security, etc. The edge cloud 110 may also include one or more servers and/or one or more multi-tenant servers. Such a server may include an operating system and a virtual compute environment. A virtual compute environment may include a hypervisor managing (spawning, deploying, destroying, etc.) one or more virtual machines, one or more containers, etc. 
Such virtual compute environments provide an execution environment in which one or more applications and/or other software, code or scripts may execute while being isolated from one or more other applications, software, code, or scripts.
In
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In some examples, the production controller circuitry 421, the optimizing controller circuitry 422 (e.g., performing optimal control), the process history database 423, and/or the domain controller circuitry 424 aggregate and/or process lower level data (e.g., from the level zero 402, the level one 408, and/or the level two 414) and forward the aggregated and/or processed data to upper levels of the IT/OT environment 400. In the example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated the example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
As the IT/OT environment 400 implements an ICS that controls a manufacturing and/or other production process, some of the processes may be time sensitive. Accordingly, the Institute of Electrical and Electronics Engineers (IEEE) has developed standards to handle such time sensitive processes. For example, the emerging IEEE standards for deterministic networking, referred to collectively as time sensitive networking (TSN) standards, provide extremely precise data transfer across a network. As a result, embedded designs (e.g., any of the devices of the IT/OT environment 400) in industrial and/or automotive environments (e.g., the IT/OT environment 400) are increasingly integrating TSN controllers. Other time sensitive use cases are possible including aerospace, audio video bridging (e.g., audio and/or video studio, infotainment systems, etc.), automotive (e.g., self-driving vehicles, communication of sensor data in automotive networks, etc.), cellular network (e.g., fronthaul networks, 5G mobile networks generally, etc.) and/or utility (e.g., power automation) applications, among others.
TSN controllers may be implemented by network interface circuitry (NIC) based on the capabilities of the NIC. As used herein, NIC refers to Network Interface Circuitry. Although the term NIC does not require the use of an indefinite article (e.g., “a” or “an”) and may operate as both a singular and plural noun, in some examples indefinite articles are used with the term NIC and/or an “s” is added to the term NIC to improve readability. In some examples, a NIC may or may not be implemented on a card. In some examples, a NIC may be implemented as part of a system on a chip (SoC) and configured to operate in conjunction with main processor circuitry (e.g., a CPU) of the SoC. A NIC may include memory access control circuitry (e.g., direct memory access (DMA) control circuitry), media access control (MAC) circuitry (e.g., media access control circuitry), and one or more caches.
With the increasing convergence of IT and OT environments, workload consolidation and demand for seamless communication across many connected devices are imposing increased requirements for embedded designs. For example, such requirements include that TSN controllers be compatible with various types of data traffic, have precise scheduling of the data, and do not sacrifice latency for hard real-time applications.
To support the various types of data traffic, the “IEEE Standard for Local and Metropolitan Area Network—Bridges and Bridged Networks,” in IEEE Std 802.1Q-2018 (Revision of IEEE Std 802.1Q-2014), vol., no., pp. 1-1993, 6 Jul. 2018 (referred to hereinafter as “the IEEE 802.1Q standard”) defines eight traffic classes (e.g., TC0-TC7) for all data streams. Each traffic class is subject to different parameters (e.g., quality of service (QoS)). In industrial applications, high priority, hard real-time traffic is classified as TC7-TC5. Similarly, non-real-time, best effort traffic (e.g., best effort data stream(s)) is classified as TC4-TC0. As used herein, real-time traffic and/or real-time data stream(s) refers to network traffic associated with a compute application in which success of the compute application is dependent on the logical correctness of the outcome of the compute application as well as whether the outcome of the compute application was provided within a specified time constraint known as a deadline. As used herein, hard real-time traffic and/or hard real-time data stream(s) refers to real-time traffic associated with a compute application where failure to meet a deadline constitutes failure of the compute application. As used herein, best effort traffic and/or best effort data stream(s) refers to network traffic associated with a compute application that does not require an outcome within a specified time constraint.
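To make the class split concrete, the following is a minimal sketch in C; the enumeration values and the helper function are hypothetical names chosen for illustration and are not drawn from the IEEE 802.1Q standard or from any particular NIC driver.

```c
#include <stdbool.h>

/* Hypothetical labels for the eight IEEE 802.1Q traffic classes. */
enum traffic_class { TC0, TC1, TC2, TC3, TC4, TC5, TC6, TC7 };

/* As described above, TC5-TC7 carry high priority, hard real-time streams,
 * while TC0-TC4 carry non-real-time, best effort streams. */
static bool is_hard_real_time(enum traffic_class tc)
{
    return tc >= TC5;
}
```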
In example TSN applications, TSN capable NICs (e.g., a TSN NIC) include 8 transmit queues and 8 receive queues to accommodate the different traffic classes specified by the IEEE 802.1Q standard, where each transmit and receive queue pair is dedicated to one of the traffic classes. Payload data transmitted by a TSN NIC is associated with a descriptor. For example, to cause transmission of payload data, main processor circuitry stores payload data in a shared memory with the descriptor and the TSN NIC may access the descriptor to process the payload data for transmission.
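The following sketch illustrates, under stated assumptions, how such a descriptor ring might be laid out in shared memory; the field names and widths, and the 32-byte descriptor size (consistent with the sizing discussed later), are assumptions for illustration rather than the register layout of any real TSN NIC.

```c
#include <stdint.h>

/* Hypothetical 32-byte transmit descriptor stored in shared memory. */
struct tx_descriptor {
    uint64_t payload_addr;   /* location of the payload in shared memory  */
    uint32_t payload_len;
    uint32_t flags;          /* control bit, traffic class, etc.          */
    uint64_t reserved[2];    /* fields that may be consumed or reused     */
};

/* One descriptor ring per transmit queue, i.e., per traffic class TC0-TC7. */
struct tx_ring {
    struct tx_descriptor *ring;  /* base of the descriptor ring           */
    uint32_t head;               /* next descriptor the NIC will fetch    */
    uint32_t tail;               /* next slot the application will fill   */
    uint32_t size;               /* number of descriptors in the ring     */
};
```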
In the illustrated example of
The example shared memory 502 may be implemented by a volatile memory (e.g., Static Random Access Memory (SRAM), SDRAM, Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), etc.) and/or a non-volatile memory (e.g., flash memory). For example, if the shared memory 502 is implemented as SRAM, the shared memory 502 may be implemented on the same die as the main processor circuitry and/or the same die as the NIC. In some examples, the shared memory 502 may be implemented by one or more mass storage devices such as hard disk drive(s) (HDD(s)), compact disk (CD) drive(s), digital versatile disk (DVD) drive(s), solid-state disk (SSD) drive(s), Secure Digital (SD) card(s), CompactFlash (CF) card(s), etc. While in the illustrated example the shared memory 502 is illustrated as a single memory, the shared memory 502 may be implemented by any number and/or type(s) of memories. Furthermore, the data stored in the shared memory 502 may be in any data format such as, for example, binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc.
In the illustrated example of
In the illustrated example of
In the illustrated example of
In some examples, one or more bits (e.g., one or more context bits) in a reserved field of the descriptor 504 may indicate a context of the descriptor 504. The one or more context bits indicate whether the header of the receiving device will be the same for a certain number (e.g., n) of payloads to be sent by the NIC. If the one or more context bits indicate the header will be consistent for the next n payloads, the NIC may not read the header information for the next n payloads but may instead store the header information for a first payload of the next n payloads and refer to the stored header information until after the next n payloads have been transmitted by the NIC.
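A minimal sketch of that reuse logic follows, assuming a hypothetical cached-header structure and assuming the context bits encode the count n; none of these names come from an actual descriptor format.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct header_cache {
    uint8_t  bytes[64];     /* stored header information                  */
    uint32_t remaining;     /* payloads left that may reuse this header   */
    bool     valid;
};

/* Called once per payload: either reuse the stored header or read and
 * store the header for the first payload of a run of n payloads. */
static void note_header_context(struct header_cache *hc, uint32_t context_n,
                                const uint8_t *header, size_t header_len)
{
    if (hc->valid && hc->remaining > 0) {
        hc->remaining--;    /* header unchanged: skip re-reading it       */
        return;
    }
    if (header_len > sizeof(hc->bytes))
        header_len = sizeof(hc->bytes);
    memcpy(hc->bytes, header, header_len);
    hc->remaining = (context_n > 0) ? context_n - 1 : 0;
    hc->valid = (context_n > 0);
}
```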
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the example of
TSN NICs precisely schedule data packets based on available IEEE standard scheduling algorithms and precisely generate timestamps for the data packets with sub-nanosecond accuracy. TSN NICs then report the timestamps to one or more applications executing on main processor circuitry (e.g., a CPU). In a first type of existing TSN NIC, after a packet is transmitted by the NIC, the NIC records the timestamp at which the packet was sent in the memory location of a corresponding descriptor of the packet. Additionally, in the first type of existing TSN NIC, after a packet is received by the NIC, the NIC records the timestamp at which the packet was received in the memory location of a corresponding descriptor of the packet.
For example, in TSN NICs, when an application executing on main processor circuitry advances the tail pointer of a descriptor ring with updated (e.g., fresh) data by setting the control bit of a descriptor (e.g., setting the control bit to one), the TSN NIC takes control over the descriptor and its processing. In TSN NICs, DMA control circuitry fetches the descriptor from shared memory. After parsing the descriptor, the DMA control circuitry initiates an upstream read operation to fetch the payload from the shared memory (e.g., DDR) and pushes the payload into a queue of a data cache of the TSN NIC that corresponds to the traffic class of the payload. As used herein the term upstream refers to an operation where a NIC makes a request to shared memory. As used herein the term downstream refers to an operation where main processor circuitry makes a request to read data from the NIC.
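The driver-side half of that hand-off might look like the following sketch, reusing the hypothetical tx_descriptor and tx_ring layout from the earlier sketch; the control-bit position and the MMIO tail doorbell are assumptions, not a documented programming model.

```c
#define DESC_CTRL_OWN  (1u << 31)   /* control bit: 1 = owned by the NIC  */

/* Fill the next descriptor, hand it to the NIC by setting the control
 * bit, and advance the tail pointer of the descriptor ring. The NIC's
 * DMA control circuitry then fetches the descriptor, parses it, reads
 * the payload upstream from shared memory, and pushes the payload into
 * the data-cache queue for its traffic class. */
static void post_payload(struct tx_ring *r, volatile uint32_t *tail_doorbell,
                         uint64_t payload_addr, uint32_t payload_len,
                         uint32_t traffic_class)
{
    struct tx_descriptor *d = &r->ring[r->tail];

    d->payload_addr = payload_addr;
    d->payload_len  = payload_len;
    d->flags        = DESC_CTRL_OWN | (traffic_class & 0x7u);

    r->tail = (r->tail + 1u) % r->size;
    *tail_doorbell = r->tail;        /* advance the tail pointer (MMIO)   */
}
```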
MAC circuitry of TSN NICs schedules the launch time of the payload according to a scheduling algorithm. To satisfy the scheduled launch time, the MAC circuitry fetches the payload from the data cache, formats the payload into a packet (e.g., packetizes the payload), and causes transmission of the packet. To packetize a payload, the MAC circuitry formats the payload according to a standard (e.g., the IEEE 802.1Q standard, the “IEEE Standard for Ethernet,” in IEEE Std 802.3-2018 (Revision of IEEE Std 802.3-2015), vol., no., pp. 1-5600, 31 Aug. 2018 (referred to hereinafter as “the IEEE 802.3 standard”), etc.) where the packet typically includes a preamble having a start frame delimiter (SFD) field, a destination MAC address, a source MAC address, among others.
The MAC circuitry precisely timestamps the packet when the SFD crosses an interface (e.g., a 10-gigabit media-independent interface (XGMII), a gigabit media-independent interface (GMII), a media-independent interface (MII)) boundary and is passed to and/or received from physical layer (PHY) circuitry. Such PHY circuitry may be implemented by a transmitter, a receiver, and/or a transceiver. PHY circuitry is typically implemented outside of an SoC on the same printed circuit board (PCB) as the SoC but in a separate package from the SoC. In the first type of existing TSN NICs, once the MAC circuitry generates the timestamp (e.g., the packet is transmitted), the MAC circuitry sends the timestamp and status information to the DMA control circuitry. The DMA control circuitry of the first type of existing TSN NICs then writes the timestamp and status information into the same descriptor by overwriting some of the fields of the descriptor (e.g., existing DMA control circuitry writes the timestamp to the first row 506 and/or the second row 508 and writes the status to the eighth row 520) that are no longer needed by the MAC circuitry (e.g., the data has already been consumed by the MAC circuitry). The MAC circuitry of the first type of existing TSN NICs then releases the descriptor back to the application executing on the main processor circuitry by clearing the control bit (e.g., resetting the control bit to zero) and generates an interrupt to the application to indicate that the packet has been transmitted and that the descriptor may be overwritten by the application executing on the main processor circuitry.
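For contrast with the disclosed examples described later, the completion path of the first type of existing TSN NIC can be sketched as follows, again reusing the hypothetical descriptor layout above; which fields are overwritten, and the interrupt call, are assumptions for illustration only.

```c
/* First-type completion: write the timestamp and status over descriptor
 * fields the MAC circuitry no longer needs, then release the descriptor
 * by clearing the control bit and notify the application. */
static void complete_tx_first_type(struct tx_descriptor *d,
                                   uint64_t timestamp, uint32_t status)
{
    d->reserved[0] = timestamp;        /* overwrite already-consumed fields */
    d->reserved[1] = status;
    d->flags &= ~DESC_CTRL_OWN;        /* release descriptor to software    */
    /* raise_tx_interrupt();  hypothetical: signal that the packet was
     * transmitted and the descriptor may be overwritten by software.       */
}
```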
The first type of existing TSN NICs faces at least two bottlenecks in packet processing. The first bottleneck is caused because the descriptor is not released (e.g., the control bit remains set to one) until the packet is transmitted. As such, the application executing on the main processor circuitry may not overwrite descriptors in the descriptor ring until corresponding packets are transmitted. Because packets and descriptors are prefetched by TSN NICs ahead of time, this delay causes a significant bottleneck. The second bottleneck is caused by the hardware of the first type of existing TSN NICs. For example, the descriptor stored in a descriptor cache of the TSN NIC is not released until the packet is transferred to the queue corresponding to the traffic class of the packet despite the payload data having already been fetched from shared memory. The second bottleneck is caused because the DMA control circuitry of the first type of existing TSN NICs must keep the address in shared memory where the descriptor is stored so that the DMA control circuitry may write the packet timestamp and status at a later time once the packet is transmitted. In other words, because the DMA control circuitry maintains the descriptor and the address to which the timestamp and status are to be written in the same cache (e.g., the descriptor cache), the DMA control circuitry cannot release the descriptor (e.g., cannot set the control bit to zero) until after the timestamp is generated.
The first type of existing TSN NICs is sufficient for lower line rates (e.g., 2.5 gigabit per second (Gbps), 1 Gbps, etc.). For example, in existing TSN NICs of the first type that operate at a 1 Gbps line rate, the descriptor cache typically occupies less than 2 KB of memory and does not significantly impact silicon area. However, at higher line rates (e.g., greater than 2.5 Gbps, 10 Gbps, etc.), the first type of existing TSN NICs is subject to severe limitation on the effective transmit bandwidth of a TSN NIC and/or the silicon area required to implement the TSN NIC. Many of the disadvantages that existing TSN NICs suffer from result from the related operations on timestamp and status information by both the TSN NIC and main processor circuitry. For example, as the line rate of existing TSN NICs increases, the delays associated with closing descriptors create backpressure and stalling which leads to lower effective bandwidth.
To configure the first type of existing TSN NICs to operate at higher line rates, it is necessary to increase the size of the descriptor ring in shared memory, the size of the descriptor cache on the TSN NIC, and the size of the non-posted request and completion credit memory (discussed further herein) on the TSN NIC. Because the first type of existing TSN NICs does not close descriptors after fetching the payload data, but instead after the payload data is sent, the first type of existing TSN NICs (e.g., implemented in data centers) requires a very large descriptor cache to meet higher line rates. For example, because each descriptor requires 32 bytes of memory, the first type of existing TSN NICs requires up to 12 KB of descriptor cache to operate at 10 Gbps. Such large cache sizes are untenable for edge computing applications, such as IoT applications and other cost sensitive applications.
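As a rough check of the cache sizing, and assuming KB here denotes 1024 bytes, the quoted 12 KB descriptor cache corresponds to:

```latex
\frac{12 \times 1024\ \text{bytes}}{32\ \text{bytes/descriptor}} = 384\ \text{descriptors held on the NIC at once}
```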
A second type of existing TSN NICs does not track the transmit status. Instead, the MAC circuitry of the second type of existing TSN NICs releases descriptors as soon as the payload data is fetched from the memory but without waiting for the payload data to be transmitted. The second type of existing TSN NICs no longer tracks when a payload is transmitted or the status of the payload. As the status of the payload transmission is dropped in existing TSN NICs of the second type, applications executing on the main processor circuitry have no insight as to when the payload is transmitted or if the payload was transmitted without any errors. The payload timestamps and the payload status are critical for hard real-time applications. Knowing only that a payload has been transmitted is not enough for applications executing on main processor circuitry. To operate effectively, such applications executing on main processor circuitry should know precisely when the payload is transmitted and the status of the payload. As such, the second type of existing TSN NICs fails to satisfy the requirements of TSN standards.
A third type of existing TSN NICs does not write the payload transmit status or the timestamp to the descriptor in the descriptor cache of the TSN NIC or in the shared memory, but instead maintains the payload transmit status and the timestamp in a 16-element cache (e.g., a timestamp/status cache) local to the third type of existing TSN NICs, where each element is 64 bits in length. The timestamp/status cache local to the third type of existing TSN NICs operates as a FIFO cache. In the third type of existing TSN NICs, the DMA control circuitry releases a descriptor as soon as the DMA control circuitry fetches the payload data from the shared memory and the data is pushed to a corresponding queue of the data cache, but does not wait for the payload data to be transmitted. After the timestamps and status are written to the local timestamp/status cache, an application executing on the main processor circuitry can access the timestamps and status of transmitted payloads by reading the local timestamp/status cache. In the third type of existing TSN NICs, the descriptor cache can be decreased by half (e.g., to 1 KB). However, a disadvantage of the third type of existing TSN NICs is that the application executing on main processor circuitry must read the local timestamp/status cache very quickly, or the application will run the risk of losing timestamps and/or statuses of some payloads that are overwritten according to the FIFO storage format.
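A simplified model of that local timestamp/status cache is sketched below; the structure, indices, and pop routine are hypothetical and only illustrate the FIFO behavior and the overwrite risk described above.

```c
#include <stdint.h>

#define TS_FIFO_DEPTH 16u            /* 16 elements, each 64 bits wide    */

struct ts_fifo {
    uint64_t elem[TS_FIFO_DEPTH];    /* packed timestamp and status       */
    uint32_t wr;                     /* advanced by hardware per packet   */
    uint32_t rd;                     /* advanced by software per read     */
};

/* Returns 1 and fills *out if an element was available, 0 otherwise.
 * If hardware wraps past rd before software reads, older timestamps and
 * statuses are overwritten and lost, as noted above. */
static int ts_fifo_pop(volatile struct ts_fifo *f, uint64_t *out)
{
    if (f->rd == f->wr)
        return 0;
    *out = f->elem[f->rd % TS_FIFO_DEPTH];   /* downstream MMIO read      */
    f->rd++;
    return 1;
}
```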
Additionally, in the third type of existing TSN NICs, each downstream memory-mapped input/output (MMIO) read operation takes about 2 microseconds (μs). As such, at a 10 Gbps line rate, the third type of existing TSN NICs updates the 64-byte status field at a rate of once every 67.2 nanoseconds (ns). Due to the quick refresh rate for the local timestamp/status cache, the third type of existing TSN NICs must implement a very large local timestamp/status cache to prevent data from being overwritten before the application executing on main processor circuitry can read such data. As such, to operate at higher line rates, the third type of existing TSN NICs requires a large silicon area.
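The 67.2 ns figure is consistent with the time occupied by a minimum-size Ethernet frame at 10 Gbps, assuming a 64-byte frame plus an 8-byte preamble/SFD and a 12-byte interframe gap (84 bytes total):

```latex
t_{\min} = \frac{84\ \text{bytes} \times 8\ \text{bits/byte}}{10 \times 10^{9}\ \text{bits/s}}
         = \frac{672\ \text{bits}}{10\ \text{Gbps}} = 67.2\ \text{ns}
```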
Example methods, apparatus, and articles of manufacture disclosed herein decouple the operation writing payload timestamps and status to shared memory from the operation releasing descriptors (e.g., setting the control bit to zero). Example descriptors disclosed herein include a writeback address pointer that points to a location in shared memory to which the memory access control circuitry (e.g., DMA control circuitry) is to write timestamps and/or status of transmitted packets rather than writing the timestamps and/or status directly to the location of the descriptors in the shared memory.
As such, memory access control circuitry disclosed herein closes disclosed descriptors (e.g., by setting the control bit to zero) as soon as disclosed memory access control circuitry fetches the payload data to be transmitted, without waiting for the packet to be transmitted. Accordingly, examples disclosed herein reduce backpressure and stalling and therefore allow example applications executing on main processor circuitry to overwrite descriptors (e.g., to advance the tail pointer of an example descriptor ring) with updated (e.g., fresh) data more quickly and without waiting for disclosed NICs to release the descriptors after transmission of packets. Such improvements achieved by disclosed examples increase the effective utilization of bandwidth in NICs. Additionally, examples disclosed herein are very area efficient as disclosed NICs store writeback address pointers (e.g., 8 bytes) instead of entire descriptors (e.g., 32 bytes). Also, examples disclosed herein reduce the total descriptor cache by half as compared to existing NICs. Unlike some existing TSN NICs, examples disclosed herein do not need to increase storage for non-posted and completion credits that is otherwise required due to backpressure suffered by the configuration of those existing TSN NICs. Improvements achieved by disclosed methods, apparatus, and articles of manufacture are further magnified when examples disclosed herein are implemented in NICs operating at high speeds.
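A minimal sketch of this decoupling follows; the field names, the completion record layout, and the helper functions are hypothetical illustrations of the description above, not a definitive implementation of the disclosed NIC.

```c
#include <stdint.h>

/* Hypothetical descriptor carrying a writeback address pointer. */
struct tx_descriptor_wb {
    uint64_t payload_addr;
    uint32_t payload_len;
    uint32_t flags;                /* control bit, traffic class, etc.    */
    uint64_t writeback_addr;       /* where timestamp/status will land    */
    uint64_t reserved;
};

/* Hypothetical record written to *writeback_addr after transmission. */
struct tx_completion {
    uint64_t timestamp;
    uint32_t status;
    uint32_t pad;
};

/* After fetching the payload, keep only the 8-byte writeback pointer
 * (rather than the 32-byte descriptor) and close the descriptor at once,
 * without waiting for the packet to be transmitted. */
static uint64_t fetch_and_release(struct tx_descriptor_wb *d)
{
    uint64_t wb = d->writeback_addr;
    /* ... payload read upstream into the per-class data-cache queue ...  */
    d->flags &= ~(1u << 31);       /* set the control bit to zero now     */
    return wb;
}

/* Later, once the MAC circuitry timestamps the transmitted packet, the
 * timestamp and status go to the writeback location in shared memory.    */
static void write_back_completion(uint64_t wb_addr, uint64_t timestamp,
                                  uint32_t status)
{
    struct tx_completion *c = (struct tx_completion *)(uintptr_t)wb_addr;
    c->timestamp = timestamp;
    c->status    = status;
}
```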
In some examples, the shared memory 602 may be implemented by one or more mass storage devices such as HDD(s), CD drive(s), DVD drive(s), SSD drive(s), SD card(s), CF card(s), etc. While in the illustrated example the shared memory 602 is illustrated as a single memory, the shared memory 602 may be implemented by any number and/or type(s) of memories. Furthermore, the data stored in the shared memory 602 may be in any data format such as, for example, binary data, comma delimited data, tab delimited data, SQL structures, etc. In the example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
Additionally, in the illustrated example of
The example of
Returning to the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
As described above, each queue of the example data cache 610 corresponds to a traffic class of the data stored in that queue. For example, the queue to which payload data associated with a descriptor corresponds is representative of a channel number of the memory access control circuitry 614. In some examples, the channel numbers of the memory access control circuitry 614 are programmed ahead of time (e.g., by the MAC circuitry 612). Additionally, for example, the index of the payload data in that queue is representative of a transaction identifier (ID) of the payload data. In some examples, the transaction ID starts at zero and is incremented corresponding to the number of descriptors for a channel.
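A minimal sketch of that bookkeeping, with hypothetical names, is shown below: the queue (traffic class) supplies the channel number, and a per-channel counter supplies the transaction ID.

```c
#include <stdint.h>

#define NUM_CHANNELS 8u            /* one channel per traffic-class queue */

struct txn_id_state {
    uint32_t next_txn_id[NUM_CHANNELS];   /* each starts at zero          */
};

/* The transaction ID for a payload is the current per-channel count,
 * incremented once per descriptor on that channel. */
static uint32_t allocate_txn_id(struct txn_id_state *s, uint32_t channel)
{
    return s->next_txn_id[channel % NUM_CHANNELS]++;
}
```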
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In some examples, the NIC 604 includes means for controlling media access. For example, the means for controlling media access may be implemented by the media access control circuitry 612. In some examples, the media access control circuitry 612 may be instantiated by processor circuitry such as the example processor circuitry 912 of
In some examples, the NIC 604 includes means for controlling memory access. For example, the means for controlling memory access may be implemented by the memory access control circuitry 614. In some examples, the memory access control circuitry 614 may be instantiated by processor circuitry such as the example processor circuitry 912 of
In some examples, the NIC 604 includes means for indicating. For example, the means for indicating may be implemented by the packetization circuitry 650. In some examples, the packetization circuitry 650 may be instantiated by processor circuitry such as the example processor circuitry 912 of
In some examples, the NIC 604 includes means for parsing. For example, the means for parsing may be implemented by the parsing circuitry 654. In some examples, the parsing circuitry 654 may be instantiated by processor circuitry such as the example processor circuitry 912 of
In some examples, the NIC 604 includes one or more means for storing. For example, the one or more means for storing may be implemented by the data cache 610, the descriptor cache 620, and/or the writeback address cache 622. For example, the writeback address cache 622 may implement first means for storing, the data cache 610 may implement second means for storing, and the descriptor cache 620 may implement third means for storing. In additional or alternative examples, the data cache 610 implements means for storing data, the descriptor cache 620 implements means for storing one or more descriptors, and the writeback address cache 622 implements means for storing one or more writeback address pointers. In some examples, the data cache 610, the descriptor cache 620, and/or the writeback address cache 622 may be implemented by one or more registers, a main memory, a volatile memory (e.g., Static Random Access Memory (SRAM), Synchronous Dynamic Random-Access Memory (SDRAM), Dynamic Random-Access Memory (DRAM), RAMBUS® Dynamic Random-Access Memory (RDRAM®), and/or any other type of RAM device), and/or a non-volatile memory (e.g., flash memory and/or any other desired type of memory device).
In some examples, one or more of the shared memory 602, the data cache 610, the descriptor cache 620, or the writeback address cache 622 may be virtualized. For example, one or more memories or other storage media may be aggregated into a virtual memory pool and made available to the NIC 604, the main processor circuitry 606, and/or other compute circuitry by software (e.g., machine readable instructions) and/or hardware circuitry. Such memories or other storage media may be on the same chip as the compute platform 600, on a separate chip outside of the compute platform 600 but on the same device as the compute platform 600, on a separate device from the compute platform 600, among other configurations. Example software includes an application programming interface (API) that allows an application executing on the NIC 604, the main processor circuitry 606, and/or other compute circuitry to access the virtual memory pool. In another example, the software includes an operating system on a compute platform that interfaces between the virtual memory pool and an application executing on the NIC 604, the main processor circuitry 606, and/or other compute circuitry. In some examples, software and/or hardware circuitry utilizes an edge translation table to translate a virtual address in the virtual memory pool to a physical address of physical memory hosted at an edge location. In such examples, the edge translation table maps virtual addresses to physical addresses (e.g., virtual memory mapping).
Additionally, in some examples, one or more of the shared memory 602, the data cache 610, the descriptor cache 620, or the writeback address cache 622 may be referred to as storage circuitry. For example, the shared memory 602 may be referred to as shared storage circuitry, the data cache 610 may be referred to as data storage circuitry, the descriptor cache 620 may be referred to as descriptor storage circuitry, and the writeback address cache 622 may be referred to as writeback address storage circuitry. Storage resources described herein (e.g., non-transitory computer readable medium, non-transitory computer readable storage medium, storage circuitry, memory, cache, etc.) may be implemented by circuitry that is to store information (e.g., the circuitry physically stores that information) or circuitry managing media storing the information, where the media includes electronically operated media and non-electronically operated media.
While an example manner of implementing the NIC 604 of
A flowchart representative of example hardware logic circuitry, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the NIC 604 of
The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data or a data structure (e.g., as portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or compute devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc., in order to make them directly readable, interpretable, and/or executable by a compute device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and/or stored on separate compute devices, wherein the parts when decrypted, decompressed, and/or combined form a set of machine executable instructions that implement one or more operations that may together form a program such as that described herein.
In another example, the machine readable instructions may be stored in a state in which they may be read by processor circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc., in order to execute the machine readable instructions on a particular compute device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable media, as used herein, may include machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.
As mentioned above, the example operations of
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc., may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, or (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B.
As used herein, singular references (e.g., “a,” “an,” “first,” “second,” etc.) do not exclude a plurality. The term “a” or “an” object, as used herein, refers to one or more of that object. The terms “a” (or “an”), “one or more,” and “at least one” are used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., the same entity or object. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
In the illustrated example of
The processor platform 900 of the illustrated example includes processor circuitry 912. The processor circuitry 912 of the illustrated example is hardware. For example, the processor circuitry 912 can be implemented by one or more integrated circuits, logic circuits, FPGAs, microprocessors, CPUs, GPUs, DSPs, and/or microcontrollers from any desired family or manufacturer. The processor circuitry 912 may be implemented by one or more semiconductor based (e.g., silicon based) devices.
The processor circuitry 912 of the illustrated example includes a local memory 913 (e.g., a cache, registers, etc.). The processor circuitry 912 of the illustrated example is in communication with a main memory including a volatile memory 914 and a non-volatile memory 916 by a bus 918. The volatile memory 914 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®), and/or any other type of RAM device. The non-volatile memory 916 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 914, 916 of the illustrated example is controlled by a memory controller 917.
The processor platform 900 of the illustrated example also includes the example network interface circuitry (NIC) 604. The NIC 604 may be implemented by hardware in accordance with any type of interface standard, such as an Ethernet interface, a universal serial bus (USB) interface, a Bluetooth® interface, a near field communication (NFC) interface, a Peripheral Component Interconnect (PCI) interface, and/or a Peripheral Component Interconnect Express (PCIe) interface. In some examples, the NIC 604 may also be referred to as a host fabric interface (HFI). In the example of
In some examples, the NIC 604 is implemented on the same die as the processor circuitry 912. In additional or alternative examples, the NIC 604 is implemented within the same package as the processor circuitry 912. In some examples, the NIC 604 is implemented in a different package from the package in which the processor circuitry 912 is implemented. For example, the NIC 604 may be implemented as one or more add-in-boards, daughter cards, network interface cards, controller chips, chipsets, or other devices that may be used by the processor circuitry 912 to connect with another processor platform and/or other device.
In the illustrated example, one or more input devices 922 are connected to the NIC 604. The input device(s) 922 permit(s) a user to enter data and/or commands into the processor circuitry 912. The input device(s) 922 can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, an isopoint device, and/or a voice recognition system.
One or more output devices 924 are also connected to the NIC 604 of the illustrated example. The output device(s) 924 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube (CRT) display, an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or a speaker. The NIC 604 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or graphics processor circuitry such as a GPU.
The processor platform 900 of the illustrated example also includes one or more mass storage devices 928 to store software and/or data. Examples of such mass storage devices 928 include magnetic storage devices, optical storage devices, floppy disk drives, HDDs, CDs, Blu-ray disk drives, redundant array of independent disks (RAID) systems, solid state storage devices such as flash memory devices and/or SSDs, and DVD drives.
The machine executable instructions 932 of
The cores 1002 may communicate by a first example bus 1004. In some examples, the first bus 1004 may implement a communication bus to effectuate communication associated with one(s) of the cores 1002. For example, the first bus 1004 may implement at least one of an Inter-Integrated Circuit (I2C) bus, a Serial Peripheral Interface (SPI) bus, a PCI bus, or a PCIe bus. Additionally or alternatively, the first bus 1004 may implement any other type of computing or electrical bus. The cores 1002 may obtain data, instructions, and/or signals from one or more external devices by example interface circuitry 1006. The cores 1002 may output data, instructions, and/or signals to the one or more external devices by the interface circuitry 1006. Although the cores 1002 of this example include example local memory 1020 (e.g., Level 1 (L1) cache that may be split into an L1 data cache and an L1 instruction cache), the microprocessor 1000 also includes example shared memory 1010 that may be shared by the cores (e.g., Level 2 (L2) cache) for high-speed access to data and/or instructions. Data and/or instructions may be transferred (e.g., shared) by writing to and/or reading from the shared memory 1010. The local memory 1020 of each of the cores 1002 and the shared memory 1010 may be part of a hierarchy of storage devices including multiple levels of cache memory and the main memory (e.g., the main memory 914, 916 of
Each core 1002 may be referred to as a CPU, DSP, GPU, etc., or any other type of hardware circuitry. Each core 1002 includes control unit circuitry 1014, arithmetic and logic (AL) circuitry 1016 (sometimes referred to as an ALU and/or arithmetic and logic circuitry), a plurality of registers 1018, the L1 cache 1020, and a second example bus 1022. Other structures may be present. For example, each core 1002 may include vector unit circuitry, single instruction multiple data (SIMD) unit circuitry, load/store unit (LSU) circuitry, branch/jump unit circuitry, floating-point unit (FPU) circuitry, etc. The control unit circuitry 1014 includes semiconductor-based circuits structured to control data movement (e.g., coordinate data movement) within the corresponding core 1002. The AL circuitry 1016 includes semiconductor-based circuits structured to perform one or more mathematic and/or logic operations on the data within the corresponding core 1002. The AL circuitry 1016 of some examples performs integer based operations. In other examples, the AL circuitry 1016 also performs floating point operations. In yet other examples, the AL circuitry 1016 may include first AL circuitry that performs integer based operations and second AL circuitry that performs floating point operations. In some examples, the AL circuitry 1016 may be referred to as an Arithmetic Logic Unit (ALU). The registers 1018 are semiconductor-based structures to store data and/or instructions such as results of one or more of the operations performed by the AL circuitry 1016 of the corresponding core 1002. For example, the registers 1018 may include vector register(s), SIMD register(s), general purpose register(s), flag register(s), segment register(s), machine specific register(s), instruction pointer register(s), control register(s), debug register(s), memory management register(s), machine check register(s), etc. The registers 1018 may be arranged in a bank as shown in
Each core 1002 and/or, more generally, the microprocessor 1000 may include additional and/or alternate structures to those shown and described above. For example, one or more clock circuits, one or more power supplies, one or more power gates, one or more cache home agents (CHAs), one or more converged/common mesh stops (CMSs), one or more shifters (e.g., barrel shifter(s)) and/or other circuitry may be present. The microprocessor 1000 is a semiconductor device fabricated to include many transistors interconnected to implement the structures described above in one or more integrated circuits (ICs) contained in one or more packages. The processor circuitry may include and/or cooperate with one or more accelerators. In some examples, accelerators are implemented by logic circuitry to perform certain tasks more quickly and/or efficiently than can be done by a general purpose processor. Examples of accelerators include ASICs and FPGAs such as those discussed herein. A GPU or other programmable device can also be an accelerator. Accelerators may be on-board the processor circuitry, in the same chip package as the processor circuitry and/or in one or more separate packages from the processor circuitry.
More specifically, in contrast to the microprocessor 1000 of
The interconnections 1110 of the illustrated example are conductive pathways, traces, vias, or the like that may include electrically controllable switches (e.g., transistors) whose state can be changed by programming (e.g., using an HDL instruction language) to activate or deactivate one or more connections between one or more of the logic gate circuitry 1108 to program desired logic circuits.
The storage circuitry 1112 of the illustrated example is structured to store result(s) of the one or more of the operations performed by corresponding logic gates. The storage circuitry 1112 may be implemented by registers or the like. In the illustrated example, the storage circuitry 1112 is distributed amongst the logic gate circuitry 1108 to facilitate access and increase execution speed.
The example FPGA circuitry 1100 of
In some examples, the processor circuitry 912 of
A block diagram illustrating an example software distribution platform 1205 to distribute software such as the example machine readable instructions 932 of
From the foregoing, it will be appreciated that example systems, methods, apparatus, and articles of manufacture have been disclosed that improve bandwidth for packet timestamping. Example systems, methods, apparatus, and articles of manufacture disclosed herein increase the effective utilization of bandwidth in NICs. Additionally, examples disclosed herein are area efficient because disclosed NICs store writeback address pointers (e.g., 8 bytes) instead of entire descriptors (e.g., 32 bytes). Also, examples disclosed herein reduce the total descriptor cache by half as compared to existing NICs. Unlike some existing TSN NICs, examples disclosed herein do not need additional storage for non-posted and completion credits, which is otherwise required due to backpressure caused by the configuration of those existing TSN NICs. Disclosed systems, methods, apparatus, and articles of manufacture improve the efficiency of using a compute device by handling packet timestamping and status updates in a manner that is more bandwidth and silicon area efficient than existing techniques. Disclosed systems, methods, apparatus, and articles of manufacture are accordingly directed to one or more improvement(s) in the operation of a machine such as a computer or other electronic and/or mechanical device.
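As a non-limiting illustration of the area efficiency described above, the following sketch (written in C solely for explanatory purposes, with all structure layouts, field names, and sizes assumed rather than taken from the disclosure) contrasts a hypothetical 32-byte transmit descriptor with the 8-byte writeback address pointer that examples disclosed herein retain in its place:

/* Illustrative only: hypothetical layouts, not the disclosed descriptor format. */
#include <stdint.h>

/* Hypothetical 32-byte transmit descriptor fetched from shared memory. */
struct tx_descriptor {
    uint64_t buffer_addr;     /* address of the packet data */
    uint64_t writeback_addr;  /* where the timestamp/status are to be written */
    uint32_t length;          /* payload length */
    uint32_t launch_offset;   /* offset indicating when the data is to be transmitted */
    uint32_t control;         /* control bits, e.g., "descriptor may be overwritten" */
    uint32_t reserved;
};

/* Per-packet state retained on the NIC: only the writeback address pointer. */
struct writeback_entry {
    uint64_t writeback_addr;
};

_Static_assert(sizeof(struct tx_descriptor) == 32, "descriptor sketch is 32 bytes");
_Static_assert(sizeof(struct writeback_entry) == 8, "pointer sketch is 8 bytes");

Under these assumed layouts, each entry retained by the NIC is one quarter the size of the descriptor from which it was parsed, consistent with the reduction in on-die descriptor storage described above.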
Example methods, apparatus, systems, and articles of manufacture to improve bandwidth for packet timestamping are disclosed herein. Further examples and combinations thereof include the following (an illustrative sketch of the recited operations follows the examples):
Example 1 includes an apparatus to improve bandwidth for packet timestamping comprising cache to store a pointer, the pointer indicative of an address in shared storage circuitry where a timestamp is to be stored, the pointer corresponding to a descriptor of data to be transmitted to a second device, and processor circuitry including one or more of at least one of a central processor unit (CPU), a graphics processor unit (GPU), or a digital signal processor (DSP), the at least one of the CPU, the GPU, or the DSP having control circuitry to control data movement within the processor circuitry, arithmetic and logic circuitry to perform one or more first operations corresponding to instructions, and one or more registers to store a first result of the one or more first operations, the instructions in the apparatus, a Field Programmable Gate Array (FPGA), the FPGA including first logic gate circuitry, a plurality of configurable interconnections, and storage circuitry, the first logic gate circuitry and the interconnections to perform one or more second operations, the storage circuitry to store a second result of the one or more second operations, or Application Specific Integrated Circuitry (ASIC) including second logic gate circuitry to perform one or more third operations, the processor circuitry to perform at least one of the first operations, the second operations, or the third operations to instantiate memory access control circuitry to parse the descriptor to determine the pointer, cause storage of the pointer in the cache, and set a control bit of the descriptor to indicate that the descriptor may be overwritten.
Example 2 includes the apparatus of example 1, wherein the address is a first address different from a second address in the shared storage circuitry where the descriptor is stored.
Example 3 includes the apparatus of any of examples 1 or 2, wherein the processor circuitry is to perform at least one of the first operations, the second operations, or the third operations to instantiate the memory access control circuitry to, in response to transmission of the data to the second device, cause storage of the timestamp at the address in the shared storage circuitry indicated by the pointer.
Example 4 includes the apparatus of any of examples 1, 2, or 3, wherein the address is a first address, the pointer is a first pointer, the cache is to store a second pointer indicative of a second address in the shared storage circuitry where a status of transmission of the data is to be stored, and the processor circuitry is to perform at least one of the first operations, the second operations, or the third operations to instantiate the memory access control circuitry to, in response to the transmission of the data to the second device, cause storage of the timestamp at the first address in the shared storage circuitry and the status at the second address in the shared storage circuitry.
Example 5 includes the apparatus of any of examples 1, 2, 3, or 4, wherein the cache is a first cache, and the processor circuitry is to perform at least one of the first operations, the second operations, or the third operations to instantiate the memory access control circuitry to cause storage of the pointer in the first cache according to an index, the index based on at least a queue of a second cache of the apparatus and a position of the data in the queue, the queue corresponding to a traffic class of the data.
Example 6 includes the apparatus of any of examples 1, 2, 3, 4, or 5, wherein the cache is a first cache, the descriptor includes an offset indicative of a first time at which the data is to be transmitted, and the processor circuitry is to perform at least one of the first operations, the second operations, or the third operations to instantiate the memory access control circuitry to cause storage of the data in a second cache of the apparatus at a second time, the second time different from the first time.
Example 7 includes the apparatus of any of examples 1, 2, 3, 4, 5, or 6, wherein the cache is a first cache, and the processor circuitry is to perform at least one of the first operations, the second operations, or the third operations to instantiate the memory access control circuitry to set the control bit of the descriptor in response to loading the data into media access control circuitry.
Example 8 includes network interface circuitry (NIC) to improve bandwidth for packet timestamping, the NIC comprising cache to store a pointer, the pointer indicative of an address in shared memory where a timestamp is to be stored, the pointer corresponding to a descriptor of data to be transmitted to a second device, and memory access control circuitry to parse the descriptor to determine the pointer, cause storage of the pointer in the cache, and set a control bit of the descriptor to indicate that the descriptor may be overwritten.
Example 9 includes the NIC of example 8, wherein the address is a first address different from a second address in the shared memory where the descriptor is stored.
Example 10 includes the NIC of any of examples 8 or 9, wherein the memory access control circuitry is to, in response to transmission of the data to the second device, cause storage of the timestamp at the address in the shared memory indicated by the pointer.
Example 11 includes the NIC of any of examples 8, 9, or 10, wherein the address is a first address, the pointer is a first pointer, the cache is to store a second pointer indicative of a second address in the shared memory where a status of transmission of the data is to be stored, and the memory access control circuitry is to, in response to the transmission of the data to the second device, cause storage of the timestamp at the first address in the shared memory and the status at the second address in the shared memory.
Example 12 includes the NIC of any of examples 8, 9, 10, or 11, wherein the cache is a first cache, and the memory access control circuitry is to cause storage of the pointer in the first cache according to an index, the index based on at least a queue of a second cache of the NIC and a position of the data in the queue, the queue corresponding to a traffic class of the data.
Example 13 includes the NIC of any of examples 8, 9, 10, 11, or 12, wherein the cache is a first cache, the descriptor includes an offset indicative of a first time at which the data is to be transmitted, and the memory access control circuitry is to cause storage of the data in a second cache of the NIC at a second time, the second time different from the first time.
Example 14 includes the NIC of any of examples 8, 9, 10, 11, 12, or 13, wherein the cache is a first cache, and the memory access control circuitry is to set the control bit of the descriptor in response to loading the data into media access control circuitry.
Example 15 includes at least one non-transitory computer readable medium comprising instructions that, when executed, cause processor circuitry to parse a descriptor to determine a pointer, the descriptor associated with data to be transmitted from a first device to a second device, the pointer indicative of an address in shared memory where a timestamp is to be stored, cause storage of the pointer in a cache, the cache local to the processor circuitry, and set a control bit of the descriptor to indicate that the descriptor may be overwritten.
Example 16 includes the at least one non-transitory computer readable medium of example 15, wherein the address is a first address different from a second address in the shared memory where the descriptor is stored.
Example 17 includes the at least one non-transitory computer readable medium of any of examples 15 or 16, wherein the processor circuitry is to, in response to transmission of the data to the second device, cause storage of the timestamp at the address in the shared memory indicated by the pointer.
Example 18 includes the at least one non-transitory computer readable medium of any of examples 15, 16, or 17, wherein the address is a first address, the pointer is a first pointer, and the processor circuitry is to, in response to transmission of the data to the second device, cause storage of the timestamp at the first address in the shared memory and a status at a second address in the shared memory, the second address indicated by a second pointer.
Example 19 includes the at least one non-transitory computer readable medium of any of examples 15, 16, 17, or 18, wherein the cache is a first cache, and the processor circuitry is to cause storage of the pointer in the first cache according to an index, the index based on at least a queue of a second cache of the processor circuitry and a position of the data in the queue, the queue corresponding to a traffic class of the data.
Example 20 includes the at least one non-transitory computer readable medium of any of examples 15, 16, 17, 18, or 19, wherein the cache is a first cache, the descriptor includes an offset indicative of a first time at which the data is to be transmitted, and the processor circuitry is to cause storage of the data in a second cache of the processor circuitry at a second time, the second time different from the first time.
Example 21 includes the at least one non-transitory computer readable medium of any of examples 15, 16, 17, 18, 19, or 20, wherein the cache is a first cache, and the processor circuitry is to set the control bit of the descriptor in response to loading the data into media access control circuitry.
Example 22 includes an apparatus to improve bandwidth for packet timestamping, the apparatus comprising means for storing a pointer, the pointer indicative of an address in shared memory where a timestamp is to be stored, the pointer corresponding to a descriptor of data to be transmitted to a second device, and means for controlling memory access to parse the descriptor to determine the pointer, cause storage of the pointer in the means for storing, and set a control bit of the descriptor to indicate that the descriptor may be overwritten.
Example 23 includes the apparatus of example 22, wherein the address is a first address different from a second address in the shared memory where the descriptor is stored.
Example 24 includes the apparatus of any of examples 22 or 23, wherein the means for controlling memory access is to, in response to transmission of the data to the second device, cause storage of the timestamp at the address in the shared memory indicated by the pointer.
Example 25 includes the apparatus of any of examples 22, 23, or 24, wherein the address is a first address, the pointer is a first pointer, the means for storing is to store a second pointer indicative of a second address in the shared memory where a status of transmission of the data is to be stored, and the means for controlling memory access is to, in response to the transmission of the data to the second device, cause storage of the timestamp at the first address in the shared memory and the status at the second address in the shared memory.
Example 26 includes the apparatus of any of examples 22, 23, 24, or 25, wherein the means for storing is first means for storing, and the means for controlling memory access is to cause storage of the pointer in the first means for storing according to an index, the index based on at least a queue of second means for storing of the apparatus and a position of the data in the queue, the queue corresponding to a traffic class of the data.
Example 27 includes the apparatus of any of examples 22, 23, 24, 25, or 26, wherein the means for storing is first means for storing, the descriptor includes an offset indicative of a first time at which the data is to be transmitted, and the means for controlling memory access is to cause storage of the data in second means for storing of the apparatus at a second time, the second time different from the first time.
Example 28 includes the apparatus of any of examples 22, 23, 24, 25, 26, or 27, wherein the means for storing is first means for storing, and the means for controlling memory access is to set the control bit of the descriptor in response to loading the data into media access control circuitry.
Example 29 includes a method for improving bandwidth for packet timestamping, the method comprising parsing a descriptor to determine a pointer, the descriptor associated with data to be transmitted from a first device to a second device, the pointer indicative of an address in shared memory where a timestamp is to be stored, storing, by executing an instruction with processor circuitry, the pointer in a cache, the cache local to the processor circuitry, and setting, by executing an instruction with the processor circuitry, a control bit of the descriptor to indicate that the descriptor may be overwritten.
Example 30 includes the method of example 29, wherein the address is a first address different from a second address in the shared memory where the descriptor is stored.
Example 31 includes the method of any of examples 29 or 30, further including storing, in response to transmission of the data to the second device, the timestamp at the address in the shared memory indicated by the pointer.
Example 32 includes the method of any of examples 29, 30, or 31, wherein the address is a first address, the pointer is a first pointer, and the method further includes storing, in response to transmission of the data to the second device, the timestamp at the first address in the shared memory and a status at a second address in the shared memory, the second address indicated by a second pointer.
Example 33 includes the method of any of examples 29, 30, 31, or 32, wherein the cache is a first cache, and the method further includes storing the pointer in the first cache according to an index, the index based on at least a queue of a second cache of the processor circuitry and a position of the data in the queue, the queue corresponding to a traffic class of the data.
Example 34 includes the method of any of examples 29, 30, 31, 32, or 33, wherein the cache is a first cache, the descriptor includes an offset indicative of a first time at which the data is to be transmitted, and the method further includes storing the data in a second cache of the processor circuitry at a second time, the second time different from the first time.
Example 35 includes the method of any of examples 29, 30, 31, 32, 33, or 34, wherein the cache is a first cache, and the method further includes setting the control bit of the descriptor in response to loading the data into media access control circuitry.
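The following sketch, written in C with hypothetical names, sizes, and a simplified host-visible memory model (the foregoing examples recite hardware circuitry rather than driver code), illustrates the sequence recited above: a descriptor is parsed, its writeback pointer(s) are cached according to an index based on a queue and a position in the queue, a control bit is set so that the descriptor may be overwritten, and, after transmission of the data, the timestamp and the status are written to the pointed-to addresses in shared memory. Separate timestamp and status pointers are assumed here, in the manner of examples 4 and 11.

/* Minimal sketch of the operations recited in the examples above.  All names,
 * sizes, and the host-visible memory model are assumptions made for
 * illustration; actual hardware would perform these steps in logic circuitry. */
#include <stdint.h>

#define NUM_QUEUES     8   /* assumed: one transmit queue per traffic class */
#define ENTRIES_PER_Q  64  /* assumed queue depth */

#define CTRL_DESC_REUSABLE (1u << 0)  /* "descriptor may be overwritten" bit */

struct tx_descriptor {
    uint64_t buffer_addr;        /* packet data in shared memory */
    uint64_t timestamp_wb_addr;  /* where the transmit timestamp is to be stored */
    uint64_t status_wb_addr;     /* where the transmission status is to be stored */
    uint32_t launch_offset;      /* first time at which the data is to be transmitted */
    uint32_t control;            /* control bits */
};

struct wb_pointers {
    uint64_t timestamp_wb_addr;
    uint64_t status_wb_addr;
};

/* Pointer cache indexed by queue (traffic class) and position in the queue. */
static struct wb_pointers wb_cache[NUM_QUEUES][ENTRIES_PER_Q];

/* Parse the descriptor, retain its writeback pointers, and set the control
 * bit so that the descriptor may be overwritten. */
static void accept_descriptor(struct tx_descriptor *desc,
                              unsigned int queue, unsigned int pos)
{
    wb_cache[queue][pos].timestamp_wb_addr = desc->timestamp_wb_addr;
    wb_cache[queue][pos].status_wb_addr    = desc->status_wb_addr;
    desc->control |= CTRL_DESC_REUSABLE;
}

/* After the data has been transmitted, write the timestamp and the status to
 * the addresses indicated by the cached pointers.  In hardware this would be
 * a write (e.g., a DMA transaction) to shared memory. */
static void complete_transmission(unsigned int queue, unsigned int pos,
                                  uint64_t timestamp, uint32_t status)
{
    *(volatile uint64_t *)(uintptr_t)wb_cache[queue][pos].timestamp_wb_addr = timestamp;
    *(volatile uint32_t *)(uintptr_t)wb_cache[queue][pos].status_wb_addr    = status;
}

Because only the pointers are retained, the descriptor itself may be overwritten as soon as the control bit is set, without waiting for the timestamp and status writeback.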
The following claims are hereby incorporated into this Detailed Description by this reference. Although certain example systems, methods, apparatus, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all systems, methods, apparatus, and articles of manufacture fairly falling within the scope of the claims of this patent.