The disclosure relates to computer networks and, more particularly, to processing information communicated, or to be communicated, over a network.
In a typical cloud-based data center, a large collection of interconnected servers provides computing and/or storage capacity for execution of various applications. For example, a data center may comprise a facility that hosts applications and services for subscribers, i.e., customers of the data center. The data center may, for example, host all of the infrastructure equipment, such as compute nodes, networking and storage systems, power systems, and environmental control systems.
In most data centers, clusters of storage systems and application servers are interconnected via a high-speed switch fabric provided by one or more tiers of physical network switches and routers. In some implementations, packets communicated or to be communicated over the switch fabric are parsed by a state machine specific to the type of network being used and are processed by a pipeline of fixed-function blocks.
Aspects of this disclosure describe techniques for parsing network packets, processing network packets, and modifying network packets before forwarding the modified network packets over a network. The present disclosure describes a system that, in some examples, parses network packets, generates data describing or specifying attributes of the packet, identifies operations to be performed when processing a network packet, performs the identified operations, generates data describing or specifying how to modify and/or forward the packet, modifies the packet, and/or outputs the modified packet to another device or system, such as a switch.
In accordance with one or more aspects of the present disclosure, techniques described herein include parsing network packets to generate data, referred to in some examples as a parsed result vector, describing or specifying attributes of the packet. Based on the parsed result vector, a number of operations may be identified and performed to generate data, referred to in some examples as metadata, that describes how to modify and/or forward the packet. In some examples, network packets may be parsed by a set of parallel parsing devices, some of which share certain hardware used for the parsing. Further, in some examples, at least some of the operations may be performed by a series of flexible forwarding engines, where each of the flexible forwarding engines performs one or more operations that each update metadata received from an earlier flexible forwarding engine or other block. The resulting metadata produced by the operations may, in some examples, be used to identify rewrite instructions that, when executed, modify the packet. The modified packet may be consumed by another functional block, such as one that forwards the packet on the network.
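The flow described above — parse a packet into a parsed result vector, run a series of engines that each update metadata, then apply rewrite instructions — may be sketched in Python as follows. All names, field choices, and the toy lookup are illustrative assumptions, not part of the disclosure:

```python
def parse(packet: bytes) -> dict:
    """Produce a parsed result vector (PRV): attributes of the packet.

    Hypothetical layout: treat the first 4 bytes as a destination key.
    """
    return {"dst": packet[0:4], "length": len(packet)}

def engine_lookup(prv: dict, metadata: dict) -> dict:
    """A flexible forwarding engine: choose an egress port from the PRV."""
    metadata["egress_port"] = sum(prv["dst"]) % 4   # toy destination lookup
    return metadata

def engine_rewrite_select(prv: dict, metadata: dict) -> dict:
    """A later engine: pick rewrite instructions based on earlier metadata."""
    metadata["rewrite"] = ["decrement_ttl"] if metadata["egress_port"] != 0 else []
    return metadata

def process(packet: bytes) -> tuple:
    prv = parse(packet)
    metadata: dict = {}
    # Each engine in the series updates the metadata produced by the
    # previous engine (or other block).
    for engine in (engine_lookup, engine_rewrite_select):
        metadata = engine(prv, metadata)
    # Execute the identified rewrite instructions to modify the packet.
    if "decrement_ttl" in metadata["rewrite"]:
        packet = packet[:4] + bytes([max(packet[4] - 1, 0)]) + packet[5:]
    return packet, metadata
```

The modified packet and its metadata would then be handed to a downstream block, such as one that forwards the packet on the network.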
Techniques described herein further include associating a timestamp with a network packet, and carrying the timestamp or otherwise associating the timestamp with the network packet during some or all processing by one or more systems described herein. In some examples, the timestamp may be used to determine how much time has elapsed since the network packet started to be received by the system. Such information may be used to determine whether information derived from early portions of a network packet received at an ingress port can be transmitted, without causing an error, to a receiving device over an egress port before later portions of that same network packet have been received at the ingress port. By evaluating the amount of time that has elapsed since the network packet started to be received, it may be possible to determine whether later portions of a network packet that have not yet been received will be received in a sufficiently timely manner such that transmitting an early portion of the network packet will not incur an underrun error or other error. In some examples, data from the network packet may be organized into cell-sized units, which may represent a minimum amount of data that may be transmitted over an egress port at a time.
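One way to express the timing test described above — will the remainder of the packet arrive fast enough that starting egress transmission now cannot underrun? — is sketched below. The rate model (bytes arriving steadily at the ingress line rate) and all parameter names are simplifying assumptions for illustration:

```python
def safe_to_start_transmit(now_ns: float,
                           bytes_received: int,
                           packet_length: int,
                           ingress_bps: float,
                           egress_bps: float) -> bool:
    """Return True if starting transmission now cannot cause an egress underrun.

    Remaining bytes keep arriving at the ingress rate; transmitting the whole
    packet at the egress rate must not finish before reception does.
    """
    remaining_bytes = packet_length - bytes_received
    rx_done_ns = now_ns + remaining_bytes * 8 / ingress_bps * 1e9
    tx_done_ns = now_ns + packet_length * 8 / egress_bps * 1e9
    return tx_done_ns >= rx_done_ns

def bytes_received_since(now_ns: float, rx_start_ns: float,
                         ingress_bps: float) -> int:
    """Derive how many bytes have arrived so far from the stored timestamp."""
    return int((now_ns - rx_start_ns) / 1e9 * ingress_bps / 8)
```

In this model, when the egress port is much faster than the ingress port, the check fails for an early start, which matches the intuition that a fast egress port can drain data faster than a slow ingress port can supply it.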
Techniques in accordance with one or more aspects of the present disclosure may provide several technical advantages. For instance, the manner in which parsing is performed in some examples may be flexible enough to enable parsing, by the same system, of multiple types of network packets. Aspects of one or more systems described herein may further enable high-speed yet flexible parsing of multiple types of network packets in an efficient manner. For instance, in some examples, multiple parsing devices may be used in parallel, and techniques for sharing aspects of the parsing devices may be employed to attain high parsing rates. Further, by evaluating timestamp information to determine how much time has elapsed since a network packet started to be received by the system, it may be possible to start transmitting data derived from early portions of the network packet before the entire network packet has been received. Such a system may have very low latency, and may have a higher throughput and be more efficient than other systems, while ensuring little or no possibility of incurring an underrun condition or other error resulting from such early transmission.
In some examples, this disclosure describes operations performed by a low latency packet switch. In one specific example, this disclosure describes a method comprising: receiving, at an ingress port of a device, an initial portion of a network packet; storing, by the device, timestamp information associated with receiving the initial portion of the packet; determining, by the device, whether to transmit information derived from the initial portion of the network packet out an egress port by determining, based on the timestamp, that information from a later portion of the network packet will be available to be transmitted out the egress port at a later time such that transmitting the information derived from the initial portion of the network packet will not cause an underrun condition; and transmitting, by the device and out the egress port to a receiving device, the information derived from the initial portion of the network packet.
In another specific example, this disclosure describes a method comprising receiving, at an ingress port of a device, an initial portion of a network packet; storing, by the device, timestamp information associated with receiving the initial portion of the packet; identifying, by the device and based on information included within the network packet, an egress port of the device for outputting information to a destination device; determining, by the device, whether to transmit information derived from the initial portion of the network packet out the egress port by determining that information from a later portion of the network packet will be available to be transmitted out the egress port sufficiently soon such that transmitting the information derived from the initial portion of the network packet will not cause an underrun condition; transmitting, by the device and over the egress port to a receiving device, the information derived from the initial portion of the network packet; receiving, by the device and at the ingress port of the device, the later portion of the network packet; and transmitting, by the device and over the egress port to the receiving device, the information from the later portion of the network packet.
In other examples, this disclosure describes one or more systems, components, or devices that perform various operations as described herein. In still other examples, this disclosure describes a computer-readable storage medium comprising instructions that, when executed, configure processing circuitry of a computing system to perform various operations as described herein.
The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
In some examples, data center 10 may represent one of many geographically distributed network data centers. In the example of
In this example, data center 10 includes a set of storage systems and application servers 12 interconnected via a high-speed switch fabric 14. In some examples, servers 12 are arranged into multiple different server groups, each including any number of servers up to, for example, n servers 12-1 through 12-N. Servers 12 provide computation and storage facilities for applications and data associated with customers 11 and may be physical (bare-metal) servers, virtual machines running on physical servers, virtualized containers running on physical servers, or combinations thereof.
In the example of
Although not shown, data center 10 may also include, for example, one or more non-edge switches, routers, hubs, gateways, security devices such as firewalls, intrusion detection, and/or intrusion prevention devices, servers, computer terminals, laptops, printers, databases, wireless mobile devices such as cellular phones or personal digital assistants, wireless access points, bridges, cable modems, application accelerators, or other network devices.
In the example of
In example implementations, access nodes 17 are configurable to operate in a standalone network appliance having one or more access nodes. For example, access nodes 17 may be arranged into multiple different access node groups 19, each including any number of access nodes up to, for example, x access nodes 17-1 through 17-X. As such, multiple access nodes 17 may be grouped (e.g., within a single electronic device or network appliance), referred to herein as an access node group 19, for providing services to a group of servers supported by the set of access nodes internal to the device. In one example, an access node group 19 may comprise four access nodes 17, each supporting four servers so as to support a group of sixteen servers.
In the example of
As one example, each access node group 19 of multiple access nodes 17 may be configured as a standalone network device, and may be implemented as a two rack unit (2 RU) device that occupies two rack units (e.g., slots) of an equipment rack. In another example, access node 17 may be integrated within a server, such as a single 1RU server in which four CPUs are coupled to the forwarding ASICs described herein on a motherboard deployed within a common computing device. In yet another example, one or more of access nodes 17 and servers 12 may be integrated in a suitable size (e.g., 10 RU) frame that may, in such an example, become a network storage compute unit (NSCU) for data center 10. For example, an access node 17 may be integrated within a motherboard of a server 12 or otherwise co-located with a server in a single chassis.
According to the techniques herein, example implementations are described in which access nodes 17 interface and utilize switch fabric 14 so as to provide full mesh (any-to-any) interconnectivity such that any of servers 12 may communicate packet data for a given packet flow to any other of the servers using any of a number of parallel data paths within the data center 10. For example, network architectures and techniques are described in which access nodes, in example implementations, spray individual packets for packet flows between the access nodes and across some or all of the multiple parallel data paths in the data center switch fabric 14, and reorder the packets for delivery to the destinations so as to provide full mesh connectivity.
As described herein or in applications incorporated herein, a new data transmission protocol referred to as a Fabric Control Protocol (FCP) may be used by the different operational networking components of any of access nodes 17 to facilitate communication of data across switch fabric 14. As further described, FCP is an end-to-end admission control protocol in which, in one example, a sender explicitly requests permission from a receiver to transfer a certain number of bytes of payload data. In response, the receiver issues a grant based on its buffer resources, QoS, and/or a measure of fabric congestion. In general, FCP enables spraying of packets of a flow across all paths between a source and a destination node, and may provide any of the technical advantages and techniques described herein, including resilience against request/grant packet loss, adaptive and low latency fabric implementations, fault recovery, reduced or minimal protocol overhead cost, support for unsolicited packet transfer, support for coexistence of FCP-capable and FCP-incapable nodes, flow-aware fair bandwidth distribution, transmit buffer management through adaptive request window scaling, receive buffer occupancy based grant management, improved end-to-end QoS, security through encryption and end-to-end authentication, and/or improved ECN marking support.
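The request/grant exchange at the heart of the admission-control behavior described above can be sketched, in greatly simplified form, as follows. The buffer accounting is hypothetical; a real receiver would also weigh QoS and a measure of fabric congestion when issuing grants:

```python
class Receiver:
    """Issues grants against its available buffer resources."""
    def __init__(self, buffer_bytes: int):
        self.available = buffer_bytes

    def grant(self, requested_bytes: int) -> int:
        # Grant no more than the remaining buffer can absorb.
        granted = min(requested_bytes, self.available)
        self.available -= granted
        return granted

class Sender:
    """Explicitly requests before transferring; sends only what is granted."""
    def __init__(self, receiver: Receiver):
        self.receiver = receiver

    def send(self, payload_bytes: int) -> int:
        granted = self.receiver.grant(payload_bytes)
        return granted  # bytes actually placed on the fabric
```

Because the sender never transmits more than the receiver has granted, receive-buffer overruns are avoided by construction in this model.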
As further described herein, access nodes 17 of
In some examples, access nodes 17 may apply a timestamp to each packet received by an ingress port of an access node 17, and use the timestamp to determine whether a sufficient amount of time has elapsed such that information from the packet can be transmitted over an egress port of that access node 17 without causing an underrun condition. Use of the timestamp may enable the access node to determine that information from an initial portion of the network packet can be transmitted over an egress port, even before the entire network packet has been received by the access node. Further, in some examples, access node 17 may organize data packets in units of cells, and may transmit information in units of complete cells.
The techniques may provide certain technical advantages. For example, the techniques may enable flexible parsing of multiple types of network packets in a high-speed and efficient manner. Further, the techniques may enable efficient and high-speed parsing of packets being forwarded over switch fabric 14. In some examples, aspects of the parsing process may be performed in parallel to attain high parsing rates. Still further, the techniques may enable efficient cut-through packet switching, enabling access nodes 17 to start forwarding a network packet or other data unit before the whole network packet (or data unit) has been received by the access node. In some examples, such a procedure may enable cut-through switching even in cases where the egress port is capable of higher data rates than the ingress port.
Aspects of this disclosure relate to U.S. Provisional Patent Application No. 62/566,060, filed Sep. 29, 2017, entitled “Fabric Control Protocol for Data Center Networks with Packet Spraying over Multiple Alternate Data Paths,” and U.S. Provisional Patent Application No. 62/642,798, filed Mar. 14, 2018, entitled “Flexible Processing Of Network Packets,” the entire content of each of which is incorporated herein by reference.
Access node 130 may operate substantially similar to any of the access nodes 17 of
In the illustrated example of
In this example, access node 130 represents a high performance, hyper-converged network, storage, and data processor and input/output hub. Cores 140 may comprise one or more of microprocessor without interlocked pipeline stages (MIPS) cores, advanced reduced instruction set computing (RISC) machine (ARM) cores, performance optimization with enhanced RISC - performance computing (PowerPC) cores, RISC five (RISC-V) cores, or complex instruction set computing (CISC or x86) cores. Each of cores 140 may be programmed to process one or more events or activities related to a given data packet such as, for example, a networking packet or a storage packet. Each of cores 140 may be programmable using a high-level programming language, e.g., C, C++, or the like.
In some examples, the plurality of cores 140 may be capable of processing a plurality of events related to each data packet of one or more data packets, received by networking unit 142, in a sequential manner using one or more work units. In general, work units are sets of data exchanged between cores 140 and networking unit 142, where each work unit may represent one or more of the events related to a given data packet. In some examples, in processing the plurality of events related to each data packet, a first one of the plurality of cores 140, e.g., core 140A, may process a first event of the plurality of events. Moreover, first core 140A may provide to a second one of plurality of cores 140, e.g., core 140B, a first work unit of the one or more work units. Furthermore, second core 140B may process a second event of the plurality of events in response to receiving the first work unit from first core 140A.
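The work-unit handoff between cores described above can be modeled with simple per-core queues. The event names, the work-unit fields, and the queue discipline here are illustrative assumptions only:

```python
from collections import deque

class Core:
    def __init__(self, name: str):
        self.name = name
        self.inbox = deque()   # work units delivered to this core
        self.handled = []      # events this core has processed

    def process(self):
        """Process one event; return a work unit for a next core, if any."""
        if not self.inbox:
            return None
        wu = self.inbox.popleft()
        self.handled.append(wu["event"])
        remaining = wu.get("next", [])
        if remaining:
            return {"event": remaining[0], "next": remaining[1:],
                    "packet": wu["packet"]}
        return None

# Core 140A processes the first event, then hands a work unit to core 140B,
# which processes the second event in response to receiving it.
core_a, core_b = Core("140A"), Core("140B")
core_a.inbox.append({"event": "parse", "next": ["lookup"], "packet": 1})
wu = core_a.process()
if wu is not None:
    core_b.inbox.append(wu)
    core_b.process()
```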
Access node 130 may act as a combination of a switch/router and a number of network interface cards. Networking unit 142 includes a forwarding pipeline implemented using flexible engines (e.g., a parser engine, a look-up engine, and a rewrite engine) and supports features of IP transit switching. For example, networking unit 142 may be configured to receive one or more data packets from and transmit one or more data packets to one or more external devices, e.g., network devices. Networking unit 142 may use processing cores to perform network interface card (NIC) functionality, packet switching, and the like, and may use large forwarding tables and offer programmability. Networking unit 142 may include one or more hardware direct memory access (DMA) engine instances (not shown) configured to fetch packet data for transmission. The packet data may be in buffer memory of on-chip memory unit 134 or off-chip external memory 146, or in host memory.
Networking unit 142 may expose Ethernet ports for connectivity to a network, such as switch fabric 14 of
In some examples, processor 132 may further include one or more accelerators (not shown) configured to perform acceleration for various data-processing functions, such as look-ups, matrix multiplication, cryptography, compression, regular expressions, or the like. For example, the accelerators may comprise hardware implementations of look-up engines, matrix multipliers, cryptographic engines, compression engines, regular expression interpreters, or the like.
Memory controller 144 may control access to on-chip memory unit 134 by cores 140, networking unit 142, and any number of external devices, e.g., network devices, servers, external storage devices, or the like. Memory controller 144 may be configured to perform a number of operations to perform memory management techniques. For example, memory controller 144 may be capable of mapping accesses from one of the cores 140 to a cache memory or a buffer memory of memory unit 134. In some examples, memory controller 144 may map the accesses based on one or more of an address range, an instruction or an operation code within the instruction, a special access, or a combination thereof.
More details on access nodes, including their operation and example architectures, are available in U.S. Provisional Patent Application No. 62/483,844, filed Apr. 10, 2017, entitled “Relay Consistent Memory Management in a Multiple Processor System,” U.S. Provisional Patent Application No. 62/530,591, filed Jul. 10, 2017, entitled “Data Processing Unit for Computing Devices,” and U.S. Provisional Patent Application No. 62/559,021, filed Sep. 15, 2017, entitled “Access Node for Data Centers,” the entire content of each of which is incorporated herein by reference.
As illustrated in
In the example shown, NU 142 includes a forwarding block 172 to forward the packets coming from the fabric ports of FPG 170 and from the endpoint ports of source agent block 180. In the receive direction, FPG 170 or forwarding block 172 may have a flexible parser to parse incoming bytes and generate a parsed result vector (PRV). In the transmit direction, FPG 170 or forwarding block 172 may have a packet rewrite sub-unit to modify the outgoing packets based on the rewrite instructions stored with the packet or otherwise associated with the packet.
Forwarding block 172 may include a pipeline configured to process one PRV, received from FPG 170 and/or source agent block 180, every cycle. The forwarding pipeline of forwarding block 172 may include the following processing sections: attributes, ingress filter, packet lookup, nexthop resolution, egress filter, packet replication, and statistics.
In the attributes processing section, different forwarding attributes, such as virtual layer 2 interface, virtual routing interface, and traffic class, are determined. These forwarding attributes are passed to further processing sections in the pipeline. In the ingress filter processing section, a search key can be prepared from different fields of a PRV and searched against programmed rules. The ingress filter block can be used to modify the normal forwarding behavior using the set of rules. In the packet lookup processing section, certain fields of the PRV are looked up in tables to determine the nexthop index. The packet lookup block supports exact match and longest prefix match lookups.
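The packet-lookup section's two lookup modes can be sketched as follows. The table contents and nexthop indices are hypothetical; a hardware implementation would typically use TCAM or trie structures rather than a linear scan:

```python
import ipaddress

def longest_prefix_match(table: dict, addr: str):
    """Return the nexthop index for the most specific matching prefix.

    `table` maps prefix strings (e.g. "10.0.0.0/8") to nexthop indices.
    """
    ip = ipaddress.ip_address(addr)
    best, best_len = None, -1
    for prefix, nexthop in table.items():
        net = ipaddress.ip_network(prefix)
        if ip in net and net.prefixlen > best_len:
            best, best_len = nexthop, net.prefixlen
    return best

def exact_match(table: dict, key):
    """The exact-match lookup simply indexes the table by the full key."""
    return table.get(key)
```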
In the nexthop resolution processing section, nexthop instructions are resolved and the destination egress port and the egress queue are determined. The nexthop resolution block supports different nexthops such as final nexthop, indirect nexthop, equal cost multipath (ECMP) nexthop, and weighted cost multipath (WCMP) nexthop. The final nexthop stores the information of the egress stream and how egress packets should be rewritten. The indirect nexthop may be used by software to embed an address of the nexthop in memory, which can be used to perform an atomic nexthop update.
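The nexthop resolution behavior described above — following indirect nexthops and selecting among ECMP or WCMP members by a flow hash — can be sketched as follows. The dictionary encoding of nexthops and the weight-expansion scheme for WCMP are illustrative assumptions:

```python
def resolve_ecmp(members: list, flow_hash: int):
    """Equal-cost multipath: pick a member nexthop by flow hash."""
    return members[flow_hash % len(members)]

def resolve_wcmp(members: list, weights: list, flow_hash: int):
    """Weighted-cost multipath: replicate each member by its weight, then
    select as in ECMP, so traffic splits in proportion to the weights."""
    expanded = [m for m, w in zip(members, weights) for _ in range(w)]
    return expanded[flow_hash % len(expanded)]

def resolve(nexthop: dict, flow_hash: int) -> dict:
    """Follow nexthop indirections until a final nexthop is reached."""
    while nexthop["type"] != "final":
        if nexthop["type"] == "indirect":
            nexthop = nexthop["target"]       # address embedded by software
        elif nexthop["type"] == "ecmp":
            nexthop = resolve_ecmp(nexthop["members"], flow_hash)
        elif nexthop["type"] == "wcmp":
            nexthop = resolve_wcmp(nexthop["members"],
                                   nexthop["weights"], flow_hash)
    return nexthop  # carries egress stream and rewrite information
```

Because the indirect nexthop is a single pointer, software can retarget it atomically, which is the update property noted above.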
In the egress filter processing section, packets are filtered based on the egress port and the egress queue. In most examples, the egress filter block cannot change the egress destination or egress queue, but can sample or mirror packets using the rule sets. If any of the processing stages has determined to create a copy of a packet, the packet replication block generates its associated data. NU 142 might create one extra copy of the incoming packet. The statistics processing section has a set of counters to collect statistics for network management purposes. The statistics block also supports metering to control packet rate to some of the ports or queues.
NU 142 also includes a packet buffer 174 to store packets for port bandwidth oversubscription. Packet buffer 174 may be used to store three kinds of packets: (1) transmit packets received from processing cores 140 on the endpoint ports of source agent block 180 to be transmitted to the fabric ports of FPG 170; (2) receive packets received from the fabric ports of FPG 170 to the processing cores 140 via the endpoint ports of destination agent block 182; and (3) transit packets coming on the fabric ports of FPG 170 and leaving on the fabric ports of FPG 170.
Packet buffer 174 keeps track of memory usage for traffic in different directions and priorities. Based on a programmed profile, packet buffer 174 may decide to drop a packet if an egress port or queue is very congested, assert flow control to a work unit scheduler, or send pause frames to the other end. The key features supported by packet buffer 174 may include: cut-through for transit packets, weighted random early detection (WRED) drops for non-explicit congestion notification (ECN)-aware packets, ECN marking for ECN-aware packets, input and output based buffer resource management, and PFC support.
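The WRED drop decision and ECN-aware handling mentioned above can be sketched as a drop probability that rises linearly between two queue-depth thresholds; the thresholds and maximum probability are hypothetical configuration values:

```python
def wred_drop_probability(avg_queue_depth: float, min_th: float,
                          max_th: float, max_p: float) -> float:
    """Classic WRED curve: no drops below min_th, certain drop at or above
    max_th, and a linearly increasing probability in between."""
    if avg_queue_depth < min_th:
        return 0.0
    if avg_queue_depth >= max_th:
        return 1.0
    return max_p * (avg_queue_depth - min_th) / (max_th - min_th)

def handle_congestion(ecn_capable: bool, drop_p: float, rand: float) -> str:
    """ECN-aware packets are marked instead of dropped; others are dropped."""
    if rand >= drop_p:
        return "enqueue"
    return "mark_ecn" if ecn_capable else "drop"
```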
Packet buffer 174 may have the following sub-units: packet writer, packet memory, cell link list manager, packet queue manager, packet scheduler, packet reader, resource manager, and cell free pool. The packet writer sub-unit collects flow control units (flits) coming from FPG 170, creates cells, and writes them to the packet memory. The packet writer sub-unit gets a Forwarding Result Vector (FRV) from forwarding block 172. The packet memory sub-unit is a collection of memory banks. In one example, the packet memory is made of 16K cells, each cell having a size of 256 bytes and consisting of four microcells of 64 bytes each. Banks inside the packet memory may be of 2Pp (1 write port and 1 read port) type. The packet memory may have a raw bandwidth of 1 Tbps for writes and 1 Tbps for reads. FPG 170 has guaranteed slots to write and to read packets from the packet memory. The endpoint ports of source agent block 180 and destination agent block 182 may use the remaining bandwidth.
The cell link list manager sub-unit maintains a list of cells to represent packets. The cell link list manager may be built from memory having one write port and one read port. The packet queue manager sub-unit maintains a queue of packet descriptors for egress nodes. The packet scheduler sub-unit schedules a packet based on different priorities among the queues. For example, the packet scheduler may be a three-level scheduler: Port, Channel, Queue. In one example, each FPG port of FPG 170 has sixteen queues, and each endpoint port of source agent block 180 and destination agent block 182 has eight queues.
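The cell link list manager's role — representing each packet as a chain of cell indices so that the packet can be reconstructed in order — can be sketched like this. The fixed-size pointer array mirrors a single-write-port, single-read-port memory; the cell counts are hypothetical:

```python
class CellLinkList:
    """Represents packets as singly linked lists of cell indices."""
    def __init__(self, num_cells: int):
        self.next = [None] * num_cells  # next-cell pointer per cell

    def link_packet(self, cells: list) -> int:
        """Chain the given cells together; return the head cell index."""
        for cur, nxt in zip(cells, cells[1:]):
            self.next[cur] = nxt
        self.next[cells[-1]] = None
        return cells[0]

    def walk(self, head: int) -> list:
        """Recover the packet's cells in order by following the links."""
        out, cur = [], head
        while cur is not None:
            out.append(cur)
            cur = self.next[cur]
        return out
```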
For scheduled packets, the packet reader sub-unit reads cells from packet memory and sends them to FPG 170. In some examples, the first 64 bytes of the packet may carry rewrite information. The resource manager sub-unit keeps track of usage of packet memory for different pools and queues. The packet writer block consults the resource manager block to determine if a packet should be dropped. The resource manager block may be responsible to assert flow control to a work unit scheduler or send PFC frames to the ports. The cell free pool sub-unit manages a free pool of packet buffer cell pointers. The cell free pool allocates cell pointers when the packet writer block wants to write a new cell to the packet buffer memory, and deallocates cell pointers when the packet reader block dequeues a cell from the packet buffer memory.
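The cell free pool's allocate/deallocate cycle described above can be sketched as a simple free list of cell pointers; the pool size and the LIFO reuse order are illustrative choices:

```python
class CellFreePool:
    """Manages a free pool of packet-buffer cell pointers."""
    def __init__(self, num_cells: int):
        self.free = list(range(num_cells))

    def allocate(self):
        """Called when the packet writer wants to write a new cell;
        returns None when the buffer memory is exhausted."""
        return self.free.pop() if self.free else None

    def deallocate(self, cell: int):
        """Called when the packet reader dequeues a cell."""
        self.free.append(cell)
```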
NU 142 includes source agent control block 180 and destination agent control block 182 that, collectively, are responsible for FCP control packets. In other examples, source agent control block 180 and destination agent control block 182 may comprise a single control block. Source agent control block 180 generates FCP request messages for every tunnel. In response to FCP grant messages received in response to the FCP request messages, source agent block 180 instructs packet buffer 174 to send FCP data packets based on the amount of bandwidth allocated by the FCP grant messages. In some examples, NU 142 includes an endpoint transmit pipe (not shown) that sends packets to packet buffer 174. The endpoint transmit pipe may perform the following functions: packet spraying, packet fetching from memory 178, packet segmentation based on programmed MTU size, packet encapsulation, packet encryption, and packet parsing to create a PRV. In some examples, the endpoint transmit pipe may be included in source agent block 180 or packet buffer 174.
Destination agent control block 182 generates FCP grant messages for every tunnel. In response to received FCP request messages, destination agent block 182 updates a state of the tunnel and sends FCP grant messages allocating bandwidth on the tunnel, as appropriate. In response to FCP data packets received in response to the FCP grant messages, packet buffer 174 sends the received data packets to packet reorder engine 176 for reordering and reassembly before storage in memory 178. Memory 178 may comprise an on-chip memory or an external, off-chip memory. Memory 178 may comprise RAM or DRAM, for instance. In some examples, NU 142 includes an endpoint receive pipe (not shown) that receives packets from packet buffer 174. The endpoint receive pipe may perform the following functions: packet decryption, packet parsing to create a PRV, flow key generation based on the PRV, determination of one of processing cores 140 for the incoming packet and allocation of a buffer handle in buffer memory, sending the incoming FCP request and grant packets to destination agent block 182, and writing the incoming data packets to buffer memory with the allocated buffer handle.
In some examples, each FPG 170 may serialize/deserialize at a rate of 25 Gbps and have a bandwidth capacity of 100 Gbps and/or 150 million packets per second (MPPS) or higher (in the example of
Each of FPGs 170 may be capable of performing serialize/deserialize operations, and each may be capable of parsing packets and processing MAC and physical coding sublayers. In some examples, each of FPGs 170 may execute its own copy of the same parser logic or software, and each of FPGs 170 may start parsing packet bytes as the packet bytes arrive from the wire (e.g., from the switch fabric 14). As further described herein, each of FPGs 170 has separate wires for 100 Gbps, 50 Gbps, and 25 Gbps ports, thereby saving latency of multiplexing and demultiplexing at the potential cost of additional wires. One or more of FPGs 170 may create a result vector 421 (also described herein as parsed result vector 421). Each of FPGs 170 may output parsed result vector 421 (e.g., 96 bytes for each packet) to forwarding block 172. Forwarding block 172 may be shared among each of FPGs 170.
Forwarding block 172 may identify egress ports for outputting packets received by one or more of FPGs 170, and to do so, may perform destination lookups to determine an egress port for a given packet. Accordingly, forwarding block 172 determines, based on data included within each packet, one of network ports 171 to serve as an egress port. Forwarding block 172 generates forwarding vector 412 and outputs forwarding vector 412 (e.g., 64 bytes) to packet switch block 173, identifying the determined egress port. Forwarding block 172 may also apply access control policies and generate rewrite instructions to modify the egress packet. In some examples, forwarding block 172 may be capable of processing 1000 MPPS or higher.
Packet switch block 173 receives data from one or more of FPGs 170 and switches packets from an ingress port (one of network ports 171) to an egress port (another one of network ports 171). Packet switch block 173 outputs the packet to one of FPGs 170 associated with the determined egress port. In some examples, for the packet forwarding duration, data from packets received over network ports 171 (and through FPGs 170) are stored within buffer 177 in packet switch block 173. Buffer 177 within packet switch block 173 may absorb incoming packets when a port collision occurs (e.g., multiple packets arriving at the same time) or when there is a port (or other component) speed mismatch. In some examples, sixteen queues per port may be used, and an egress packet scheduler may be used to schedule one or more packets on egress ports. Packet switch block 173 may manage buffer 177 in units of cells. In one example, a cell has a size of 256 bytes; in such an example, the first cell may be 64 bytes of forwarding vector 412 and the remaining 192 bytes may correspond to data. Other subsequent cells may consist of 256 bytes of data. In some examples, packet switch block 173 may have 2 terabits per second of write bandwidth as well as 2 terabits per second of read bandwidth.
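The cell layout described above — a 256-byte first cell carrying a 64-byte forwarding vector plus 192 bytes of packet data, followed by 256-byte data cells — can be sketched as a segmentation routine. The constants come from the example above; the function name is illustrative:

```python
CELL = 256  # cell size in bytes, per the example above
FV = 64     # forwarding vector bytes carried in the first cell

def packet_to_cells(forwarding_vector: bytes, data: bytes) -> list:
    """Split a packet into buffer cells using the layout described above."""
    assert len(forwarding_vector) == FV
    # First cell: forwarding vector plus the first 192 bytes of data.
    cells = [forwarding_vector + data[:CELL - FV]]
    rest = data[CELL - FV:]
    # Subsequent cells carry up to 256 bytes of data each.
    for i in range(0, len(rest), CELL):
        cells.append(rest[i:i + CELL])
    return cells
```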
In the example of
One or more of FPGs 170 may parse network packets. For instance, in the example of
One or more of FPGs 170 may associate a timestamp with one or more network packets received at network ports 171. For instance, FPG 170A may apply a timestamp to network packet 310 or otherwise associate a timestamp with network packet 310. In some examples, the timestamp information associated with network packet 310 reflects the time that FPG 170A received initial portion 311 of network packet 310 over network port 171A. Such a timestamp may be applied to network packet 310 (or otherwise associated with network packet 310) before, during, or after the time that FPG 170A parses initial portion 311 of network packet 310. In some examples, FPG 170A may apply a timestamp to initial portion 311 by modifying initial portion 311, thereby including data within initial portion 311 that reflects the time initial portion 311 was received at FPG 170A. In other examples, FPG 170A may apply a timestamp to initial portion 311 by storing data elsewhere within networking unit 142 (or access node 130), such as within parsed result vector 421, which may be stored on a bus, in a buffer, or in another memory location. The timestamp information may correspond to information (e.g., a 32-bit quantity) from or derived from the system clock shared by components within networking unit 142. In some examples, the timestamp information associated with network packet 310 (or initial portion 311) may be carried with network packet 310 throughout networking unit 142 or otherwise made available to some or all of the functional blocks within networking unit 142 and/or access node 130. In such an example, such functional blocks may be able to determine, for each network packet being processed by networking unit 142, the timestamp associated with that network packet reflecting the time the network packet was received by one or more of FPGs 170 of
One or more of FPGs 170 may communicate information to forwarding block 172 and packet switch block 173. For instance, in the example of
Packet switch block 173 may store network packets received from one or more of FPGs 170. For instance, in the example of
Forwarding block 172 may process parsed result vector 421. For instance, still referring to
Packet switch block 173 may forward initial portion 311 to an appropriate one of FPGs 170 for transmitting over an egress port. For instance, in
Each of FPGs 170 may modify one or more network packets received for output across an egress port. For instance, still referring to the example being described in connection with
Each of FPGs 170 may transmit data over an egress port at an appropriate time. For instance, in
However, a store and forward system tends to be slow and introduce latency, since transmission of packets does not commence until the entire packet is stored and possibly processed. Accordingly, as further described herein, FPG 170F may implement a cut-through system in which, in the example of
In the low latency cut-through system being described, FPG 170F eventually transmits all remaining portions of network packet 310. For instance, in the example of
In some examples, networking unit 142 may implement a cut-through timestamp data transmission system, as described above, that also collects data into cells, and transmits data in units of complete cells. For instance, in one such example, when receiving data, packet switch block 173 may collect sufficient data to complete a cell, and then store the cell in buffer 177. A cell may be a fixed size data unit (e.g., 256 bytes in one example) that represents a minimum amount of data that is transmitted by one or more FPGs 170. In such a system, packet switch block 173 may forward complete cells to FPG 170F for transmission over network port 171W. FPG 170F may then transmit data derived from network packet 310 in units of complete cells, even if later portions of network packet 310 (e.g., intermediate portion 312 and/or final portion 313) have not yet been received by FPG 170A. In such an example, FPG 170F may determine, based on the timestamp associated with network packet 310 and other information about latency of networking unit 142, that there is little or no possibility of an underrun condition resulting from such early transmission of a cell containing data derived from initial portion 311 of network packet 310. As previously described, FPG 170F is able to determine, based on timestamp information, that remaining portions of network packet 310 (i.e., intermediate portion 312 and final portion 313) will be received by FPG 170F in sufficient time to continue uninterrupted transmission of data without causing an underrun condition. Once FPG 170F starts transmitting initial portion 311′, FPG 170A eventually receives the remaining portions of network packet 310 (i.e., intermediate portion 312 and final portion 313), and networking unit 142 processes such data. Packet switch block 173 assembles the data from intermediate portion 312 and final portion 313 into cells, and forwards such cells to FPG 170F for transmission over network port 171W. 
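The underrun-avoidance decision described above can be sketched as a simple timing check: transmission may begin early only if the already-buffered data, drained at the egress rate, lasts at least as long as it takes for the rest of the packet to arrive at the ingress rate plus the internal forwarding latency. The parameters and the exact inequality below are assumptions for illustration, not the actual hardware logic:

```python
def safe_to_start_transmit(buffered_bytes: int, remaining_bytes: int,
                           ingress_bps: float, egress_bps: float,
                           internal_latency_s: float) -> bool:
    """Return True if starting cut-through transmission now cannot underrun.

    A hypothetical check: the time to drain the already-buffered data at
    the egress rate must cover the time for the rest of the packet to
    arrive at the ingress rate plus the internal forwarding latency.
    """
    drain_time = buffered_bytes * 8 / egress_bps          # seconds of data on hand
    arrival_time = remaining_bytes * 8 / ingress_bps + internal_latency_s
    return drain_time >= arrival_time
```

For example, one buffered 256-byte cell drained at 25 Gbps lasts about 82 ns; if the remaining 1000 bytes arrive at 100 Gbps (80 ns) with negligible internal latency, early transmission is safe, but a 10 ns internal latency tips the check to unsafe.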
FPG 170F transmits the remaining data from network packet 310, in units of complete cells, over network port 171W to a receiving device in a sufficiently timely and/or uninterrupted manner to avoid an underrun condition or other errors.
Parser 420 parses packets and may accept input at fixed or variable rates. For instance, in the example of
Parser 420 may generate parsed result vector 421. For instance, in the example of
Forwarding pipeline 440 may process parsed result vector 421 to generate metadata 441. For instance, in the example of
Rewrite block 460 may, based on metadata 441, modify packet header 410. For instance, in the example of
In some examples, parser 420 may apply a timestamp to packet header 410 when header 410 is received as the first of a sequence of data that may constitute an entire network packet. Rewrite block 460 may, before transmitting modified packet 411, evaluate the timestamp applied by parser 420 to determine whether a sufficient amount of time has elapsed so that modified packet 411 may be transmitted to packet switch 480 without causing an underrun condition.
For instance, one or more devices of system 400 that may be illustrated as separate devices may alternatively be implemented as a single device; one or more components of system 400 that may be illustrated as separate components may alternatively be implemented as a single component. Also, in some examples, one or more devices of system 400 that may be illustrated as a single device may alternatively be implemented as multiple devices; one or more components of system 400 that may be illustrated as a single component may alternatively be implemented as multiple components. Each of the multiple devices and/or components may be directly coupled via wired or wireless communication and/or remotely coupled via one or more networks. Also, one or more devices or components that may be illustrated in
Further, certain operations, techniques, features, and/or functions may be described herein as being performed by specific components, devices, and/or modules in
Parser 420 may parse data received in header memory 502. For instance, in the example of
Content-addressable memory 508, which may be a ternary content-addressable memory, accepts as input data from both packet byte vector 504 and parser state storage 524, and performs a lookup in content-addressable memory 508. For instance, in the example of
Sequence machine 513 determines a new state and determines which bytes within packet byte vector 504 to parse. For instance, in the example of
Sequence machine 513 may store, within general purpose registers 526, information that may be used later to parse information in packet byte vector 504. For instance, in some examples, the DMAC value may be required in order to compute a new state, but the DMAC value may occur very early in the bitstream, well before the point at which it is needed. Accordingly, the DMAC value may be stored in general purpose registers 526 for later use. In such an example, therefore, sequence machine 513 identifies data within the incoming data stream being processed through packet byte vector 504 that may be used later, and stores the data within one or more of general purpose registers 526. At a later time, multiplexer 506 selects data from general purpose registers 526, and uses such information to perform a lookup in content-addressable memory 508. In some examples, sequence machine 513 controls multiplexer 506 to select data from either general purpose registers 526 or from packet byte vector 504 to use as a key for content-addressable memory 508.
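A ternary lookup of the kind performed by content-addressable memory 508 can be sketched in software. Each rule carries a state, a value, and a mask; the key may come from the packet byte vector or, in some cycles, from a general purpose register holding an earlier-extracted field. The rule format, EtherType values, and match indexes below are illustrative assumptions:

```python
def tcam_lookup(rules, state, key_bytes):
    """Ternary match: each rule is (state, value, mask, index); the first
    rule whose state matches and whose masked value equals the masked key
    wins. A software sketch of the hardware behavior, not the real format."""
    for rule_state, value, mask, index in rules:
        if rule_state == state and (key_bytes & mask) == (value & mask):
            return index
    return None

# Illustrative rules: in state 0, match an EtherType in the low 16 bits
# of the key and return a hypothetical match index.
RULES = [
    (0, 0x0800, 0xFFFF, 7),   # IPv4 -> match index 7
    (0, 0x86DD, 0xFFFF, 8),   # IPv6 -> match index 8
]

# Key taken from the packet byte vector (or from a general purpose
# register holding an earlier-extracted field such as the DMAC).
idx = tcam_lookup(RULES, state=0, key_bytes=0x0800)
```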
Action processor 511 determines actions that may be performed. For instance, in the example of
In some examples, action memory 510 and sequence memory 512 could be combined into one component, but in the example of
Parsed result vector 421 includes template 531, flag fields 532, field vector 533, and other fields. Template 531 identifies the structure of parsed result vector 421. In other words, fields within parsed result vector 421 may differ when parsing different types of packets, or in different situations. Template 531 may identify a specification for the structure of parsed result vector 421, so that the fields within parsed result vector 421 can be properly interpreted by later blocks processing a packet. In particular, template 531 may be used by forwarding pipeline 440 to determine how forwarding pipeline 440 should operate and how components within forwarding pipeline 440 should work together.
Flag fields 532 include various flags describing attributes of the packet. In some examples, including in the example illustrated in
Field vector 533 includes fields used by forwarding pipeline 440 to process the packet. Such fields may include a source and destination address, a source and destination port, type information, timestamp information, length information, and header byte offset address information. Other information may also be stored within field vector 533. In some examples, the timestamp information is used, as further described herein, for implementing a cut-through switching, enabling transmission of early portions of a network packet to be transmitted to a destination device, even before all remaining portions of the network packet are parsed by parser 420.
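The three-part structure of parsed result vector 421 described above (template, flags, field vector) can be sketched as a simple data structure. The field names and types below are illustrative assumptions; the actual hardware format packs these into a fixed-width vector:

```python
from dataclasses import dataclass, field

@dataclass
class ParsedResultVector:
    """Software sketch of a parsed result vector: a template id describing
    the layout, a set of flags, and a field vector of extracted values."""
    template: int                                   # identifies the PRV layout
    flags: dict = field(default_factory=dict)       # e.g. {"vlan_present": True}
    fields: dict = field(default_factory=dict)      # e.g. {"dmac": ..., "timestamp": ...}

prv = ParsedResultVector(template=3)
prv.flags["vlan_present"] = True
prv.fields["timestamp"] = 0x1234
```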
Eventually, after processing a sequence of data from a packet header, parser 420 may transition to a terminate state. In the terminate state, the packet header has been processed through packet byte vector 504, and parsed result vector 421 has been populated based on the information in the packet header. Once in the terminate state, parsed result vector 421 is ready to be and/or is waiting to be processed by forwarding pipeline 440.
Parser 420 serves as a flexible parser that performs a number of preprocessing operations to identify and process the relevant portions of header memory 502 as data is shifted through packet byte vector 504. Parser 420 may parse packets conforming to a variety of different formats, such formats encompassing various encapsulation types and/or header types at various layers of the Open Systems Interconnection (OSI) or TCP/IP model, for instance. As a flexible parser, parser 420 may be configured to not only operate with current packet formats (Ethernet, IPv4, IPv6, or others), but can also be configured to parse packet formats that may be used in the future. Rules for processing any such new packet formats can be programmed into content-addressable memory 508, action memory 510, and sequence memory 512. The rules may then be implemented by action processor 511 and sequence machine 513, and thereby generate an appropriate parsed result vector 421, with a structure specified by template 531 included within parsed result vector 421.
In some examples, and as described herein, parser 420 may use a combination of a TCAM and an action processor to parse different types of packet headers. The parser receives a packet-byte stream as input and prepares a parsed result vector (PRV). The PRV contains some hard fields but its structure is primarily soft, i.e., the extracted fields may be placed at soft offsets based on configured templates. In some examples, the output of parser 420 is a 96B parsed result vector 421 that includes a template identifier (e.g. a “template index”) for a template that describes the structure of the soft fields within the PRV. The template index can be later used to lookup a per-template action table to generate lookup keys to be used by downstream blocks in the forwarding pipeline (e.g., forwarding pipeline 440).
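The soft-offset scheme described above can be sketched as follows: the template index selects a per-template recipe describing where the extracted fields sit in the PRV's soft region, from which a downstream lookup key is assembled. The offsets and lengths below are invented for illustration:

```python
# Hypothetical per-template action table: for each template index, the byte
# offsets and lengths within the PRV soft region from which to build a
# downstream lookup key.
TEMPLATE_KEY_OFFSETS = {
    3: [(0, 4), (8, 2)],    # template 3: 4 bytes at offset 0, 2 bytes at offset 8
    5: [(4, 4)],            # template 5: 4 bytes at offset 4
}

def build_lookup_key(template: int, soft_region: bytes) -> bytes:
    """Concatenate the configured soft fields into a lookup key for the
    downstream forwarding pipeline. A sketch under assumed offsets."""
    return b"".join(soft_region[off:off + length]
                    for off, length in TEMPLATE_KEY_OFFSETS[template])

key = build_lookup_key(3, bytes(range(16)))
```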
In some examples, there is more than one use-case for parser 420. For instance, parser 420 may be deployed close to fabric-facing port groups to parse packets as they are received from the network. In this example, parser 420 may mainly work on the outer headers. The inner headers may be parsed mainly to derive an entropy hash. In this example, parser 420 may be located in FPG 170 of networking unit 142.
In another example, parser 420 may be reused for packets destined to the end-points after packet buffer 174 of networking unit 142. In this example, parser 420 is located in the ERP block of networking unit 142 and parses the inner headers (after decryption if packet was encrypted).
In another example, parser 420 may be deployed in the ETP block of networking unit 142 to parse arbitrary bytes from a virtual processor and generate a PRV that is understood by the NU forwarding pipe.
In some examples, the implementation illustrated in
In the example illustrated in
The size of the TCAM search key in the example of
In the example of
In addition to determining the next parser state, a set of actions may need to be performed on the bytes of the packet as referenced by packet pointer 505. These actions may involve extracting fields from packet byte vector 504 and populating parsed result vector 421. The actions can be pipelined and executed over multiple cycles. These actions are executed by action processor 511.
The throughput of parser 420 may, in some examples, be determined by the number of TCAM search cycles required to parse a packet. For example, if a packet has 54 B of protocol headers (14 B L2+20 B IP+20 B TCP), and the parser consumes 6 B on an average every cycle, a new packet can be parsed every 9 cycles, resulting in a 111 Mpps throughput. On the other hand, if the parser has to examine 128 B of headers and consumes only 4 B on average every cycle, the throughput reduces to 31.25 Mpps. On the line side, a single parser instance may handle a 25 G stream. This conservatively assumes that a single TCAM rule should be able to consume around 3 B of header. For typical networking headers, the rate of consumption can be much higher because a number of fields can be skipped over without much examination. In some examples, a TCAM rule match (e.g., performed by content-addressable memory 508) may consume at least 4 B of data that was used as the lookup key. Such a design may allow for some speedup to handle cases where 4 B might not be consumed in some cycles (e.g., when one or more general purpose registers 526 are used as the lookup key instead of portions of packet byte vector 504). The speedup is provided by having a PBV that is 12 B and allowing a TCAM action to access up to 8 B from packet pointer 505 (byte 0 of PBV). In some examples, a 2× speedup may be achieved over the required parsing rate if 8 bytes need to be skipped over or if extraction of fields within an 8 B segment can be pipelined without any dependencies. The extra 4 B in the PBV can be viewed as a prefetch of the key to be used in the next cycle in the event that 8 bytes are consumed in the current cycle. In some examples, the prefetching of data into packet byte vector 504 might not be visible to software. The actions of parser 420 could potentially move the packet pointer to any offset within the packet, potentially skipping a large number of bits.
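The throughput arithmetic above can be reproduced directly. The calculation assumes a 1 GHz parser clock, which is consistent with the figures quoted (9 cycles per packet yielding roughly 111 Mpps, 32 cycles yielding 31.25 Mpps); the clock rate itself is an inference, not stated in the text:

```python
CLOCK_HZ = 1e9   # assumed 1 GHz parser clock, consistent with the numbers above

def parser_throughput_mpps(header_bytes: int, bytes_per_cycle: float) -> float:
    """Packets per second (in Mpps) when each packet requires
    header_bytes / bytes_per_cycle TCAM search cycles."""
    cycles_per_packet = header_bytes / bytes_per_cycle
    return CLOCK_HZ / cycles_per_packet / 1e6

# 54 B of headers at 6 B/cycle -> 9 cycles  -> ~111 Mpps
# 128 B of headers at 4 B/cycle -> 32 cycles -> 31.25 Mpps
```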
Hardware may implement the interlocks to rate-match the parser to the incoming data rate and pull data into packet byte vector 504 when data is available. This allows the parser to handle variable speed streams and slow streams.
In the example illustrated in
A reorder buffer is necessary for supporting streams faster than 25 G (50 G and 100 G). Since packets of 50 G and 100 G streams are sprayed across parser contexts, the PRVs corresponding to the packets need to be reordered before forwarding them downstream to guarantee per-stream order out of the parser. The main source of packet reorder is the difference between the minimum and maximum times it takes to parse a packet. Parsing 192 B of packet header can take up to 48 cycles (before timeout) whereas parsing the smallest packet can be done in less than 10 cycles (although 16 cycles are allowed for a 64 B packet). This difference causes PRVs to be generated out of order.
In some examples, reorder buffer 562 may serve as a unified reorder buffer. For instance, the parser contexts (parsers 558) write the 96 B PRV to reorder buffer 562 over a 32 B interface. The write port to reorder buffer 562 is time-shared by the four parser contexts in the example of
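The reordering behavior described above can be sketched with a minimal software model: each packet is tagged with a sequence number at dispatch, and a completed PRV is released only when all earlier sequence numbers have also completed. The class below is an illustrative sketch, not the hardware design:

```python
class ReorderBuffer:
    """Release parsed result vectors in dispatch order even when parser
    contexts finish them out of order. A minimal software sketch."""
    def __init__(self):
        self.pending = {}      # seq -> completed PRV awaiting release
        self.next_seq = 0      # next sequence number to release

    def complete(self, seq: int, prv) -> list:
        """Record a finished PRV; return any PRVs now releasable in order."""
        self.pending[seq] = prv
        released = []
        while self.next_seq in self.pending:
            released.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return released

rob = ReorderBuffer()
out = []
out += rob.complete(1, "prv1")   # finished out of order: nothing released yet
out += rob.complete(0, "prv0")   # releases prv0, then the waiting prv1
```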
In the example of
For 25 G streams, the stream multiplexing scheme to the downstream forwarding block can be a little different to optimize latency. In such an example, instead of sending the PRV in the order of dispatch, the PRVs can be sent in the order of PRV generation by the contexts with a round-robin policy across PRVs generated in the same cycle.
Each block accepts parsed result vector 421 as input and passes parsed result vector 421 along to the next block. In some examples, parsed result vector 421 is passed along to a next block in forwarding pipeline 440 without being modified, but parsed result vector 421 is used to define and influence the operations performed by each of the flexible forwarding engines 604 within the pipeline. Each instance of flexible forwarding engine 604 can be customized based on its expected use through programming various key engines and action processor microcode memory. In some examples, one or more of flexible forwarding engines 604 may generate one or more search keys from fields within parsed result vector 421, and perform a search using the search keys against programmed rules stored within one or more tables included in each of flexible forwarding engines 604. For instance, fields of parsed result vector 421 may be used to perform a lookup to determine the next hop index or to identify an address. Based on the results of such operations, each of flexible forwarding engines 604 may incrementally modify metadata 441 as metadata 441 is passed through forwarding pipeline 440 from block to block. Although shown in a pipeline of sequential blocks, forwarding pipeline 440 may alternatively be configured to include one or more loops whereby a flexible forwarding engine may perform multiple operations in succession. Alternatively, or in addition, one or more flexible forwarding engines 604 may be placed in parallel rather than one after another. Accordingly, metadata 441 is modified and/or updated by one or more of the blocks of forwarding pipeline 440 as parsed result vector 421 and metadata 441 are passed along forwarding pipeline 440. In the example illustrated in
In the example illustrated in
Fixed function engines may include engines of various types. For instance, stream selection block 602 selects a stream and pushes the PRV into the forwarding pipe. Next hop block 606 may support a limited number of nexthops. Other types of blocks, such as a sample and forwarding result vector generation block (see, e.g., forwarding vector generator 608 of
Flexible forwarding engine 604 may perform operations to modify metadata 441. For instance, in the example of
Further, key engine 721 performs a lookup on small table 704, also based on parsed result vector 421, and action engine 741 performs an action on metadata 441 based on the results of the lookup performed by key engine 721. In some examples, key engine 721 performs the lookup on small table 704 and action engine 741 performs the action on metadata 441 concurrently and/or simultaneously with the lookup and action performed by key engine 711 and action engine 731.
In the example of
After performing various lookups and/or actions, flexible forwarding engine 604 outputs metadata 441′ (a modified version of metadata 441) to the next block in the pipeline. Flexible forwarding engine 604 may also output, to the next block, parsed result vector 421 without modification. In some examples, the next block may be another one of flexible forwarding engines 604 within forwarding pipeline 440, or in other examples, the next block may be a fixed function block (e.g., next hop block 606).
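One stage of this lookup-and-action pattern can be sketched as follows: a key is built from fields of the parsed result vector, looked up in a programmed table, and the resulting action is applied to the metadata, while the PRV itself passes through unmodified. The key choice, table contents, and metadata fields below are illustrative assumptions:

```python
def forwarding_engine_stage(prv: dict, metadata: dict, table: dict) -> dict:
    """One flexible-forwarding-engine stage: build a key from the parsed
    result vector, look it up in a programmed table, and apply the
    resulting action to a copy of the metadata. The PRV is not modified.
    A software sketch; the real engines use key engines and microcoded
    action processors."""
    key = (prv.get("dst_ip"),)          # hypothetical key from PRV fields
    action = table.get(key)
    updated = dict(metadata)            # incremental update, input left intact
    if action is not None:
        updated.update(action)          # e.g. set a next-hop index
    return updated

# Illustrative programmed rule: destination 10.0.0.1 maps to nexthop 42.
table = {("10.0.0.1",): {"nexthop": 42}}
meta = forwarding_engine_stage({"dst_ip": "10.0.0.1"}, {"ttl_dec": True}, table)
```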
In the example illustrated in
In the example of
In the example of
Each flexible forwarding engine 604 may be implemented with four action engines, as illustrated in
Rewrite block 614 executes, based on forwarding vector 612, rewrite instructions to modify packet header 410. While executing the instructions, rewrite block 614 may sample and/or access packet header 410 and/or parsed result vector 421. Rewrite block 614 outputs modified packet 620 to another block, such as packet switch 480 as illustrated in
In some examples, rewrite block 460 may generate two buses: “fwd_psw_ctl” and “fwd_psw_frv.” The “fwd_psw_ctl” bus carries control information for PSW (packet switch). In some examples, this information is not stored with the packet. The “fwd_psw_frv” bus may be 64 Bytes wide and may be stored with the packet in PSW packet memory. If the PSW stream is the FAE (forwarding acceleration engine) stream, then rewrite block 460 might not generate “fwd_psw_ctl” and “fwd_psw_frv,” and instead, it might send “fae_frv” to an FAE block.
In some examples, forwarding vector generator 608 may perform two main functions: (1) Rewrite Instructions Generation and (2) Packet Sample Decision. With respect to rewrite instruction generation, forwarding vector generator 608 may include "rewrite instruction memory" of, for example, 4096 entries, with each entry storing six rewrite instructions. The rewrite instruction memory may be configured by software to pair two consecutive single entries and create a double entry. This allows software to execute up to 12 rewrite instructions per packet. In some examples, software may be responsible for guaranteeing that each set of rewrite instructions will fit in the 32 Byte FRV rewrite instruction space. The address of the rewrite instruction memory may be generated in next hop block 606 of
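The single/double entry scheme above can be sketched as follows: a single entry yields up to six rewrite instructions, and a double entry pairs entry addr with entry addr+1 for up to twelve. The memory layout and instruction names below are illustrative assumptions:

```python
INSTRS_PER_ENTRY = 6   # six rewrite instructions per single entry

def fetch_rewrite_instructions(memory: list, addr: int, double: bool) -> list:
    """Fetch the rewrite instructions for a packet. A single entry yields
    up to six instructions; a double entry pairs entry addr with addr+1
    for up to twelve. A sketch of the scheme, not the hardware format."""
    instrs = list(memory[addr])
    if double:
        instrs += list(memory[addr + 1])
    return instrs

# Hypothetical memory with two consecutive entries paired as a double entry.
mem = [["i%d" % k for k in range(INSTRS_PER_ENTRY)],
       ["j%d" % k for k in range(INSTRS_PER_ENTRY)]]
twelve = fetch_rewrite_instructions(mem, 0, double=True)
```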
With respect to Packet Sample Decision functions, forwarding vector generator 608 may support a number of samplers (e.g., sixty-four, in one example). Each sampler can decide to make a “sample copy” of the packet. Each sampler has a set of parameters to determine the packet to be sampled. Software can use one of the samplers to perform “ingress port mirroring” or “egress port mirroring.”
In some examples, rewrite block 460 may modify underlay headers for different types of packets. Modifications to an outgoing packet may depend on many parameters. For example, such parameters may include the following: (1) Stack of packet headers (e.g., whether the incoming packet carries a C2T (CPU 2 Header) or whether the packet is an IPv4 or IPv6 packet), (2) Forwarding Type (whether the packet is being forwarded to the egress stream as Ethernet switch or whether the packet is being routed), (3) Egress Stream Type (if the packet is forwarded to the ERP stream, the modified packet should carry a T2N (TOR 2 NIC) or T2C (TOR 2 CPU) header, in addition to the other packet modifications), (4) Add or Remove LFA (loop-free alternate) tag (for some intra-cluster links within an access node 130 of
Further, in some examples, rewrite block 460 or logic associated with rewrite block 460 may, before transmitting modified packet 620, evaluate the timestamp included within parsed result vector 421 to determine whether a sufficient amount of time has elapsed so that modified packet 620 may be transmitted to a receiving device without causing an underrun condition. In some examples, such transmission may occur before all portions of the network packet are received. To ensure that an underrun condition does not occur, rewrite block 460 may evaluate the timestamp to determine, based on the latency of the system described herein (e.g.,
In the example of
Parser 420 may identify sequence instructions (902). For instance, in some examples, sequence memory 512 uses the match index received from content-addressable memory 508 to address memory within sequence memory 512. Based on the match index, sequence memory 512 identifies a series of instructions, stored within sequence memory 512, that can be executed by sequence machine 513.
Parser 420 may determine an updated parser state (903). For instance, in some examples, sequence machine 513 executes at least some of the identified instructions to determine a new parser state. Sequence machine 513 causes the new parser state to be stored in parser state storage 524 (904).
Parser 420 may also determine a new pointer reference (904). For instance, in some examples, sequence machine 513 executes additional identified instructions to determine how many bytes within packet byte vector 504 to advance packet pointer 505. Sequence machine 513 may, for example, determine that one or more bytes within packet byte vector 504 need not be processed and may be skipped. In such an example, sequence machine 513 may cause packet pointer 505 to advance beyond the four bytes used as input to content-addressable memory 508, as described above. After advancing packet pointer 505, parser 420 may resume the process at 901 to process additional bytes from packet byte vector 504.
In the example of
Action processor 511 may determine a structure for parsed result vector 421 (922). For instance, in some examples, action processor 511 may execute instructions that identify, based on packet byte vector 504, one or more network layer protocols (e.g., Ethernet, IPv4, IPv6) that are associated with the packet header within packet byte vector 504. Action processor 511 may also execute instructions that identify, based on packet byte vector 504, other attributes of the packet and/or the network (e.g., whether a VLAN header has been detected). Action processor 511 may identify, based on the information identified about the packet and/or the network, a structure for parsed result vector 421 that is appropriate for the packet and/or network. In some examples, one structure may be appropriate for some types of networks (e.g., those based on IPv4), and another type of structure may be appropriate for other types of networks (e.g., those based on IPv6). In other examples, one structure may be applied to multiple different types of networks (e.g., IPv4 and IPv6), and flags or settings within parsed result vector 421 may specify the type of applicable network.
Action processor 511 may store information about attributes of the packet. For instance, in some examples, action processor 511 may store, within parsed result vector 421, information about network layer protocols associated with the packet. Action processor 511 may also store, within parsed result vector 421, information about other attributes of the packet and/or the network. Action processor 511 may store such information at appropriate locations within parsed result vector 421 as defined by the structure of parsed result vector 421. For example, action processor 511 may store flags in flag fields 532 within parsed result vector 421. Action processor 511 may also store other information (e.g., an SMAC or DMAC Ethernet address) in appropriate areas, as defined by template 531, of field vector 533 within parsed result vector 421.
In the example of
Flexible forwarding engine 604 may perform an operation (942). For instance, in some examples, key engine 711 uses both parsed result vector 421 and metadata 441 to perform a lookup in large table 702. Large table 702 identifies one or more match indexes, or one or more addresses within large table 702. Action engine 731 performs operations specified by the match index or the one or more addresses. In some examples, action engine 731 may process an access control list, perform an address lookup, perform counting operations, and/or perform rate limiting functions. Many other operations may alternatively be performed. Further, key engine 721 may also use both parsed result vector 421 and metadata 441 to perform a lookup in small table 704. Small table 704 may identify one or more match indexes within small table 704. Action engine 741 may perform operations specified by the match index.
Flexible forwarding engine 604 may generate updated metadata (943). For instance, in some examples, action engine 731 generates data as a result of performing the one or more operations specified by the match index identified by large table 702. Action engine 731 may write the data to a bus on which metadata 441 is stored, as shown in
In some examples, action engine 732 and action engine 742 may also update metadata 441. For instance, action engine 732 and action engine 742 may each generate data as a result of performing an operation specified by a match index identified by large table 702 and small table 704, respectively. In such an example, action engine 732 and action engine 742 may each perform a second-stage lookup within large table 702 and small table 704. Each of action engine 732 and action engine 742 may write the generated data to the bus on which metadata 441 is stored, and further update metadata 441 through a second-stage operation.
Flexible forwarding engine 604 may output the updated metadata (944). For instance, in some examples, flexible forwarding engine 604 may output parsed result vector 421 and metadata 441 to a later flexible forwarding engine (or other block) within forwarding pipeline 440. In the example described, parsed result vector 421 is not modified by flexible forwarding engine 604. Metadata 441, however, has been modified by flexible forwarding engine 604 as a result of the operations performed by action engine 731, action engine 741, action engine 732, and/or action engine 742. Accordingly, flexible forwarding engine 604 may output metadata 441 as updated metadata 441′.
In the example of
Forwarding pipeline 440 may determine a series of sequential operations (962). For instance, in some examples, a plurality of flexible forwarding engines 604 may each be configured to perform one or more operations, and each of flexible forwarding engines 604 accepts, as input, the parsed result vector 421 selected by stream selection block 602. Each of flexible forwarding engines 604 may determine which operation to perform based on parsed result vector 421. For example, based on the template included within parsed result vector 421, each of flexible forwarding engines 604 may be programmed to perform a specific operation. Each of flexible forwarding engines 604 determines its operation based on state information included within metadata 441. In the example of
In some examples, one or more of flexible forwarding engines 604 might not perform any operation, and in such an example, might be configured simply as a pass-through block for parsed result vector 421 and metadata 441. Further, in some examples, one or more function blocks included within forwarding pipeline 440 may serve as fixed function blocks that perform the same function without regard to parsed result vector 421 and/or metadata 441. For instance, in the example of
Forwarding pipeline 440 may perform the operations to generate metadata (963). For instance, in some examples, each of flexible forwarding engines 604 within forwarding pipeline 440 performs an operation. As the result of each operation performed by each of flexible forwarding engines 604, each flexible forwarding engine 604 updates metadata 441 by, for example, writing data to a metadata bus received as input from a previous flexible forwarding engine 604. After updating the input metadata, each flexible forwarding engine outputs its updated metadata 441 to the next flexible forwarding engine 604 in forwarding pipeline 440.
Forwarding pipeline 440 may modify the packet based on the metadata (964). For instance, in some examples, after processing by each of the blocks (e.g., stream selection block 602, flexible forwarding engines 604, and next hop block 606) of forwarding pipeline 440 is complete, final metadata 441′ is passed to forwarding vector generator 608 (see
In the example of
Networking unit 142 may store timestamp information associated with receiving the initial portion of the packet (972). For instance, in
Networking unit 142 may identify an egress port of networking unit 142 for outputting information to a destination device (973). For instance, FPG 170A outputs parsed result vector 421 to forwarding block 172 and FPG 170A outputs initial portion 311 to packet switch block 173. Packet switch block 173 stores initial portion 311 within buffer 177 included within packet switch block 173. Forwarding block 172 performs one or more destination lookups to determine an egress port for network packet 310. In some examples, forwarding block 172 performs the lookups based on information stored within parsed result vector 421. Forwarding block 172 identifies network port 171W as the egress port for network packet 310.
Networking unit 142 may determine whether to transmit information (974). For instance, in
Networking unit 142 waits until later portions of network packet 310 are received at network port 171A (No path from 976). Eventually, networking unit 142 may receive later portions of network packet 310, including intermediate portion 312 and final portion 313 of network packet 310 (Yes path from 976). Upon receipt of intermediate portion 312 and/or final portion 313, networking unit 142 may process intermediate portion 312 and final portion 313 of network packet 310, and store intermediate portion 312 and final portion 313 within buffer 177 of packet switch block 173.
Networking unit 142 may transmit the information from the later portion of the network packet (977).
For processes, apparatuses, and other examples or illustrations described herein, including in any flowcharts or flow diagrams, certain operations, acts, steps, or events included in any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, operations, acts, steps, or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially. Further, certain operations, acts, steps, or events may be performed automatically even if not specifically identified as being performed automatically. Also, certain operations, acts, steps, or events described as being performed automatically may alternatively not be performed automatically, but rather may be, in some examples, performed in response to input or another event.
The detailed description set forth above is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a sufficient understanding of the various concepts. However, these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in the referenced figures in order to avoid obscuring such concepts.
In accordance with one or more aspects of this disclosure, the term “or” may be interpreted as “and/or” where context does not dictate otherwise. Additionally, while phrases such as “one or more” or “at least one” or the like may have been used in some instances but not others, those instances where such language was not used may be interpreted to have such a meaning implied where context does not dictate otherwise.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored, as one or more instructions or code, on and/or transmitted over a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (e.g., pursuant to a communication protocol). In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the terms “processor” or “processing circuitry” as used herein may each refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described. In addition, in some examples, the functionality described may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, a mobile or non-mobile computing device, a wearable or non-wearable computing device, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperating hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
This application claims the benefit of U.S. Provisional Patent Application No. 62/824,770 filed on Mar. 27, 2019, which is hereby incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
6091707 | Egbert | Jul 2000 | A |
7187694 | Liao | Mar 2007 | B1 |
8472452 | Goyal et al. | Jun 2013 | B2 |
8606959 | Goyal et al. | Dec 2013 | B2 |
8711861 | Goyal et al. | Apr 2014 | B2 |
8719331 | Goyal et al. | May 2014 | B2 |
8923306 | Bouchard et al. | Dec 2014 | B2 |
8934488 | Goyal et al. | Jan 2015 | B2 |
8937952 | Goyal et al. | Jan 2015 | B2 |
8937954 | Goyal et al. | Jan 2015 | B2 |
8954700 | Ansari et al. | Feb 2015 | B2 |
8995449 | Goyal et al. | Mar 2015 | B2 |
9031075 | Goyal et al. | May 2015 | B2 |
9130819 | Pangborn et al. | Sep 2015 | B2 |
9137340 | Goyal et al. | Sep 2015 | B2 |
9191321 | Goyal et al. | Nov 2015 | B2 |
9195939 | Goyal et al. | Nov 2015 | B1 |
9208438 | Goyal et al. | Dec 2015 | B2 |
9225643 | Goyal et al. | Dec 2015 | B2 |
9268855 | Goyal et al. | Feb 2016 | B2 |
9275336 | Goyal et al. | Mar 2016 | B2 |
9319316 | Ansari et al. | Apr 2016 | B2 |
9344366 | Bouchard et al. | May 2016 | B2 |
9391892 | Ansari et al. | Jul 2016 | B2 |
9432284 | Goyal et al. | Aug 2016 | B2 |
9497117 | Goyal et al. | Nov 2016 | B2 |
9525630 | Ansari et al. | Dec 2016 | B2 |
9531647 | Goyal et al. | Dec 2016 | B1 |
9531690 | Ansari et al. | Dec 2016 | B2 |
9531723 | Bouchard et al. | Dec 2016 | B2 |
9544402 | Worrell et al. | Jan 2017 | B2 |
9595003 | Bullis et al. | Mar 2017 | B1 |
9596222 | Goyal et al. | Mar 2017 | B2 |
9614762 | Goyal et al. | Apr 2017 | B2 |
9647947 | Goyal et al. | May 2017 | B2 |
9729527 | Goyal et al. | Aug 2017 | B2 |
9866540 | Bouchard et al. | Jan 2018 | B2 |
10528498 | Inoue | Jan 2020 | B2 |
10565112 | Noureddine et al. | Feb 2020 | B2 |
10659254 | Sindhu et al. | May 2020 | B2 |
20050165966 | Gai et al. | Jul 2005 | A1 |
20060010193 | Sikdar et al. | Jan 2006 | A1 |
20110116507 | Pais et al. | May 2011 | A1 |
20130282766 | Goyal et al. | Oct 2013 | A1 |
20140016486 | Schzukin | Jan 2014 | A1 |
20140098824 | Edmiston | Apr 2014 | A1 |
20140214159 | Vidlund et al. | Jul 2014 | A1 |
20140369363 | Hutchison et al. | Dec 2014 | A1 |
20160191306 | Gasparakis et al. | Jun 2016 | A1 |
20160283391 | Nilsson et al. | Sep 2016 | A1 |
20170063690 | Bosshart | Mar 2017 | A1 |
20180293168 | Noureddine et al. | Oct 2018 | A1 |
20190012278 | Sindhu et al. | Jan 2019 | A1 |
20190013965 | Sindhu et al. | Jan 2019 | A1 |
20190104206 | Goel et al. | Apr 2019 | A1 |
20190213151 | Inoue | Jul 2019 | A1 |
20190289102 | Goel et al. | Sep 2019 | A1 |
20190379770 | Thantry et al. | Dec 2019 | A1 |
20200120191 | Thantry et al. | Apr 2020 | A1 |
20200183841 | Noureddine et al. | Jun 2020 | A1 |
Number | Date | Country |
---|---|---|
2013019981 | Feb 2013 | WO |
2013019996 | Feb 2013 | WO |
2013020001 | Feb 2013 | WO |
2013020002 | Feb 2013 | WO |
2013020003 | Feb 2013 | WO |
2018020645 | Feb 2018 | WO |
Entry |
---|
Alicherry et al., “High Speed Pattern Matching for Network IDS/IPS,” Proceedings of IEEE International Conference on Network Protocols, Nov. 2006, pp. 187-196. |
Bosshart et al., “Forwarding Metamorphosis: Fast Programmable Match-Action Processing in Hardware for SDN,” Proceedings of the ACM SIGCOMM 2013 conference on SIGCOMM, Aug. 12-16, 2013, 12 pp. |
Gibb et al., “Design Principles for Packet Parsers,” Architectures for Networking and Communications Systems, IEEE, Oct. 21-22, 2013, 12 pp. |
Kozanitis et al., “Leaping Multiple Headers in a Single Bound: Wire-Speed Parsing Using the Kangaroo System,” INFOCOM'10 Proceedings of the 29th conference on Information communications, Mar. 14, 2010, 9 pp. |
Tsai et al., “A Flexible Wildcard-Pattern Matching Accelerator via Simultaneous Discrete Finite Automata,” IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 25, No. 12, Dec. 2017, pp. 3302-3316. |
U.S. Appl. No. 16/877,050, filed May 15, 2020, naming inventors Sindhu et al. |
International Search Report and Written Opinion of International Application No. PCT/US2020/020817, dated May 18, 2020, 15 pp. |
International Preliminary Report on Patentability from International Application No. PCT/US2020/020817, dated Oct. 7, 2021, 12 pp. |
Number | Date | Country | |
---|---|---|---|
20200314030 A1 | Oct 2020 | US |
Number | Date | Country | |
---|---|---|---|
62824770 | Mar 2019 | US |