The invention relates to the field of network devices. In particular, the invention relates to software-defined data center devices, systems and methods.
The Software-Defined Networking (SDN) paradigm promises to address modern data center needs with fine-grained control over the network. However, fixed-pipeline switches do not provide the level of flexibility and programmability required by Software-Defined Data Center (SDDC) architectures to optimize the underlying networks. Specifically, although SDDC architectures put applications at the center of innovation, the full capability of these applications is thwarted by rigid pipelines dictated by networking gear. For example, the applications are forced to be designed around existing protocols, which slows down innovation.
Embodiments of the invention are directed to a software-defined network (SDN) system, device and method that comprise one or more input ports, a programmable parser, a plurality of programmable lookup and decision engines (LDEs), programmable lookup memories, programmable counters, a programmable rewrite block and one or more output ports. The programmability of the parser, LDEs, lookup memories, counters and rewrite block enables a user to customize each microchip within the system to particular packet environments, data analysis needs, packet processing functions and other functions as desired. Further, the same microchip is able to be reprogrammed for other purposes and/or optimizations dynamically. Moreover, by providing a programmable pipeline with flexible table management, the protocol independent programmable switch (PIPS) enables a software-defined method to address many packet processing needs.
A first aspect is directed to a switch microchip for a software-defined network. The microchip comprises a programmable parser that parses desired packet context data from headers of a plurality of incoming packets, wherein the headers are recognized by the parser based on a software-defined parse graph of the parser, one or more lookup memories having a plurality of tables, wherein the lookup memories are configured as a logical overlay such that the scaling and width of the lookup memories are software-defined by a user, a pipeline of a plurality of programmable lookup and decision engines that receive and modify the packet context data based on data stored in the lookup memories and software-defined logic programmed into the engines by the user, a programmable rewrite block that, based on the packet context data received from one of the engines, rebuilds and prepares the packet headers as processed within the switch for output, and a programmable counter block used for counting operations of the lookup and decision engines, wherein the operations that are counted by the counter block are software-defined by the user. In some embodiments, starting from the same initial node of the parse graph, each path through the parse graph represents a combination of layer types of one of the headers that is able to be recognized by the parser. In some embodiments, portions of the paths are able to overlap. In some embodiments, the rewrite block expands each layer of each of the headers parsed by the parser to form an expanded layer type of a generic size based on a protocol associated with the layer. In some embodiments, the rewrite block generates a bit vector that indicates which portions of the expanded layer type contain valid data and which portions of the expanded layer type contain data added during the expanding by the rewrite block. In some embodiments, the tables of the lookup memories are each able to be independently set in hash, direct access or longest prefix match operational modes. In some embodiments, the tables of the lookup memories are able to be dynamically reformatted and reconfigured by the user such that a number of tiles of the lookup memories partitioned and allocated for lookup paths coupled to the lookup memories is based on the memory capacity needed by each of the lookup paths. In some embodiments, each of the lookup and decision engines comprises a Key Generator configured to generate a set of lookup keys for each input token and an Output Generator configured to generate an output token by modifying the input token based on content of lookup results associated with the set of lookup keys. In some embodiments, each of the lookup and decision engines comprises an Input Buffer for temporarily storing input tokens before the input tokens are processed by the lookup and decision engine, a Profile Table for identifying positions of fields in each of the input tokens, a Lookup Result Merger for joining the input token with the lookup result and for sending the joined input token with the lookup result to the Output Generator, a Loopback Checker for determining whether the output token should be sent back to the current lookup and decision engine or to another lookup and decision engine, and a Loopback Buffer for storing loopback tokens. In some embodiments, Control Paths of both the Key Generator and the Output Generator are programmable such that users are able to configure the lookup and decision engine to support different network features and protocols.
In some embodiments, the counter block comprises N wrap-around counters, wherein each of the N wrap-around counters is associated with a counter identification and an overflow FIFO used and shared by the N wrap-around counters, wherein the overflow FIFO stores the associated counter identifications of all counters that are overflowing.
A second aspect is directed to a method of operating a switch microchip for a software-defined network. The method comprises parsing desired packet context data from headers of a plurality of incoming packets with a programmable parser, wherein the headers are recognized by the parser based on a software-defined parse graph of the parser, receiving and modifying the packet context data with a pipeline of a plurality of programmable lookup and decision engines based on data stored in lookup memories having a plurality of tables and software-defined logic programmed into the engines by a user, transmitting one or more data lookup requests to and receiving processing data based on the requests from the lookup memories with the lookup and decision engines, wherein the lookup memories are configured as a logical overlay such that the scaling and width of the lookup memories are software-defined by the user, performing counting operations based on actions of the lookup and decision engines with a programmable counter block, wherein the counter operations that are counted by the counter block are software-defined by the user, and rebuilding the packet headers as processed within the switch with a programmable rewrite block for output, wherein the rebuilding is based on the packet context data received from one of the lookup and decision engines. In some embodiments, starting from the same initial node of the parse graph, each path through the parse graph represents a combination of layer types of one of the headers that is able to be recognized by the parser. In some embodiments, portions of the paths are able to overlap. In some embodiments, the rewrite block expands each layer of each of the headers parsed by the parser to form an expanded layer type of a generic size based on a protocol associated with the layer. In some embodiments, the rewrite block generates a bit vector that indicates which portions of the expanded layer type contain valid data and which portions of the expanded layer type contain data added during the expanding by the rewrite block. In some embodiments, the tables of the lookup memories are each able to be independently set in hash, direct access or longest prefix match operational modes. In some embodiments, the tables of the lookup memories are able to be dynamically reformatted and reconfigured by the user such that a number of tiles of the lookup memories partitioned and allocated for lookup paths coupled to the lookup memories is based on the memory capacity needed by each of the lookup paths. In some embodiments, each of the lookup and decision engines comprises a Key Generator configured to generate a set of lookup keys for each input token and an Output Generator configured to generate an output token by modifying the input token based on content of lookup results associated with the set of lookup keys. In some embodiments, each of the lookup and decision engines comprises an Input Buffer for temporarily storing input tokens before the input tokens are processed by the lookup and decision engine, a Profile Table for identifying positions of fields in each of the input tokens, a Lookup Result Merger for joining the input token with the lookup result and for sending the joined input token with the lookup result to the Output Generator, a Loopback Checker for determining whether the output token should be sent back to the current lookup and decision engine or to another lookup and decision engine, and a Loopback Buffer for storing loopback tokens.
In some embodiments, Control Paths of both the Key Generator and the Output Generator are programmable such that users are able to configure the lookup and decision engine to support different network features and protocols. In some embodiments, the counter block comprises N wrap-around counters, wherein each of the N wrap-around counters is associated with a counter identification and an overflow FIFO used and shared by the N wrap-around counters, wherein the overflow FIFO stores the associated counter identifications of all counters that are overflowing.
A third aspect is directed to a top of rack switch microchip. The microchip comprises a programmable parser that parses desired packet context data from headers of a plurality of incoming packets, wherein the headers are recognized by the parser based on a software-defined parse graph of the parser and wherein, starting from the same initial node of the parse graph, each path through the parse graph represents a combination of layer types of one of the headers that is able to be recognized by the parser, one or more lookup memories having a plurality of tables, a Key Generator configured to generate a set of lookup keys for each input token and an Output Generator configured to generate an output token by modifying the input token based on content of lookup results associated with the set of lookup keys, wherein the lookup memories are configured as a logical overlay such that the scaling and width of the lookup memories are software-defined by a user, and further wherein each of the lookup memories is configured to selectively operate in hash, direct access or longest prefix match operational modes, a pipeline of a plurality of programmable lookup and decision engines that receive and modify the packet context data based on data stored in the lookup memories and software-defined logic programmed into the engines by the user, a programmable rewrite block that, based on the packet context data received from one of the engines, rebuilds and prepares the packet headers as processed within the switch for output, wherein the rewrite block expands each layer of each of the headers parsed by the parser to form an expanded layer type of a generic size based on a protocol associated with the layer, and a programmable counter block used for counting operations of the lookup and decision engines, wherein the counter block comprises N wrap-around counters, wherein each of the N wrap-around counters is associated with a counter identification, and an overflow FIFO used and shared by the N wrap-around counters, wherein the overflow FIFO stores the associated counter identifications of all counters that are overflowing, and further wherein the operations that are performed by the counter block are software-defined by the user.
Embodiments of the software-defined network (SDN) system, device and method comprise one or more input ports, a programmable parser, a plurality of programmable lookup and decision engines (LDEs), programmable lookup memories, programmable counters, a programmable rewrite block and one or more output ports. The programmability of the parser, LDEs, lookup memories, counters and rewrite block enables a user to customize each microchip within the system to particular packet environments, data analysis needs, packet processing functions and other functions as desired. Further, the same microchip is able to be reprogrammed for other purposes and/or optimizations dynamically. As a result, the system provides the ability to programmatically customize the performance of the system, creating unified hardware and software that can be used in various deployments. Further, it allows deployment optimization tailored to application-specific needs. In other words, the system's software-defined flexibility provides the ability to customize the same switch microchip such that it provides the same high bandwidth and high port density despite being positioned in multiple different places within a network.
Parser/Rewrite
The parser 104 is able to include one or more parser engines to identify contents of network packets and the rewriter 112 is able to include one or more rewrite engines to modify packets before they are transmitted out from the network switch. The parser engine(s) and the rewrite engine(s) are flexible and operate on a programmable basis. In particular, the parser 104 is able to decode packets and extract internal programmable layer information (as described in detail below), wherein the internal layer information is used by the system 100 to make forwarding decisions for that packet through the pipeline. Additionally, as described below, the rewriter 112 performs transformations on this internal layer information in order to modify the packet as needed. As described above, the system 100 also includes a memory (e.g. lookup memories 108) to store data used by the system 100. For example, the memory is able to store a set of generic commands that are used to modify protocol headers. As another example, the memory is also able to store software-defined mappings of generic formats of protocols in the form of a parse map (or table), wherein each protocol header is represented according to one of the software-defined mappings that is specific to a corresponding protocol. As will become evident, these mappings are able to be used to identify different variations of a protocol as well as different protocols, including new protocols that were not previously known. In some embodiments, the parse map includes layer information of each protocol layer of each protocol layer combination that is programmed into the parse map (or table).
In Ethernet, packets include multiple protocol layers. Each protocol layer carries different information. Some examples of well-known layers are Ethernet; PBB Ethernet; ARP; IPv4; IPv6; MPLS; FCoE; TCP; UDP; ICMP; IGMP; GRE; ICMPv6; VxLAN; TRILL and CNM. Theoretically, the protocol layers can occur in any order. However, only some well-known combinations of these layers occur. Some examples of valid combinations of these layers are Ethernet; Ethernet, ARP; Ethernet, CNM; Ethernet, FCoE; Ethernet, IPv4; Ethernet, IPv4, ICMP; and Ethernet, IPv4, IGMP.
In some embodiments, the network switch supports 17 protocols and eight protocol layers. There are therefore 8^17 possible protocol layer combinations. A packet can include a three protocol layer combination such as Ethernet, IPv4 and ICMP. For another example, a packet can include a longer protocol layer combination such as Ethernet, IPv4, UDP, VxLAN, Ethernet and ARP. Although there are 8^17 possible protocol layer combinations, only some well-known combinations of these layers occur. In some embodiments, all known protocol layer combinations are uniquely identified and translated into a unique number called the packet identifier (PktID). A parse table stored in the memory of the network switch is able to be programmed to include layer information of each layer of each known protocol layer combination. In practice, this local parse table includes less than 256 protocol layer combinations. In some embodiments, this local table includes 212 known protocol layer combinations. The local table is able to be dynamically re-programmed to include more or fewer protocol layer combinations.
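By way of illustration only, and not as part of the disclosed hardware, the PktID mapping described above can be modeled as a small lookup structure. In the following Python sketch, the layer combinations and the PktID numbering are hypothetical:

```python
# Hypothetical model of a local parse table that maps known protocol layer
# combinations to unique packet identifiers (PktIDs). The combinations and
# the numbering below are illustrative assumptions, not values from the switch.
PARSE_TABLE = {
    ("Ethernet",): 0,
    ("Ethernet", "ARP"): 1,
    ("Ethernet", "IPv4"): 2,
    ("Ethernet", "IPv4", "ICMP"): 3,
    ("Ethernet", "IPv4", "UDP", "VxLAN", "Ethernet", "ARP"): 4,
}

def pkt_id(layers):
    """Return the unique PktID for a known layer combination, or None."""
    return PARSE_TABLE.get(tuple(layers))

print(pkt_id(["Ethernet", "IPv4", "ICMP"]))  # -> 3
```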
In some embodiments, the parser and/or rewriter described herein are able to be the same as the parser and/or rewriter described in U.S. patent application Ser. No. 14/309,603, entitled “Method of modifying packets to generate format for enabling programmable modifications and an apparatus thereof,” and filed Jun. 19, 2014, which is hereby incorporated by reference. In some embodiments, the parser described herein is able to be the same as the parser described in U.S. patent application Ser. No. 14/675,667, entitled “A parser engine programming tool for programmable network devices,” and filed Mar. 31, 2015, which is hereby incorporated by reference.
Parser
In order for the parser engine 99 to be able to perform the above parsing functions, it is able to be programmed by a parse programming tool such that any type of header data (e.g. a header comprising one or more header layer types) within a specified range of possible header data is able to be properly parsed by the parser engine 99. As a result, the programming tool is configured to read the input configuration file and automatically (based on the data within the file) generate a set of values necessary to program the parser engine 99 to handle all of the possible header data represented by the configuration file.
The configuration file indicates the range of possible header data that the parser engine 99 needs to be able to parse by describing a directed cyclical graph or parse tree of the possible header data.
In order to determine all the possible paths through the cyclical graph 300, the tool is able to walk the graph or tree 300 using a modified depth-first search. In particular, starting from one of the nodes 302, the programming tool walks down one of the possible paths through the graph or tree 300 (as permitted by the directional connections) until the tool reaches a terminating node (e.g. a node with no outgoing branches 304) or the starting node (e.g. when a loop has been completed). Alternatively, in some embodiments, even if the starting node is reached, the programming tool is able to continue until a terminating node is reached or the starting node is reached a second or more times. In any case, during the “walk,” the tool is able to sequentially add the data associated with each node 302 and branch 304 traversed to a stack such that the stack includes a journal or list of the path taken. When the terminating node or starting node 302 is reached, the current stack is determined and saved as a complete path, and the process is repeated to find a new complete path until all of the possible paths and their associated stacks have been determined. In this way, each of the combinations of headers that are able to form the header data 202 of a packet 200 is represented by one of the paths, such that the programming tool provides the advantage of automatically identifying all of the possible header data 202 based on the input configuration file. In some embodiments, one or more of the header combinations or paths determined by the tool are able to be omitted. Alternatively, all of the headers possible within the graph or tree 300 are able to be included.
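The walk described above can be sketched in a few lines. The graph encoding below (adjacency lists keyed by node name) and the example parse graph are assumptions for illustration, not the tool's actual data structures:

```python
# Illustrative sketch of the modified depth-first walk: starting from a start
# node, follow directed branches, journaling each node on a stack, and record
# a complete path whenever a terminating node (no outgoing branches) or the
# start node is reached again.
def enumerate_paths(graph, start):
    paths = []

    def walk(node, stack):
        stack.append(node)
        # Terminating node, or a loop back to the start: save the journaled path.
        if not graph.get(node) or (node == start and len(stack) > 1):
            paths.append(list(stack))
        else:
            for nxt in graph[node]:
                # Allow revisiting the start once (a completed loop) but avoid
                # re-walking other nodes already on this path.
                if nxt == start or nxt not in stack:
                    walk(nxt, stack)
        stack.pop()

    walk(start, [])
    return paths

# Hypothetical parse graph: Ethernet branches to ARP or IPv4, and so on.
g = {"eth": ["arp", "ipv4"], "ipv4": ["icmp", "udp"], "arp": [], "icmp": [], "udp": []}
for p in enumerate_paths(g, "eth"):
    print(p)
```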
Finally, the parser programming tool is able to store the TCAM and SRAM values in the assigned TCAM 204 and SRAM 206 pairs of each of the KPUs 202 of the parser 104 such that the parser 104 is able to parse all of the possible headers 202 indicated within the graph or tree 300 of the input configuration file.
Rewrite
The information for each protocol layer is able to comprise the following: Layer Type, Layer Data Offset and Miscellaneous Information. However, more information can be stored in the local table 500. Briefly, the Layer Type refers to an associated protocol (e.g., IP/TCP/UDP/Ethernet) of the protocol layer, the Layer Data Offset provides a start location of layer data in the protocol layer, and the Miscellaneous Information includes data such as checksum and length data. Upon parsing an incoming packet, the parser engine is able to identify the PktID of the incoming packet based on the parse table. Specifically, each combination of layer types that makes up a packet header has a unique PktID. The rewrite engine uses the PktID as a key into the parse table, which gives the rewrite engine all the information needed to generalize each protocol layer of the packet for modification. In other words, the rewrite engine uses the PktID to access or retrieve information for each of the protocol layers in the packet from the parse table, instead of receiving parsed results from the parser engine.
Layer Type. The unique combination of the Layer Type and a hash on one or more fields of the packet provides the rewrite engine with a “generic format” for each protocol layer. In some embodiments, this unique combination specifies one of the software-defined mappings of generic formats of protocols that are stored in the memory. The generic format is used by the rewrite engine to expand the protocol layers and to modify the protocol layers using software commands. This information also tells the rewrite engine where each protocol layer starts within the packet.
Layer Data Offset. The rewrite engine uses data to modify an incoming header layer. This data can be spread anywhere in the packet. Since layer sizes can vary, so can the offsets to the data that the rewrite engine needs to use during modifications, which limits hardware flexibility on what data the rewrite engine can pick up and from where.
Extracted data from incoming packet headers is arranged in a layered manner. The extracted data structure is arranged such that the starting offset of each layer data structure is unique per PktID. The Layer Data Offset of each layer is used to identify the location of the extracted data for modifications. Since the structure of the layers within a packet and the locations of the extracted data from the layers are identified through the PktID of the packet, software and hardware use the same unique identifier to manage the extracted data, which simplifies the commands in the rewrite engine. Miscellaneous Information. Information, such as checksum and length data, tells the rewrite engine about special handling requirements, such as checksum re-calculation and header length update, for the associated protocol layer.
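As a minimal sketch of how the three per-layer fields described above might be organized and retrieved by PktID, consider the following; the PktID value, offsets and flags are invented for illustration:

```python
# Hypothetical layout of per-layer information keyed by PktID, mirroring the
# Layer Type, Layer Data Offset and Miscellaneous Information fields.
LAYER_INFO = {
    3: [  # PktID 3: Ethernet, IPv4, ICMP (example numbering)
        {"layer_type": "Ethernet", "data_offset": 0,  "misc": {}},
        {"layer_type": "IPv4",     "data_offset": 14, "misc": {"checksum": True, "length": True}},
        {"layer_type": "ICMP",     "data_offset": 34, "misc": {"checksum": True}},
    ],
}

def layers_for(pkt_id):
    # The rewrite engine keys the parse table with the PktID instead of
    # receiving parsed results from the parser engine.
    return LAYER_INFO[pkt_id]

for layer in layers_for(3):
    print(layer["layer_type"], "starts at byte", layer["data_offset"])
```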
The generalized protocol header can be modified by applying at least one command to the generalized protocol header. In some embodiments, the generalized protocol header is modified by creating a bit vector using the information to determine a location of data that is used to modify the generalized protocol header. In particular, each bit of the bit vector represents whether a byte of the header is valid or was added (during the expansion/generalization) in order to fill in for a missing field (e.g. an optional field of the header protocol that was not used). The rewrite engine generalizes the protocol header and modifies the generalized protocol header. Each protocol layer has a respective protocol. More or fewer protocol layers are possible as indicated above. The rewrite engine is able to detect missing fields from any of the protocol headers and to expand each protocol header to its generic format. A generalized/canonical layer refers to a protocol layer that has been expanded to its generic format. Briefly, each canonical layer includes a bit vector with bits marked as 0 for invalid fields and bits marked as 1 for valid fields.
The rewrite engine not only uses the bit vector for each protocol header to allow expansion of the protocol header based on a generic format for modification, but also uses the bit vector to allow collapse of the protocol header from the generic format to a “regular” header. Typically, each bit in the bit vector represents a byte of the generalized protocol header. A bit marked as 0 in the bit vector corresponds to an invalid byte, while a bit marked as 1 in the bit vector corresponds to a valid byte. The rewrite engine uses the bit vector to remove all the invalid bytes after all commands have been operated on the generalized protocol header to thereby form a new protocol header. The rewrite engine therefore uses bit vectors to allow expansion and collapse of protocol headers of packets, thereby enabling flexible modification of the packets by using a set of generic commands. Thus, the rewrite block provides the benefit of being programmable such that a user is able to assemble a modification of the packet that suits their needs (e.g. expansion, collapse or other software-defined packet modification by the rewrite block).
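The expand/collapse behavior can be illustrated with a short sketch. The generic size and byte positions below are assumed values, not the actual canonical layouts:

```python
# Sketch of the expand/collapse mechanism: a header is expanded to a
# fixed-size generic format, a bit vector marks which bytes are valid (1)
# versus filler added for missing optional fields (0), and collapsing strips
# the invalid bytes again after modification.
def expand(header, generic_size, valid_positions):
    """Place real header bytes at their generic positions; pad elsewhere."""
    canonical = bytearray(generic_size)
    bit_vector = [0] * generic_size
    for byte, pos in zip(header, valid_positions):
        canonical[pos] = byte
        bit_vector[pos] = 1
    return canonical, bit_vector

def collapse(canonical, bit_vector):
    """Drop every byte whose bit is 0, yielding the regular header again."""
    return bytes(b for b, v in zip(canonical, bit_vector) if v)

hdr = bytes([0xAA, 0xBB, 0xCC])
canon, bv = expand(hdr, generic_size=6, valid_positions=[0, 1, 4])
assert collapse(canon, bv) == hdr
print(list(canon), bv)
```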
Lookup and Decision Engines
Lookup and Decision Engines 106 are able to generate lookup keys for input tokens and to modify the input tokens based on lookup results such that the corresponding network packets can be correctly processed and forwarded by other components in the system 100. The conditions and rules for generating keys and modifying tokens are fully programmable by software and are based on network features and protocols configured for the LDE 106. The LDE 106 includes two main blocks: a Key Generator and an Output Generator. As their names suggest, the Key Generator generates a set of lookup keys for each input token, and the Output Generator generates an output token, which is a modified version of the input token based on the lookup results. The Key Generator and the Output Generator have a similar design architecture, which includes a Control Path and a Data Path. The Control Path examines whether specific fields and bits in its input satisfy conditions of the configured protocols. Based on the examination outcomes, it generates instructions accordingly. The Data Path executes all instructions produced by the Control Path for generating the set of lookup keys in the Key Generator or for generating the output token in the Output Generator. The conditions and rules for key and output generation are fully programmable in the Control Paths of the Key Generator and the Output Generator. In other words, the LDE 106 enables programmable formation of an Input Key to be used for matching against the lookup memory and programmable formation of the Output Key for results returning from the lookup memory, along with the merging of the Input token with the lookup table result to form the Output token that is passed to the next addressable LDE.
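A highly simplified software model of the Control Path/Data Path split might look like the following; the token fields, conditions and instructions are hypothetical stand-ins for the programmable rules, not the hardware's instruction set:

```python
# Simplified model: a control path evaluates programmable conditions on the
# token and emits instructions; a data path executes them to build a key.
def control_path(token, rules):
    """Return the instructions whose programmed condition matches the token."""
    return [instr for cond, instr in rules if cond(token)]

def data_path(token, instructions):
    """Execute instructions: here, each instruction extracts one key field."""
    return tuple(instr(token) for instr in instructions)

# Hypothetical rules: build an IPv4 key for 0x0800 tokens, an ARP key for 0x0806.
rules = [
    (lambda t: t.get("ethertype") == 0x0800, lambda t: ("dst_ip", t["dst_ip"])),
    (lambda t: t.get("ethertype") == 0x0806, lambda t: ("arp_tpa", t.get("arp_tpa"))),
]

token = {"ethertype": 0x0800, "dst_ip": "10.0.0.1"}
lookup_key = data_path(token, control_path(token, rules))
print(lookup_key)  # (('dst_ip', '10.0.0.1'),)
```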
The LDE 106 also includes an Input FIFO for temporarily storing the input tokens, a Lookup Result Collector/Merger for collecting the lookup results for the lookup keys, a Loopback Check for sending an output token back to the LDE 106 in the case where multiple serial lookups are required for that token at the same LDE 106, and a Loopback FIFO for storing loopback tokens. The loopback path has higher priority than the input path to guarantee deadlock freedom.
In some embodiments, the LDEs described herein are able to be the same as the LDEs described in U.S. patent application Ser. No. 14/144,270, entitled “Apparatus and Method of Generating Lookups and Making Decisions for Packet Modifying and Forwarding in a Software-Defined Network Engine,” and filed Dec. 30, 2013, which is hereby incorporated by reference. Additionally, the Key Generator and the Output Generator are similarly configured as an SDN processing engine discussed in U.S. patent application Ser. No. 14/144,260, entitled “Method and Apparatus for Parallel and Conditional Data Manipulation in a Software-Defined Network Processing Engine,” and filed Dec. 30, 2013, which is hereby incorporated by reference.
The LDE 106 can receive the input tokens from a Parser. The Parser parses headers of each network packet and outputs an input token for each network packet. An input token has a predefined format such that the LDE 106 will be able to process the input token. The LDE 106 can also receive the input tokens from a previous LDE if multiple LDEs are coupled in a chain for performing, in serial, multiple lookup and token modification steps.
The input tokens received at the LDE 106 from an upstream Parser or an upstream LDE are first buffered inside an Input FIFO 805. The input tokens wait inside the Input FIFO 805 until the LDE is ready to process them. If the Input FIFO 805 is full, the LDE 106 notifies the source of the input tokens (i.e., an upstream Parser or an upstream LDE) to stop sending new tokens.
Positions of fields in each input token are identified by looking up from a table, namely Template Lookup block 810. The input tokens are next sent to a Key Generator 815. The Key Generator 815 is configured to pick up specific data in the input tokens for building the lookup keys. Configuration of the Key Generator 815 is user-defined and depends on network features and protocols users want the LDE 106 to perform.
A lookup key (or set of lookup keys) per each input token is output from the Key Generator 815 and is sent to a remote Search Engine (not illustrated). The remote Search Engine can perform multiple configurable lookup operations such as TCAM, direct-access, hash-based and longest prefix matching lookup. For each lookup key sent to the remote Search Engine, a lookup result is returned to the LDE 106 at a Lookup Result Collector/Merger 820.
While generating a lookup key (or set of lookup keys) for each input token, the Key Generator 815 also passes the input token to the Lookup Result Collector/Merger 820. The input token is buffered inside the Lookup Result Collector/Merger 820. The input token waits inside the Lookup Result Collector/Merger 820 until the lookup result is returned by the remote Search Engine. Once the lookup result is available, the input token along with the lookup result are sent to an Output Generator 825.
Based on the lookup result and content of the input token, the Output Generator 825 modifies one or several fields of the input token before sending the modified token to output. Similar to the Key Generator 815, configuration of the Output Generator 825 regarding, for example, conditions and rules for token modification, is user-defined and depends on network features and protocols users want the LDE 106 to perform.
After the token is modified, the modified token is sent to a Loopback Checker 830. The Loopback Checker 830 determines whether the modified token should be either sent back to the current LDE for doing another lookup or sent to another engine in the associated SDN system. This loopback check is a design option that advantageously allows a single LDE to perform multiple lookups in serial for the same token rather than using multiple engines to do the same. This design option is useful in a system with a limited number of LDEs due to limitations, such as chip area budget. Tokens sent back to the current LDE are buffered inside a Loopback FIFO 835 via a loopback path 840. The loopback path 840 always has higher priority than the input path (e.g., from the Input FIFO 805) to avoid deadlock.
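The strict loopback-over-input priority can be sketched as a two-queue arbiter; the queue contents here are illustrative:

```python
# Sketch of the arbitration described above: the loopback FIFO is always
# drained before the input FIFO, so tokens re-circulating for another lookup
# cannot be starved, which avoids deadlock.
from collections import deque

input_fifo = deque(["tok_A", "tok_B"])
loopback_fifo = deque(["tok_LB"])

def next_token():
    # Strict priority: loopback path first, then the upstream input path.
    if loopback_fifo:
        return loopback_fifo.popleft()
    if input_fifo:
        return input_fifo.popleft()
    return None

print(next_token())  # tok_LB is served before tok_A
```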
Lookup Memories
When data requests/lookups are made to the lookup memories 108 by the LDEs 106 or other components of the system 100, the system 100 supports multiple parallel lookups that share a pool of the lookup memories 108. The number of memories 108 reserved for each lookup is programmable/reconfigurable based on the memory capacity needed by that lookup. In other words, the lookup memories 108 are able to be dynamically reconfigured for capacity and logical functionality. In addition, each lookup can be configured to perform as a hash-based lookup or direct-access lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles. The tiles in the set are not shared with other sets such that all lookups are able to be performed in parallel without collision. The system 100 also includes reconfigurable connection networks which are programmed based on how the tiles are allocated for each lookup.
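A minimal sketch of the non-overlapping tile allocation, assuming a simple first-fit policy (in the actual system the allocation is programmed by the user), might look like:

```python
# Illustrative tile allocation: each lookup path is granted a disjoint set of
# homogeneous tiles sized to its programmed capacity need, so all lookups can
# proceed in parallel without collision. Tile counts are invented.
def allocate_tiles(total_tiles, demands):
    """demands: {lookup_path: tiles_needed}; returns disjoint tile ID sets."""
    free = list(range(total_tiles))
    allocation = {}
    for path, need in demands.items():
        if need > len(free):
            raise ValueError(f"not enough tiles for lookup path {path!r}")
        allocation[path], free = free[:need], free[need:]
    return allocation

print(allocate_tiles(16, {"lookup0": 4, "lookup1": 8, "lookup2": 2}))
```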
At the block 905, an input key of each lookup path is converted to a plurality of lookup indexes. Information for reading lookup data, such as the Tile IDs of the respective tiles that the lookup path will access and the addresses of memories in those tiles from which data will be read, becomes part of the lookup indexes. The Tile IDs and the memory addresses of each input key are sent to their corresponding tiles through the block 910, which is a central reconfigurable interconnection fabric. The central reconfigurable interconnection fabric 910 includes a plurality of configurable central networks. These central networks are configured based on the locations of the tiles that are reserved for the respective lookup path.
In each tile, at the block 920, pre-programmed keys and data are read from the memories at the addresses that had been previously converted from the corresponding input key (e.g., conversion at the block 905). These pre-programmed keys located in the memories are compared to the input key for the respective lookup path. If there is a match among these pre-programmed keys with the input key, then the tile returns hit data and a hit address. The hit information of each tile is collected by the respective lookup path that owns that tile through the block 925, which is an output reconfigurable interconnection network. Each lookup path performs another round of selection among the hit information of all tiles it owns at the block 930 before a final lookup result is returned for that lookup path.
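As an illustrative model of the per-tile match and the final selection round (the tie-break policy here, lowest tile ID wins, is an assumption, as are the key and memory contents):

```python
# Sketch: each tile compares its pre-programmed key at the converted address
# with the input key and reports a hit; the owning lookup path then selects
# among its tiles' hits to produce the final lookup result.
def tile_lookup(tile, address, input_key):
    entry = tile["memory"].get(address)          # (key, data) or None
    if entry and entry[0] == input_key:
        return {"tile_id": tile["tile_id"], "hit_addr": address, "data": entry[1]}
    return None

def final_result(owned_tiles, addresses, input_key):
    hits = [h for t, a in zip(owned_tiles, addresses)
            if (h := tile_lookup(t, a, input_key))]
    return min(hits, key=lambda h: h["tile_id"]) if hits else None

tiles = [{"tile_id": 0, "memory": {5: ("K1", "D0")}},
         {"tile_id": 1, "memory": {7: ("K1", "D1")}}]
print(final_result(tiles, [5, 7], "K1"))  # hit from tile 0 wins the selection
```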
After the hash size of each lookup is known, at a step 1015, the registers cfg_hash_sel and cfg_tile_offset in the index converters are configured accordingly. The cfg_hash_sel register selects a hash function for the lookup path. The cfg_tile_offset register adjusts the Tile ID of a lookup index for the lookup path. Meanwhile, at a step 1020, the central and output interconnect networks are configured to connect the lookup paths with their reserved tiles. All configuration bits for the index converters and networks can be automatically generated by a script according to the principles described herein. At a step 1025, the memories allocated for each lookup path are programmed. The programming technique is based on a D-LEFT lookup technique with M ways per lookup and P buckets per way. After all allocated memories are programmed, at a step 1030, the parallel lookup system 100 is ready to receive input keys and execute N lookups in parallel.
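A minimal sketch of D-LEFT-style programming with M ways and P buckets per way follows; the hash choice, table sizes and bucket capacity are assumptions for illustration, not the system's actual parameters:

```python
# D-LEFT-style insertion: each key hashes to one candidate bucket in every
# way, and the entry is placed in the least-occupied candidate bucket.
import hashlib

M_WAYS, P_BUCKETS, BUCKET_SLOTS = 4, 8, 2
ways = [[[] for _ in range(P_BUCKETS)] for _ in range(M_WAYS)]

def bucket_index(key, way):
    # Per-way hash function (illustrative choice).
    digest = hashlib.sha256(f"{way}:{key}".encode()).digest()
    return digest[0] % P_BUCKETS

def insert(key, value):
    candidates = [(len(ways[w][bucket_index(key, w)]), w) for w in range(M_WAYS)]
    occupancy, way = min(candidates)              # least-loaded way wins
    if occupancy >= BUCKET_SLOTS:
        raise RuntimeError("all candidate buckets full")
    ways[way][bucket_index(key, way)].append((key, value))

for k in ["10.0.0.1", "10.0.0.2", "10.0.0.3"]:
    insert(k, f"result-for-{k}")
print(sum(len(b) for w in ways for b in w), "entries programmed")
```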
Embodiments relate to multiple parallel lookups using a pool of shared lookup memories 108 by proper configuration of interconnection networks. The number of shared memories 108 reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories 108 are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity needed by that lookup. The tiles allocated for each lookup do not overlap with other lookups such that all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programmed based on how the tiles are allocated for each lookup. In some embodiments, the lookup memories and/or lookup memory system described herein are able to be the same as the lookup memories and/or lookup memory system described in U.S. patent application Ser. No. 14/142,511, entitled “Method and system for reconfigurable parallel lookups using multiple shared memories,” and filed Dec. 27, 2013, which is hereby incorporated by reference.
Counters
The counter block 110 is able to comprise a plurality of counters that are able to be programmed such that they are each bound to one or more events within the packet processing within the system 100 in order to track data about those selected events. Indeed, the counter block 110 is able to be configured to count, police and/or sample simultaneously on a packet. In other words, each counter (or counter block 110 sub-unit) is able to be configured to count, sample and/or police. For example, an LDE 106 is able to request that concurrent activity be monitored by the counter block 110 such that a packet may be sampled, policed and counted concurrently or simultaneously by the block 110. Additionally, each counter is able to be provisioned for an average case and to handle overflow via an overflow FIFO and an interrupt to a process monitoring the counters. This counter block architecture addresses a general optimization problem, which can be stated as: given N counters and a certain CPU read interval T, how to minimize the number of storage bits needed to store and operate these N counters. Equivalently, this general optimization problem can also be stated as: given N counters and a certain amount of storage bits, how to maximize the CPU read interval T. This counter block architecture extends the counter CPU read interval linearly with the depth of the overflow FIFO.
The overflow FIFO stores the associated counter identifications of all counters that are overflowing. Typically, as soon as any of the N counters 1105 starts overflowing, the associated counter identification of the overflowed counter is stored in the overflow FIFO 1110. An interrupt is sent to a CPU to read the overflow FIFO 1110 and the overflowed counter. After the overflowed counter is read, the overflowed counter is cleared or reset.
In a timing interval T, the number of counter overflows is M=ceiling(PPS*T/2^w), where PPS is packets per second and w is the bit width of each counter. The total count of packets during the interval T is PPS*T. Assume PPS is up to 654.8 MPPS, T=1, w=17 and N=16K. Based on these assumptions, there are up to 4,995 overflow events per second.
The overflow FIFO is typically M entries deep and log2(N) bits wide to capture all counter overflows. As such, the counter block 1100 requires w*N+M*log2(N) total storage bits, where M=ceiling(PPS*T/2^w).
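These sizing formulas can be checked directly with the example figures above:

```python
# Worked check of the sizing formulas, using the example figures from the
# text: PPS = 654.8 MPPS, T = 1 s, w = 17 bits, N = 16K counters.
import math

PPS, T, w, N = 654.8e6, 1, 17, 16 * 1024
M = math.ceil(PPS * T / 2**w)                 # overflow events in interval T
fifo_bits = M * math.ceil(math.log2(N))       # FIFO: M deep, log2(N) bits wide
total_bits = w * N + fifo_bits                # counter storage + overflow FIFO
print(M, fifo_bits, total_bits)               # on the order of 5,000 overflows/s
```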
At a step 1210, upon overflowing one of the at least one counter, the counter identification of the overflowed counter is stored in a queue. In some embodiments, the queue is a FIFO buffer. The queue is typically shared and used by all counters in the counter block 1100. In some embodiments, storing the counter identification in the queue sends an interrupt to the CPU to read values from the queue and the overflowed counter. It is possible to then calculate the actual value of the overflowed counter from the read values. After the overflowed counter is read by the CPU, the overflowed counter is typically cleared or reset.
For example, a counter with 5 as its counter identification is the first counter to overflow during arithmetic operations. The counter identification (i.e., 5) is then stored in the queue, presumably at the head of the queue since counter #5 is the first counter to overflow. In the meantime, the count in counter #5 can still be incremented. In the meantime, other counters can also overflow, with the counter identifications of those counters being stored in the queue.
An interrupt is sent to the CPU to read the value at the head of the queue (i.e., 5). The CPU reads the current value stored in the counter associated with the counter identification (i.e., counter #5). Since the counter width is known, the actual value of the counter can be calculated. Specifically, the actual value of the counter is 2^w plus the current value stored in the counter. Continuing with the example, assume the current value of counter #5 is 2 and w=17. The actual value of counter #5 is 131,074 (=2^17+2). As long as the queue is not empty, the CPU continuously reads and clears the values from the queue and the counters.
The final total count of a particular counter is: (the number of times the counter identification appears in the queue)*2^w plus the counter's remainder value.
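The reconstruction rule can be verified with a short simulation of a single w-bit wrap-around counter sharing an overflow FIFO (the event loop is illustrative):

```python
# Simulation of the reconstruction rule: a w-bit wrap-around counter pushes
# its ID to a shared overflow FIFO each time it wraps, and software recovers
# the true total as (appearances in FIFO) * 2^w + remainder.
from collections import deque

w = 17
counters = {5: 0}
overflow_fifo = deque()

def count_event(counter_id):
    counters[counter_id] = (counters[counter_id] + 1) % 2**w
    if counters[counter_id] == 0:               # wrapped: journal the ID
        overflow_fifo.append(counter_id)

total_events = 2**17 + 2                        # 131,074 events on counter #5
for _ in range(total_events):
    count_event(5)

appearances = sum(1 for cid in overflow_fifo if cid == 5)
actual = appearances * 2**w + counters[5]
print(actual)  # 131074, matching the worked example in the text
```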
Although the counters have been described as for counting packets, it should be noted that the counters can be used for counting anything, such as bytes. Generally, an expected total count during T is calculated as EPS*T, where EPS is events per second. An upper bound of this maximum total count during time interval T can be established or calculated since the network switch is typically designed with a certain bandwidth from which the event rate can be calculated. In some embodiments, the counters described herein are able to be the same as the counters described in U.S. patent application Ser. No. 14/302,343, entitled “Counter with overflow FIFO and a method thereof,” and filed Jun. 11, 2014, which is hereby incorporated by reference.
The SDN system, device and method described herein have numerous advantages. Specifically, as described above, they provide the advantage of utilizing a generic packet forwarding pipeline that is fully programmable such that the forwarding intelligence of various network protocol packets is imparted onto the LDEs through software. Additionally, the system provides the advantage of enabling complete software-defined control over the resource management for forwarding tables within the system, enabling the system to be configured to match the scaling profiles required by various places within the network. Further, the system provides the ability to programmatically customize the performance of the system, creating unified hardware and software that can be used in various deployments. Further, it allows deployment optimization tailored to application-specific needs. In other words, the system's software-defined flexibility provides the ability to customize the same switch microchip such that it provides the same high bandwidth and high port density despite being positioned in multiple different places within a network. Thus, the information processing system, device and method has many advantages.
The present invention has been described in terms of specific embodiments incorporating details to facilitate the understanding of the principles of construction and operation of the invention. Such reference herein to specific embodiments and details thereof is not intended to limit the scope of the claims appended hereto. It will be apparent to those skilled in the art that modifications may be made in the embodiment chosen for illustration without departing from the spirit and scope of the invention.
This application is a continuation of U.S. application Ser. No. 15/067,139, filed on Mar. 10, 2016, entitled “PROTOCOL INDEPENDENT PROGRAMMABLE SWITCH (PIPS) SOFTWARE DEFINED DATA CENTER NETWORKS,” which claims priority under 35 U.S.C. § 119(e) of the U.S. provisional patent application No. 62/133,166, entitled “PIPS: PROTOCOL INDEPENDENT PROGRAMMABLE SWITCH (PIPS) FOR SOFTWARE DEFINED DATA CENTER NETWORKS,” filed Mar. 13, 2015, and is a continuation-in-part of the co-pending U.S. patent application Ser. No. 14/144,270, entitled “APPARATUS AND METHOD OF GENERATING LOOKUPS AND MAKING DECISIONS FOR PACKET MODIFYING AND FORWARDING IN A SOFTWARE-DEFINED NETWORK ENGINE,” and filed Dec. 30, 2013, all of which are hereby incorporated by reference.