A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
This invention relates to digital data processing. More particularly, this invention relates to transfer of information between data networks and central processing units using hardware independent of the central processor.
The meanings of certain acronyms and abbreviations used herein are given in Table 1.
Data movements between a host, an accelerator and a network incur overhead that impacts system performance. A bus that interconnects these components is a particular bottleneck, as data may need to be transferred through the bus more than once. The problem has been addressed using a “bump-in-the-wire” architecture. This term is defined in Request for Comments (RFC) 4949 of the Internet Engineering Task Force (IETF) as an implementation approach that places a network security mechanism outside of the system that is to be protected. For example, commonly assigned U.S. Patent Application Publication No. 20160330301 by Raindel et al., which is herein incorporated by reference, discloses a bump-in-the-wire accelerator device that performs opportunistic decryption of received data when the packets carrying the data are received in order, without any packet loss. The accelerator logic decrypts the contents of these packets using computational context information, including cryptographic variables, from a table in local memory, and updates the table as required.
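By way of illustration only, the following Python sketch (using the widely available cryptography package) models that kind of per-flow context table; the table layout, the flow keying and the in-order sequence tracking are assumptions of the sketch, not details disclosed by Raindel et al.:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical per-flow context table: flow id -> cryptographic variables
# plus the next expected sequence number (opportunistic, in-order only).
context_table = {}

def register_flow(flow_id: int, key: bytes) -> None:
    context_table[flow_id] = {"aead": AESGCM(key), "next_seq": 0}

def opportunistic_decrypt(flow_id, seq, nonce, ciphertext, aad=b""):
    ctx = context_table.get(flow_id)
    if ctx is None or seq != ctx["next_seq"]:
        return None  # unknown flow or out-of-order packet: defer to software
    plaintext = ctx["aead"].decrypt(nonce, ciphertext, aad)
    ctx["next_seq"] += 1  # update the table as required
    return plaintext

key = AESGCM.generate_key(bit_length=128)
register_flow(7, key)
nonce = os.urandom(12)
ct = AESGCM(key).encrypt(nonce, b"payload", b"")
assert opportunistic_decrypt(7, 0, nonce, ct) == b"payload"
```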
In one approach, the following sequence occurs: data received from the network is accelerated by the accelerator and a NIC, and is then transmitted via the PCIe fabric to the host, already accelerated. In the opposite direction, the host sends un-accelerated data to the accelerator via the PCIe bus; the data is accelerated and transmitted to the network without traversing the PCIe bus again. However, this method has several drawbacks:
The accelerator (e.g., an FPGA) has to perform packet classification and parsing in order to “understand” (1) what data needs to be accelerated, and (2) whether such data is present in a given packet. The accelerator is further required to extract the context (key, state, operation) of the acceleration operation required by the packet.
For a non-virtualized environment (e.g., with no overlay network and a single software database), the packet classification and parsing tasks are feasible. However, these tasks require that the accelerator implement the parsing logic, which increases power consumption and reduces the portion of the accelerator that remains available for actual acceleration. In a virtualized environment this approach is considered impractical, for example, due to high memory requirements and the difficulties imposed by the constantly changing configuration of the virtualized environment.
A network node receives, from other nodes or devices in the network, data to be processed in a local accelerator. The node may additionally send results of accelerated processing tasks to the network. Embodiments of the present invention that are described herein provide improved methods and systems for data processing in nodes that employ accelerators. The terms “processing latency” and “processing delay” refer to the duration between message arrival and the processing start time. The duration between processing conclusion and the time of sending the results to the network is referred to herein as “sending latency” or “sending delay.” The disclosed techniques reduce both processing and sending latencies.
In the disclosed embodiments, a network node comprises a host, an accelerator and a network adapter such as a NIC, which communicate with one another over an internal bus. In an exemplary embodiment, the accelerator may comprise a Field Programmable Gate Array (FPGA), a Graphics Processing Unit (GPU), or even a generic processor, and the internal bus is a Peripheral Component Interconnect Express (PCIe) bus. In alternative embodiments, the node may comprise any other suitable network adapter, such as a Host Channel Adapter (HCA) in InfiniBand networks, and other bus technologies may be used.
The NIC contains a hierarchical packet processing pipeline that can be configured from several layers of software, each in isolation and independently of other modules, other VMs and other layers. Software applies packet processing rules, and the rules are executed according to the hierarchy of the software. For example, in a receive flow, a hypervisor may occupy the first hierarchical level, so the hypervisor rules apply first to alter and forward the packet. Thereafter, at a second hierarchical level, a specific guest operating system or virtual machine may implement other rules using hardware, as described below.
The packet processing pipeline is based on “match and action” rules. In embodiments of the invention, the accelerator is integrated with the packet processing pipeline, in the sense that it may be utilized repeatedly during processing of a packet in the pipeline, as is explained in further detail below. Examples of software that can utilize the pipeline include the hypervisor, the kernel of a virtual machine, and an application running in a virtual machine. Acceleration is thus embedded into the NIC steering pipeline.
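As a minimal, purely illustrative sketch, the following Python model shows a match-and-action pipeline in which some stages are acceleration-defined, handing data to an accelerator and resuming with the returned result. The stage set, the accelerator tasks and all names are assumptions of the sketch, not the NIC's actual hardware design:

```python
# Illustrative match-and-action pipeline with acceleration-defined stages.
def accelerator(task, data):
    # Stand-in for the FPGA/GPU: tag the data with the task performed.
    return f"{task}({data})"

class Stage:
    def __init__(self, match, action, accelerate=None):
        self.match = match            # predicate over the packet
        self.action = action          # transformation applied by the NIC
        self.accelerate = accelerate  # acceleration task name, if any

    def process(self, packet):
        if not self.match(packet):
            return packet
        if self.accelerate:
            # Acceleration-defined stage: send data out, resume with result.
            packet["payload"] = accelerator(self.accelerate, packet["payload"])
        return self.action(packet)

def run_pipeline(stages, packet):
    for stage in stages:
        packet = stage.process(packet)
    return packet

pipeline = [
    Stage(lambda p: True, lambda p: p),                       # classify/parse
    Stage(lambda p: p["encrypted"], lambda p: p, "decrypt"),  # acceleration-defined
    Stage(lambda p: True, lambda p: p, "decompress"),         # acceleration-defined
    Stage(lambda p: True, lambda p: dict(p, queue=7)),        # steer to a queue
]
print(run_pipeline(pipeline, {"payload": "data", "encrypted": True}))
```

Note that the accelerator is invoked twice while the packet traverses this pipeline, mirroring the repeated utilization described above.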
Application of the principles of the invention reduces PCIe traffic and host overhead in virtualized and non-virtualized environments. This allows relatively small accelerators to handle high-bandwidth data.
There is provided according to embodiments of the invention a method of communication, which is carried out by receiving a packet in a network interface controller that is connected to a host and a communications network. The network interface controller includes electrical circuitry configured as a packet processing pipeline with a plurality of stages. The method is further carried out by determining in the network interface controller that at least a portion of the stages of the pipeline are acceleration-defined stages, processing the packet in the pipeline by transmitting data to an accelerator from the acceleration-defined stages, performing respective acceleration tasks on the transmitted data in the accelerator, and returning processed data from the accelerator to receiving stages of the pipeline. The method is further carried out after processing the packet in the pipeline by routing the packet toward a destination.
In one aspect of the method, the stages of the pipeline are organized as a hierarchy, and each level of the hierarchy is configured by processes executing in respective domains.
According to a further aspect of the method the receiving stages differ from the acceleration-defined stages.
Yet another aspect of the method includes accessing the network interface controller by a plurality of virtual machines having respective virtual network interface controllers, and processed data that is returned from the accelerator is transmitted from one of the virtual network interface controllers to another of the virtual network interface controllers.
In still another aspect of the method, transmitting data to an accelerator includes adding metadata to the data, and determining in the accelerator responsively to the metadata whether to perform acceleration on the data or to direct the data to the communications network.
An additional aspect of the method includes performing one of the acceleration tasks in a sandbox unit of the accelerator, thereafter reporting a status of the one acceleration task from the accelerator to the network interface controller, and responsively to the status returning the processed data to the accelerator to perform another acceleration task. The one acceleration task can be a decryption of a portion of the packet, and the other acceleration task can be an acceleration of the decrypted portion of the packet.
According to one aspect of the method, transmitting data to an accelerator includes transmitting an indication to perform a specified acceleration task.
According to a further aspect of the method, transmitting data to an accelerator is performed in one virtual machine, and the indication includes an instruction to the accelerator to route the processed data to the host for use in another virtual machine.
According to yet another aspect of the method, routing the packet toward a destination includes routing the packet to the communications network while avoiding transmitting the packet to the host.
There is further provided according to embodiments of the invention a communications apparatus, including a host processor, a network interface controller coupled to the host processor and to a communications network, electrical circuitry configured as a multi-stage packet processing pipeline and an accelerator linked to the network interface controller. The network interface controller is configured for receiving a packet, determining that at least a portion of the stages of the pipeline are acceleration-defined stages, and processing the packet in the pipeline, wherein processing the packet includes transmitting data to the accelerator from the acceleration-defined stages, performing respective acceleration tasks on the transmitted data in the accelerator, returning processed data from the accelerator to receiving stages of the pipeline, and thereafter routing the packet toward a destination.
For a better understanding of the present invention, reference is made to the detailed description of the invention, by way of example, which is to be read in conjunction with the following drawings, wherein like elements are given like reference numerals, and wherein:
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the various principles of the present invention. It will be apparent to one skilled in the art, however, that not all these details are necessarily always needed for practicing the present invention. In this instance, well-known circuits, control logic, and the details of computer program instructions for conventional algorithms and processes have not been shown in detail in order not to obscure the general concepts unnecessarily.
Documents incorporated by reference herein are to be considered an integral part of the application except that, to the extent that any terms are defined in these incorporated documents in a manner that conflicts with definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
System Description.
Turning now to the Drawings, node 24 comprises a network interface controller (NIC) 30 for communicating with other nodes or devices in the network, and a host 34 that carries out the various tasks of the node. Host 34 comprises a central processing unit (CPU) 38 and a host memory 42 for storing code and data.
Processing data in a network node comprises various functions or jobs that can be expedited by offloading them to a hardware accelerator. Node 24 further comprises a processing accelerator 46 that can process data sent, for example, from some remote node or device. Typically, accelerator 46 comprises one or more processors 50 and an accelerator memory 54. Typical acceleration tasks include: IP fragmentation and defragmentation, network address translation (NAT), encryption, decryption, compression, decompression, processing of regular expressions, video encoding, decoding and transcoding, video downscaling and upscaling, traffic monitoring, traffic load balancing, scheduling, authentication, IP security (IPsec), SSL/TLS protocols, and other cipher algorithms.
In some embodiments, accelerator 46 comprises a field programmable gate array having processors 50. Alternatively, the accelerator 46 may be realized as a graphics processing unit, in which case processors 50 comprise multiple GPU cores that are typically designed for parallel rather than linear processing. In alternative embodiments, however, any other accelerator can also be used, such as, for example, an application-specific integrated circuit (ASIC), a ciphering accelerator, or an accelerator suitable for a storage system implementing a redundant array of independent disks (RAID). The accelerator and the host may reside in a common package or be implemented on separate packages.
Node 24 receives data from and sends data to the network using NIC 30. NIC 30 stores data received from the network in a receiver buffer 60, and data to be sent to the network in a sender buffer 64. NIC logic 68 manages the various tasks of NIC 30.
Host 34, accelerator 46 and NIC 30 communicate with one another via a high speed bus 70 or crossbar. In some embodiments, bus 70 comprises a Peripheral Component Interconnect Express (PCIe) bus. In alternative embodiments, bus 70 may comprise any suitable bus, such as, for example, Intel's QuickPath Interconnect (QPI) bus, or AMD's HyperTransport (HT) bus. In some embodiments, host 34 comprises a PCIe switch (not shown), to which the accelerator and the NIC connect using bus 70. The NIC, the accelerator and the host may connect to separate buses of different technologies, and interconnect via dedicated interfaces. Alternatively, the accelerator may be incorporated within the NIC.
Bus 70 enables NIC 30 to directly access host memory 42 and accelerator memory 54. In some embodiments, the host and/or accelerator memories are not fully accessible, and the NIC has access to only parts of host memory 42 and/or accelerator memory 54. A bus architecture feature that enables a device connected to the bus to initiate transactions is also referred to as “bus mastering” or direct memory access (DMA). The access time between processors 50 and accelerator memory 54 within the accelerator is typically faster than communication transactions made over bus 70. Nevertheless, the bus 70 can be a bottleneck in data movements that include: data sent from the host 34 to the accelerator 46 for acceleration; accelerated data sent from the accelerator 46 to the host 34; and accelerated data transmitted from the host 34 via the NIC 30 to the network. The bus 70 can become overloaded, as the same data may need to be transferred through it twice (written and then read back), which effectively doubles the bus traffic and the latency incurred when data is passed via the bus 70. For example, a data flow that crosses the bus once on its way to the accelerator and once on its way back consumes twice the flow's bandwidth in bus capacity.
The configuration of node 24 shown in the figure is an example configuration, chosen purely for the sake of conceptual clarity.
In some embodiments, certain node elements, such as host CPU 38, may comprise a general-purpose computer, which is programmed in software to carry out the functions described herein. The software may be downloaded to the computer in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.
Stage 76 and action 84 are accomplished in the NIC. Stage 78 results in action 86, which is performed in the accelerator. The result of action 86 is returned to the NIC, which then begins stage 80. This results in action 90, which is also performed in the accelerator. Finally, stage 82 and action 88, accomplished without need for the accelerator, result in a disposition of the packet.
An important aspect of the pipeline 72 is its operations on metadata that the NIC passes to the accelerator or that the accelerator passes to the NIC. Use of this metadata makes the accelerator smaller, easier to use, and more general. Relevant metadata include: metadata that is passed to and from applications; metadata that is created by applications running on the CPU and consumed by the accelerator; and metadata that is created by the accelerator and consumed by applications running on the CPU. Typical relevant metadata include packet payload length, flow identification, key index and packet header information, including errors in the headers. Other categories of metadata include metadata used for packet steering in the NIC, and metadata returned by the accelerator that can be used by the NIC as a source for subsequent match-action operations.
Metadata may be transferred as part of a descriptor over a PCI-based protocol, or as a packet header or encapsulation layer.
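As a purely illustrative sketch, a descriptor of this kind might carry fields along the following lines; the field names, types and the dataclass representation are assumptions chosen to mirror the metadata categories listed above, not a disclosed wire format:

```python
from dataclasses import dataclass

# Hypothetical NIC<->accelerator metadata descriptor (illustrative only).
@dataclass
class AccelDescriptor:
    payload_length: int      # packet payload length
    flow_id: int             # flow identification
    key_index: int           # index of the cryptographic key/context
    header_errors: int = 0   # errors detected in the packet headers
    steering_hint: int = 0   # consumed by the NIC for packet steering
    accel_status: int = 0    # written by the accelerator; the NIC may use it
                             # as a source for later match-action operations

desc = AccelDescriptor(payload_length=1400, flow_id=0x2A, key_index=9)
```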
In this embodiment the NIC contains a hierarchical pipeline, typically implemented as configurable electrical circuitry. Details of the circuitry are omitted, as they will be known to those skilled in the art. Layers of the hierarchy may be configured by layers of software, independently of other layers.
Each of the layers 94, 96, 98 constitutes a separate domain. Layer 98 is a component of an embedded switch that deals with initial reception and final transmission of packets. It is typically configured by NIC driver 100. Layer 96 involves packet steering controlled in the kernel, e.g., in the NIC driver 100. Layer 94 involves packet steering controlled by any number of applications executing in one or more virtual machines.
Separate processing pipelines may be provided in the layers 94, 96, 98 for incoming and outgoing packets. The pipelines are fed to and from receiving queues 102 and transmission queues 104.
While a linear arrangement of the stages is shown in the examples above, this arrangement is not obligatory.
Indeed, in embodiments of the pipeline, the output of a stage could be returned to a previous stage of the same domain, although this presents difficulties in actual practice. For example, an infinite loop might occur, which would need to be detected and dealt with, possibly resulting in packet loss.
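One conventional safeguard is a per-packet traversal limit. The Python sketch below illustrates that idea; the limit value, the stage interface (each stage returns the index of the next stage) and the drop policy are assumptions, since the specification notes only that such loops must be detected and may result in packet loss:

```python
MAX_TRAVERSALS = 8  # assumed limit; a real pipeline would tune this

def run_with_loop_guard(stages, packet):
    hops, i = 0, 0
    while i < len(stages):
        # A stage may return an earlier index, creating a feedback loop.
        i, packet = stages[i](packet)
        hops += 1
        if hops > MAX_TRAVERSALS:
            return None  # loop detected: drop the packet
    return packet

# A stage that sends the packet back to stage 0 until a condition is met.
stages = [
    lambda p: (1, p + ["fwd"]),
    lambda p: (0, p) if len(p) < 3 else (2, p + ["done"]),
]
print(run_with_loop_guard(stages, []))  # ['fwd', 'fwd', 'fwd', 'done']
```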
Operation.
Consider, for example, an encrypted packet in an overlay network. A typical hypervisor pipeline configuration might be as follows (a code sketch of this configuration appears after the listing):
If packet includes a VXLAN header and the VXLAN id is X, then pass packet to Guest Y.
Guest Y: If packet is encrypted, then:
Accelerate using key=9;
Count packet; and
Send packet to queue #7.
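By way of illustration only, the two-level configuration above can be rendered as ordered rule tables, with the hypervisor rules applied before the guest rules; the rule encoding, the concrete VXLAN identifier and the decrypt operation below are assumptions of the sketch:

```python
VXLAN_ID_X = 42  # the "X" of the rule above; the value here is arbitrary

def apply_rules(rules, packet):
    for match, action in rules:
        if match(packet):
            packet = action(packet)
    return packet

# First hierarchical level: the hypervisor steers the tunnel to Guest Y.
hypervisor_rules = [
    (lambda p: p.get("vxlan_id") == VXLAN_ID_X,
     lambda p: dict(p, owner="guest_y")),
]

# Second hierarchical level: Guest Y's rules run only on its own packets.
guest_y_rules = [
    (lambda p: p.get("encrypted"),
     lambda p: dict(p, accel=("decrypt", {"key_index": 9}),
                    counted=True, queue=7)),
]

packet = {"vxlan_id": VXLAN_ID_X, "encrypted": True}
packet = apply_rules(hypervisor_rules, packet)
if packet.get("owner") == "guest_y":
    packet = apply_rules(guest_y_rules, packet)
print(packet)  # carries the acceleration request, the count, and queue #7
```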
Initial step 114 comprises step 116 in which the packet arrives from the data network, and step 118, in which the packet is transmitted to the NIC from the host.
Next, after performing initial step 114, packet processing tasks begin in step 120. At decision step 122, it is determined if the packet requires acceleration. If the determination at decision step 122 is negative, then control proceeds to final step 124. The packet is routed conventionally by the NIC to the host or to the network, as the case may be.
If the determination at decision step 122 is affirmative, then control proceeds to step 126. At least a portion of the packet data, e.g., the packet header, is sent to an accelerator, such as the accelerator 46, together with metadata that provides “hints” to the accelerator, for example:
1. The payload starts at an offset of 86 bytes, so the accelerator does not need to parse the header.
2. The acceleration operation is <encrypt using AES-GCM>, so the accelerator does not need to determine what to do.
3. Stage the packet in the accelerator memory (if multiple accelerations are needed).
For example, the flow identification could be extracted from the packet. Another acceleration task might be the determination of quality-of-service requirements for the packet, which would influence its subsequent handling in the host.
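For illustration, flow identification is often the classic IPv4 5-tuple; the Python sketch below extracts it from a raw Ethernet frame. The fixed offsets assume an untagged, untunneled IPv4 frame, whereas a real NIC parser handles far more header combinations:

```python
import socket
import struct

def flow_id(frame: bytes):
    """Return (src IP, dst IP, protocol, src port, dst port) or None."""
    if struct.unpack_from("!H", frame, 12)[0] != 0x0800:  # EtherType: IPv4
        return None
    ihl = (frame[14] & 0x0F) * 4   # IPv4 header length in bytes
    proto = frame[23]              # protocol field of the IPv4 header
    if proto not in (6, 17):       # TCP or UDP only
        return None
    src_ip = socket.inet_ntoa(frame[26:30])
    dst_ip = socket.inet_ntoa(frame[30:34])
    sport, dport = struct.unpack_from("!HH", frame, 14 + ihl)
    return (src_ip, dst_ip, proto, sport, dport)
```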
Next, at step 128 the acceleration task specified by the metadata in step 126 is performed by acceleration logic in the accelerator. Then, in step 130 the accelerated data is returned to the NIC.
Next, at decision step 132, it is determined if more packet processing tasks need to be performed. The NIC is responsive in this step to the pre-programmed software defining the acceleration pipeline. If the determination at decision step 132 is affirmative, then control returns to step 120, which may require more data and metadata to be submitted to the accelerator.
If the determination at decision step 132 is negative, then control proceeds to final step 134. Final step 134 comprises step 136, in which data is sent to the network, and step 138, in which data is sent to the host. The data sent to the network may be included in a modification of the packet currently being processed, incorporated in a new packet, or presented in another format.
The PCIe interface 162 connects FPGA 158 with NIC 156 as a different entity from host 170. A separate PCIe switch 172 connects the host 170 and NIC 156 through PCIe fabric 174. A software-configured steering table in NIC 156 directs the packet to FPGA 158 each time an action is required. The packet may be accompanied by metadata added by NIC 156, which allows FPGA 158 to understand the context of the packet and allows NIC 156 to continue the packet processing pipeline from the place where it left off. The metadata also enables FPGA 158 to identify network errors in the packet (confirmed by NIC 156), and enables FPGA 158 to report the acceleration status of the packet.
Routing decisions are always made by the NIC 156, not by the FPGA 158. However, the FPGA 158 makes the needed information available so that the NIC 156 can make the correct decision, for example, whether the acceleration operation has succeeded or failed. The FPGA 158 sends the data back to NIC 156 through the PCIe interface 162 in flow 4. NIC 156 now determines that one or more additional acceleration procedures are needed and sends the data to FPGA 158 through the Ethernet interface 164, with new “hints”, in flow 5. Flow 6 is similar to flow 2, flow 7 is similar to flow 3, flow 8 is similar to flow 4, and flow 9 represents the final pass through NIC 156, after all the acceleration passes in FPGA 158 have been completed. NIC 156 then sends the data to the network 160, applying stateless offloads, such as checksums, that do not require the accelerator.
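The multi-pass exchange can be summarized in a short Python sketch; the task queue, the “hint” strings and the status reporting below are assumptions standing in for the steering-table state that NIC 156 actually maintains:

```python
def fpga(task, data, hints):
    # Stand-in for FPGA 158: perform the task and report status to the NIC.
    return f"{task}[{hints}]({data})", "success"

def nic_process(data, pending_tasks):
    # Flows 1-8: the NIC steers the data to the FPGA once per pending task.
    while pending_tasks:
        task, hints = pending_tasks.pop(0)
        data, status = fpga(task, data, hints)
        if status != "success":
            return None  # the NIC, not the FPGA, makes the routing decision
    # Flow 9: final pass; the NIC applies stateless offloads (e.g., checksum)
    # and sends the data to the network.
    return data

print(nic_process("payload", [("decrypt", "key=9"), ("decompress", "lz4")]))
```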
It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof that are not in the prior art, which would occur to persons skilled in the art upon reading the foregoing description.
This Application claims the benefit of U.S. Provisional Application No. 62/582,997, filed 8 Nov. 2017, which is herein incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
6901496 | Mukund et al. | May 2005 | B1 |
7657659 | Lambeth et al. | Feb 2010 | B1 |
8006297 | Johnson et al. | Aug 2011 | B2 |
8103785 | Crowley et al. | Jan 2012 | B2 |
8824492 | Wang | Sep 2014 | B2 |
9038073 | Kohlenz et al. | May 2015 | B2 |
9678818 | Raikin et al. | Jun 2017 | B2 |
9904568 | Vincent | Feb 2018 | B2 |
10078613 | Ramey | Sep 2018 | B1 |
10120832 | Raindel et al. | Nov 2018 | B2 |
10152441 | Liss et al. | Dec 2018 | B2 |
10210125 | Burstein | Feb 2019 | B2 |
10218645 | Raindel et al. | Feb 2019 | B2 |
10423774 | Zelenov et al. | Apr 2019 | B1 |
10382350 | Bohrer | Aug 2019 | B2 |
20030023846 | Krishna et al. | Jan 2003 | A1 |
20040039940 | Cox et al. | Feb 2004 | A1 |
20040057434 | Poon et al. | Mar 2004 | A1 |
20040158710 | Buer et al. | Aug 2004 | A1 |
20050102497 | Buer | May 2005 | A1 |
20050198412 | Pedersen et al. | Sep 2005 | A1 |
20060095754 | Hyder et al. | May 2006 | A1 |
20060104308 | Pinkerton et al. | May 2006 | A1 |
20090086736 | Foong et al. | Apr 2009 | A1 |
20090106771 | Benner et al. | Apr 2009 | A1 |
20090319775 | Buer et al. | Dec 2009 | A1 |
20090328170 | Williams et al. | Dec 2009 | A1 |
20100228962 | Simon et al. | Sep 2010 | A1 |
20120314709 | Post et al. | Dec 2012 | A1 |
20130080651 | Pope et al. | Mar 2013 | A1 |
20130125125 | Karino et al. | May 2013 | A1 |
20130142205 | Munoz | Jun 2013 | A1 |
20130263247 | Jungck et al. | Oct 2013 | A1 |
20130276133 | Hodges et al. | Oct 2013 | A1 |
20130329557 | Petry | Dec 2013 | A1 |
20130347110 | Dalal | Dec 2013 | A1 |
20140129741 | Shahar et al. | May 2014 | A1 |
20140185616 | Bloch et al. | Jul 2014 | A1 |
20140254593 | Mital | Sep 2014 | A1 |
20140282050 | Quinn et al. | Sep 2014 | A1 |
20140282561 | Holt et al. | Sep 2014 | A1 |
20150100962 | Morita et al. | Apr 2015 | A1 |
20150288624 | Raindel et al. | Oct 2015 | A1 |
20150347185 | Holt et al. | Dec 2015 | A1 |
20150355938 | Jokinen et al. | Dec 2015 | A1 |
20160132329 | Gupte et al. | May 2016 | A1 |
20160226822 | Zhang | Aug 2016 | A1 |
20160330112 | Raindel et al. | Nov 2016 | A1 |
20160330301 | Raindel et al. | Nov 2016 | A1 |
20160342547 | Liss et al. | Nov 2016 | A1 |
20160350151 | Zou et al. | Dec 2016 | A1 |
20160378529 | Wen | Dec 2016 | A1 |
20170075855 | Sajeepa et al. | Mar 2017 | A1 |
20170180273 | Daly | Jun 2017 | A1 |
20170237672 | Dalal | Aug 2017 | A1 |
20170264622 | Cooper et al. | Sep 2017 | A1 |
20170286157 | Hasting | Oct 2017 | A1 |
20180004954 | Liguori et al. | Jan 2018 | A1 |
20180067893 | Raindel et al. | Mar 2018 | A1 |
20180109471 | Chang | Apr 2018 | A1 |
20180114013 | Sood et al. | Apr 2018 | A1 |
20180210751 | Pepus et al. | Jul 2018 | A1 |
20180219770 | Wu et al. | Aug 2018 | A1 |
20180219772 | Koster et al. | Aug 2018 | A1 |
20180246768 | Palermo et al. | Aug 2018 | A1 |
20180262468 | Kumar et al. | Sep 2018 | A1 |
20180285288 | Bernat et al. | Oct 2018 | A1 |
20180329828 | Apfelbaum et al. | Nov 2018 | A1 |
20190012350 | Sindhu | Jan 2019 | A1 |
20190026157 | Suzuki et al. | Jan 2019 | A1 |
20190116127 | Pismenny et al. | Apr 2019 | A1 |
20190163364 | Gibb et al. | May 2019 | A1 |
20190173846 | Patterson et al. | Jun 2019 | A1 |
20190250938 | Claes et al. | Aug 2019 | A1 |
20200012604 | Agarwal | Jan 2020 | A1 |
20200026656 | Liao et al. | Jan 2020 | A1 |
Number | Date | Country |
---|---|---|
1657878 | May 2006 | EP |
2463782 | Jun 2012 | EP |
2010062679 | Jun 2010 | WO |
Entry |
---|
U.S. Appl. No. 15/701,459 office action dated Dec. 27, 2018. |
Dierks et al., “The Transport Layer Security (TLS) Protocol Version 1.2”, Request for Comments: 5246, pp. 1-104, Aug. 2008. |
Turner et al., “Prohibiting Secure Sockets Layer (SSL) Version 2.0”, Request for Comments: 6176, pp. 1-4, Mar. 2011. |
Rescorla et al., “The Transport Layer Security (TLS) Protocol Version 1.3”, Request for Comments: 8446, pp. 1-160, Aug. 2018. |
Comer, “Packet Classification: A Faster, More General Alternative to Demultiplexing”, The Internet Protocol Journal, vol. 15, no. 4, pp. 12-22, Dec. 2012. |
U.S. Appl. No. 15/146,013 Office Action dated Dec. 19, 2018. |
Salowey et al., “AES Galois Counter Mode (GCM) Cipher Suites for TLS”, Request for Comments: 5288, pp. 1-8, Aug. 2008. |
International Application # PCT/IB2018/058705 search report dated Feb. 18, 2019. |
International Application # PCT/IB2018/059824 search report dated Mar. 22, 2019. |
Shirey, “Internet Security Glossary, Version 2”, Request for Comments 4949, 365 pages, Aug. 2007. |
Information Sciences Institute, “Transmission Control Protocol; DARPA Internet Program Protocol Specification”, Request for Comments 793, 90 pages, Sep. 1981. |
InfiniBand™ Architecture Specification, vol. 1, Release 1.3, 1842 pages, Mar. 3, 2015. |
Stevens, “TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms”, Request for Comments 2001, 6 pages, Jan. 1997. |
Netronome Systems, Inc., “Open vSwitch Offload and Acceleration with Agilio® CX SmartNICs”, White Paper, 7 pages, Mar. 2017. |
PCI Express® Base Specification ,Revision 3.0, 860 pages, Nov. 10, 2010. |
Bohrer et al., U.S. Appl. No. 15/701,459, filed Sep. 12, 2017. |
U.S. Appl. No. 15/146,013 office action dated May 18, 2018. |
Menachem et al., U.S. Appl. No. 15/841,339, filed Dec. 14, 2017. |
Pismenny et al., U.S. Appl. No. 62/572,578, filed Oct. 16, 2017. |
European Application # 201668019 search report dated May 29, 2020. |
Burstein, “Enabling Remote Persistent Memory”, SNIA—PM Summit, pp. 1-24, Jan. 24, 2019. |
Chung et al., “Serving DNNs in Real Time at Datacenter Scale with Project Brainwave”, IEEE Micro Pre-Print, pp. 1-11, Mar. 22, 2018. |
Talpey, “Remote Persistent Memory—With Nothing But Net”, SNIA—Storage developer conference, pp. 1-30, year 2017. |
Microsoft, “Project Brainwave”, pp. 1-5, year 2019. |
U.S. Appl. No. 16/202,132 office action dated Apr. 2, 2020. |
U.S. Appl. No. 16/159,767 Office Action dated Oct. 1, 2020. |
Number | Date | Country | |
---|---|---|---|
20190140979 A1 | May 2019 | US |
Number | Date | Country | |
---|---|---|---|
62582997 | Nov 2017 | US |