The subject matter disclosed herein relates to neural network processing devices. More particularly, the subject matter disclosed herein relates to a neural processing unit that is reconfigurable to process a fine-grain structured sparsity weight arrangement selected from N:M=1:4, 2:4, 2:8 and 4:8 fine-grain structured weight sparsity arrangements.
Processing of Deep Neural Networks (DNNs) may be accelerated by Neural Processing Units (NPUs). In particular, the sparsity of operands associated with General Matrix Multiply (GEMM) operations in DNNs may be exploited to accelerate operations performed by NPUs. Fine-grained structured sparsity, especially N:M sparsity (N nonzero elements out of every M weight values), may help maintain DNN accuracy and save hardware overhead as compared to random sparsity. Existing technology, however, supports only one N:M configuration (i.e., N:M=2:4), while there are additional fine-grained structured sparsity configurations, such as N:M={1:4, 4:8, 2:8}.
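As a point of reference (not taken from the disclosure), the following sketch illustrates what an N:M fine-grained structured sparsity pattern looks like in software: within every group of M consecutive weights, the N largest-magnitude values are kept and the remainder are zeroed. The function name and the magnitude-based selection criterion are illustrative assumptions.

```python
# Illustrative sketch (not from the disclosure): prune a weight matrix to an
# N:M fine-grained structured sparsity pattern by keeping, in every group of M
# consecutive weights along the last axis, the N largest-magnitude values.
import numpy as np

def prune_n_of_m(weights: np.ndarray, n: int, m: int) -> np.ndarray:
    """Return a copy of `weights` with N:M structured sparsity along the last axis."""
    assert weights.shape[-1] % m == 0, "last dimension must be a multiple of M"
    out = weights.copy()
    groups = out.reshape(-1, m)                              # one row per group of M weights
    drop = np.argsort(np.abs(groups), axis=1)[:, : m - n]    # smallest-magnitude positions
    np.put_along_axis(groups, drop, 0.0, axis=1)             # zero the (M - N) smallest per group
    return groups.reshape(weights.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((2, 8))
    print(prune_n_of_m(w, n=2, m=4))                          # every group of 4 keeps 2 nonzeros
```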
An example embodiment provides a neural processing unit that may include a weight buffer, a weight multiplexer array, an activation buffer, an activation multiplexer array and a multiplier array. The weight buffer may be configured to store weight values in a fine-grain structured sparsity weight arrangement selected from a group of fine-grain structured sparsity weight arrangements that may include at least two arrangements of a 1:4 fine-grain structured sparsity weight arrangement, a 2:4 fine-grain structured sparsity weight arrangement, a 4:8 fine-grain structured sparsity weight arrangement, and a 2:8 fine-grain structured sparsity weight arrangement. The weight multiplexer array may be configured to output one or more weight values stored in the weight buffer as first operand values based on the selected fine-grain structured sparsity weight arrangement. The activation buffer may be configured to store activation values. The activation multiplexer array may include inputs coupled to the activation buffer, and the activation multiplexer array may be configured to output one or more activation values stored in the activation buffer as second operand values in which each respective second operand value and a corresponding first operand value form an operand value pair. The multiplier array may be configured to output a product value for each operand value pair. In one embodiment, the activation buffer may include 8 activation registers to store 8 activation values, the weight multiplexer array may include a first weight multiplexer configured to select a weight register based on the selected fine-grain structured sparsity weight arrangement and to output the weight value stored in the selected weight register as a first operand value, the activation multiplexer array may include a first activation multiplexer that may include seven inputs in which each respective input of the first activation multiplexer may be connected to a corresponding activation register within a first group of activation registers, the first activation multiplexer may be configured to select an activation register in the first group of activation registers based on the selected fine-grain structured sparsity weight arrangement and to output the activation value stored in the selected activation register as a second operand value in which the second operand value corresponds to the first operand value and forms a first operand value pair, and the multiplier array may include a first multiplier unit configured to output a product value for the first operand value pair. In another embodiment, the weight values may be stored in the weight buffer in a 1:4 fine-grain structured sparsity weight arrangement, or in a 2:8 fine-grain structured sparsity weight arrangement, and the first group of activation registers may include 7 activation registers. In still another embodiment, the weight values may be stored in the weight buffer in a 2:4 fine-grain structured sparsity weight arrangement, and the first group of activation registers may include 4 activation registers. In yet another embodiment, the weight values may be stored in the weight buffer in a 4:8 fine-grain structured sparsity weight arrangement, and the first group of activation registers may include 6 activation registers.
In one embodiment, the weight values may be arranged in a 2:8 fine-grain structured sparsity configuration, and the activation registers may include two rows of four activation registers in which two output multiplexers may be configured to select one activation register from each row. In another embodiment, the weight values may be arranged in a 2:4 fine-grained structured sparsity configuration, and the activation registers may include two rows of two activation registers in which two output multiplexers are configured to select one activation register from each row.
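For readers who find a behavioral model helpful, the sketch below approximates, in software, how compressed nonzero weights and per-weight metadata could select matching activations so that a multiplier array receives aligned operand pairs. The class and function names and the metadata layout are assumptions made for illustration and are not the disclosed hardware structure.

```python
# Hypothetical behavioral model (names and data layout are assumptions, not the
# disclosed hardware): each nonzero weight carries metadata giving its position
# within its group of M, and that metadata drives the activation multiplexer so
# that the multiplier array receives matched (weight, activation) operand pairs.
from dataclasses import dataclass

@dataclass
class SparseWeightGroup:
    values: list[float]   # the N nonzero weight values of one M-wide group
    indices: list[int]    # position of each nonzero within the group (0..M-1)

def dot_product_n_of_m(groups: list[SparseWeightGroup],
                       activations: list[float], m: int) -> float:
    """Accumulate products of nonzero weights with the activations they select."""
    acc = 0.0
    for g_idx, group in enumerate(groups):
        abuf = activations[g_idx * m:(g_idx + 1) * m]   # activation buffer slice for this group
        for w, idx in zip(group.values, group.indices):
            acc += w * abuf[idx]                        # AMUX select + multiply
    return acc

if __name__ == "__main__":
    groups = [SparseWeightGroup([0.5, -1.0], [1, 3]),   # a 2:4 group
              SparseWeightGroup([2.0, 0.25], [0, 2])]
    acts = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
    print(dot_product_n_of_m(groups, acts, m=4))        # 0.5*2 - 1*4 + 2*5 + 0.25*7 = 8.75
```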
An example embodiment provides a neural processing unit that may include a first weight buffer, a first weight multiplexer, a first activation buffer, a first activation multiplexer, and a first multiplier unit. The first weight buffer may include an array of first weight registers in which each first weight register may be configured to store a weight value in a fine-grain structured sparsity weight arrangement selected from a group of fine-grain structured sparsity weight arrangements that may include at least two arrangements of a 1:4 fine-grain structured sparsity weight arrangement, a 2:4 fine-grain structured sparsity weight arrangement, a 4:8 fine-grain structured sparsity weight arrangement, and a 2:8 fine-grain structured sparsity weight arrangement. The first weight multiplexer may be configured to select a first weight register based on the selected fine-grain structured sparsity weight arrangement and output the weight value stored in the selected first weight register as a first operand value. The first activation buffer may include a first predetermined number of first activation registers in which each first activation register may be configured to store an activation value. The first activation multiplexer may include a second predetermined number of first activation multiplexer inputs in which each respective input of the first activation multiplexer may be connected to a corresponding first activation register within a first group of first activation registers, and in which the first activation multiplexer may be configured to select a first activation register based on the selected fine-grain structured sparsity weight arrangement and output the activation value stored in the selected first activation register as a second operand value, the activation value output as the second operand value corresponding to the weight value output as the first operand value. The first multiplier unit may be configured to output a first product value of the first operand value and the second operand value. In one embodiment, the first predetermined number of first activation registers may be 8, and the second predetermined number of first activation multiplexer inputs may be 7. In another embodiment, the weight values may be arranged in a 1:4 fine-grain structured sparsity configuration. In still another embodiment, the weight values may be arranged in a 2:4 fine-grain structured sparsity configuration. In yet another embodiment, the weight values may be arranged in a 4:8 fine-grain structured sparsity configuration. In one embodiment, the weight values may be arranged in a 2:8 fine-grain structured sparsity configuration.
In another embodiment, the neural processing unit may further include a second weight multiplexer, a second activation multiplexer and a second multiplier unit in which the second weight multiplexer may be configured to select a first weight register based on the selected fine-grain structured sparsity weight arrangement and output the weight value stored in the selected first weight register as a third operand value, the second activation multiplexer may include a second predetermined number of second activation multiplexer inputs in which each respective input of the second activation multiplexer may be connected to a corresponding first activation register within a second group of first activation registers that is different from the first group of first activation registers, the second activation multiplexer may be configured to select a first activation register based on the selected fine-grain structured sparsity weight arrangement and output the activation value stored in the selected first activation register as a fourth operand value, and the activation value output as the fourth operand value corresponds to the weight value output as the third operand value, and the second multiplier unit may be configured to output a second product value of the third operand value and the fourth operand value. In still another embodiment, the neural processing unit may further include a second weight buffer, a third weight multiplexer, a second activation buffer, a third activation multiplexer, a third multiplier unit, a fourth weight multiplexer, a fourth activation multiplexer, and a fourth multiplier unit in which the second weight buffer may be configured to store weight values of fine-grain structured sparsity weights based on the selected fine-grain structured sparsity weight arrangement, the third weight multiplexer may be configured to select a second weight register based on the selected fine-grain structured sparsity weight arrangement and output the weight value stored in the selected second weight register as a fifth operand value, the second activation buffer may include a first predetermined number of second activation registers in which each second activation register may be configured to store an activation value, the third activation multiplexer may include a second predetermined number of third activation multiplexer inputs in which each respective input of the third activation multiplexer may be connected to a corresponding second activation register within a third group of second activation registers, the third activation multiplexer may be configured to select a second activation register based on the selected fine-grain structured sparsity weight arrangement and output the activation value stored in the selected second activation register as a sixth operand value, the activation value output as the sixth operand value corresponding to the weight value output as the fifth operand value, the third multiplier unit may be configured to output a third product value of the fifth operand value and the sixth operand value, the fourth weight multiplexer may be configured to select a second weight register based on the selected fine-grain structured sparsity weight arrangement and output the weight value stored in the selected second weight register as a seventh operand value, the fourth activation multiplexer may include a second predetermined number of fourth activation multiplexer inputs in which each respective input of the fourth activation multiplexer may be connected to a corresponding second
activation register within a fourth group of second activation registers that is different from the third group of second activation registers and in which the fourth activation multiplexer may be configured to select a second activation register based on the selected fine-grain structured sparsity weight arrangement and output the activation value stored in the selected second activation register as an eighth operand value, and the fourth multiplier unit may be configured to output a fourth product value of the seventh operand value and the eighth operand value. In one embodiment, the first predetermined number of first activation registers may be 8 first activation registers, the second predetermined number of first activation multiplexer inputs may be 7 first activation multiplexer inputs, the second predetermined number of second activation multiplexer inputs may be 7 second activation multiplexer inputs, the first predetermined number of second activation registers may be 8 second activation registers, the second predetermined number of third activation multiplexer inputs may be 7 third activation multiplexer inputs, and the second predetermined number of fourth activation multiplexer inputs may be 7 fourth activation multiplexer inputs. In another embodiment, the weight values may be arranged in a 1:4 fine-grain structured sparsity configuration, the first group of first activation registers may include four first activation registers, the second group of first activation registers may include four first activation registers and is different from the first group of first activation registers, the third group of second activation registers may include four second activation registers, and the fourth group of second activation registers may include four second activation registers and is different from the third group of second activation registers. In still another embodiment, the weight values may be arranged in a 2:8 fine-grain structured sparsity configuration, the first group of first activation registers may include seven first activation registers, the second group of first activation registers may include seven first activation registers and is different from the first group of first activation registers, the third group of second activation registers may include seven second activation registers, and the fourth group of second activation registers may include seven second activation registers and is different from the third group of second activation registers. In yet another embodiment, the weight values may be arranged in a 2:4 fine-grain structured sparsity configuration, activation values may be stored in four first activation registers of the first activation buffer and in four second activation registers of the second activation buffer, the first group of first activation registers may include the four first activation registers storing activation values, the second group of first activation registers may include a same four first activation registers as the first group of first activation registers, the third group of second activation registers may include the four second activation registers storing activation values, and the fourth group of second activation registers may include a same four second activation registers as the third group of second activation registers.
In one embodiment, the weight values may be arranged in a 4:8 fine-grain structured sparsity configuration, activation values may be stored in six first activation registers of the first activation buffer and in six second activation registers of the second activation buffer, the first group of first activation registers may include the six first activation registers storing activation values, the second group of first activation registers may include a same six first activation registers as the first group of first activation registers, the third group of second activation registers may include six second activation registers storing activation values, and the fourth group of second activation registers may include a same six second activation registers as the third group of second activation registers. In still another embodiment, the weight values may be arranged in a 2:8 fine-grain structured sparsity configuration, and the first activation registers may include two rows of four activation registers in which two output multiplexers may be configured to select one activation register from each row. In yet another embodiment, the weight values may be arranged in a 2:4 fine-grain structured sparsity configuration, and the first activation registers may include two rows of two activation registers in which two output multiplexers may be configured to select one activation register from each row.
In the following section, the aspects of the subject matter disclosed herein will be described with reference to exemplary embodiments illustrated in the figures, in which:
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. It will be understood, however, by those skilled in the art that the disclosed aspects may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail to not obscure the subject matter disclosed herein.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment disclosed herein. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or “according to one embodiment” (or other phrases having similar import) in various places throughout this specification may not necessarily all be referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments. In this regard, as used herein, the word “exemplary” means “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not to be construed as necessarily preferred or advantageous over other embodiments. Also, depending on the context of discussion herein, a singular term may include the corresponding plural forms and a plural term may include the corresponding singular form. Similarly, a hyphenated term (e.g., “two-dimensional,” “pre-determined,” “pixel-specific,” etc.) may be occasionally interchangeably used with a corresponding non-hyphenated version (e.g., “two dimensional,” “predetermined,” “pixel specific,” etc.), and a capitalized entry (e.g., “Counter Clock,” “Row Select,” “PIXOUT,” etc.) may be interchangeably used with a corresponding non-capitalized version (e.g., “counter clock,” “row select,” “pixout,” etc.). Such occasional interchangeable uses shall not be considered inconsistent with each other.
It is further noted that various figures (including component diagrams) shown and discussed herein are for illustrative purposes only, and are not drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, if considered appropriate, reference numerals have been repeated among the figures to indicate corresponding and/or analogous elements.
The terminology used herein is for the purpose of describing some example embodiments only and is not intended to be limiting of the claimed subject matter. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The terms “first,” “second,” etc., as used herein, are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless explicitly defined as such. Furthermore, the same reference numerals may be used across two or more figures to refer to parts, components, blocks, circuits, units, or modules having the same or similar functionality. Such usage is, however, for simplicity of illustration and ease of discussion only; it does not imply that the construction or architectural details of such components or units are the same across all embodiments or such commonly-referenced parts/modules are the only way to implement some of the example embodiments disclosed herein.
It will be understood that when an element or layer is referred to as being on, “connected to” or “coupled to” another element or layer, it can be directly on, connected or coupled to the other element or layer or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly connected to” or “directly coupled to” another element or layer, there are no intervening elements or layers present. Like numerals refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this subject matter belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, the term “module” refers to any combination of software, firmware and/or hardware configured to provide the functionality described herein in connection with a module. For example, software may be embodied as a software package, code and/or instruction set or instructions, and the term “hardware,” as used in any implementation described herein, may include, for example, singly or in any combination, an assembly, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, but not limited to, an integrated circuit (IC), system on-a-chip (SoC), an assembly, and so forth.
The subject matter disclosed herein provides an efficient hardware logic architecture in an NPU that supports N:M fine-grained structured sparsity for N:M=1:4, 4:8 and 2:8. Additionally, the subject matter disclosed herein provides a reconfigurable sparse logic architecture that supports N:M={1:4, 2:4, 2:8, 4:8} fine-grained structured sparsity. That is, a single sparse NPU logic architecture may be reconfigured to efficiently support four different N:M sparsity modes. For example, for N:M=2:8 or for 1:4, the NPU architectures disclosed herein provide a speed-up of 4×. Further, the sizes of the activation buffers and the complexity of the multiplexers may be reduced in the N:M sparse logic architectures disclosed herein at a cost of a demultiplexer and an additional adder tree per multiplier unit, but with an overall power reduction and an area efficiency that may be improved by greater than 3.5 times.
While conventional N:M sparsity clusters M weights along a reducible dimension (the input channel for a convolution operation, or a column vector of the weight matrix in a GEMM operation), the subject matter disclosed herein also clusters M weights from both the reducible dimension and an irreducible dimension (an output channel or output pixel for a convolution operation, or a row vector of the weight matrix in a GEMM operation).
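A minimal illustration of the distinction, under assumed indexing and names (not taken from the figures): a group of M=8 weights may be gathered either from eight positions along the reducible (input-channel) dimension, or from two output rows by four input channels, thereby spanning the irreducible dimension as well.

```python
# Illustrative only: two ways a group of M = 8 weights could be gathered from a
# (output_channels x input_channels) weight matrix before applying a 2:8 pattern.
import numpy as np

def group_reducible(w: np.ndarray, row: int, col: int) -> np.ndarray:
    """Eight weights from one output row, eight consecutive input channels."""
    return w[row, col:col + 8]

def group_mixed(w: np.ndarray, row: int, col: int) -> np.ndarray:
    """Eight weights spanning two output rows (irreducible dim) x four input channels."""
    return w[row:row + 2, col:col + 4].reshape(-1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w = rng.standard_normal((4, 16))
    print(group_reducible(w, 0, 0).shape)   # (8,)
    print(group_mixed(w, 0, 0).shape)       # (8,)
```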
The weight values of a trained neural network are fixed known values whereas activation values depend on inputs to the neural network and, therefore, vary. The weight values of a trained neural network may be dense or, alternatively, may be pruned and then compressed to form dense weight values. The weight values may be arranged in an N:M fine-grain structured sparse arrangement.
indicated at 205 means that, as only N elements out of M weights in C channels are kept, the channel size of the weight tensor shrinks from C to C×N/M.
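The following sketch, using an assumed (hypothetical) compressed layout rather than the disclosed storage format, shows how keeping N of every M weights shrinks the channel dimension from C to C×N/M while metadata records the position of each kept weight within its group.

```python
# Sketch with an assumed compressed layout (not the disclosed format): an
# N:M-sparse row of length C becomes C*N/M kept values plus metadata giving the
# position of each kept value within its group of M.
import numpy as np

def compress_n_of_m(row: np.ndarray, n: int, m: int):
    """Split an N:M-sparse row of length C into kept values and group-local indices."""
    values, indices = [], []
    for group in row.reshape(-1, m):                          # groups of M channels
        keep = np.sort(np.argsort(np.abs(group))[::-1][:n])   # positions of the N kept weights
        values.extend(group[keep])
        indices.extend(keep)
    return np.array(values), np.array(indices)

if __name__ == "__main__":
    row = np.array([0.0, 1.5, 0.0, 0.0, -2.0, 0.0, 0.0, 0.0])   # 1:4 sparse, C = 8
    vals, meta = compress_n_of_m(row, n=1, m=4)
    print(vals, meta)    # two kept values: channel count shrinks from 8 to 8 * 1/4 = 2
```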
The respective inputs to a 4-to-1 activation multiplexer (AMUX) are each connected to a corresponding ABUF register (REG) of a four-register ABUF. The output of the AMUX is input to a multiplier MULT. The AMUX may be controlled by a control unit (not shown) to select a particular ABUF register based on, for example, a weight zero-bit mask or weight metadata. The routing logic depicted in
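As a hedged software analogy (the mask format and function name are assumptions, not the disclosed control logic), the select value for such a 4-to-1 AMUX could be derived directly from the weight zero-bit mask of a group:

```python
# Hypothetical control sketch: deriving 4-to-1 AMUX select values from a weight
# zero-bit mask for one group of M = 4 weights (1:4 mode). The mask bit of each
# kept weight selects the ABUF register holding the matching activation channel.
def amux_selects_from_mask(mask: list[int], n: int) -> list[int]:
    """Return the ABUF register indices of the N nonzero weights in the group."""
    selects = [i for i, bit in enumerate(mask) if bit]   # positions of kept weights
    assert len(selects) == n, "mask must keep exactly N weights"
    return selects

if __name__ == "__main__":
    abuf = [10.0, 11.0, 12.0, 13.0]          # four activation registers
    mask = [0, 0, 1, 0]                      # 1:4 mode - only the third weight is kept
    sel, = amux_selects_from_mask(mask, n=1)
    print(abuf[sel])                         # AMUX output routed to the multiplier
```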
The routing logic depicted in
In the example configuration of routing logic depicted in
The maximum channel range Cw for the routing logic of
In the example configuration of routing logic depicted in
The maximum channel range Cw for the routing logic of
In the example configuration of routing logic depicted in
The maximum channel range Cw for the routing logic of
The architecture of the NPU 700 may be used for fine-grain structured weight sparsity arrangements of 1:4, 2:4, 2:8 and 4:8 by selective placement of activation values in the registers of the ABUF. Referring to
When the NPU 700 is configured for a 2:8 fine-grain structured weight sparsity, the connections between the ABUF, AMUX and MULT arrays are depicted above the N:M=2:8 configuration. Sixteen activation channels are each input to a corresponding ABUF array register. The AMUX array multiplexers are controlled by a controller (not shown) to select an appropriate ABUF register based, for example, on a weight zero-bit mask or weight metadata associated with the 2:8 fine-grain structured weight sparsity values.
When the NPU 700 is configured for a 1:4 fine-grain structured weight sparsity, the connections between the ABUF, AMUX and MULT arrays are depicted above the N:M=1:4 configuration. The N:M=1:4 configuration is the same as the N:M=2:8 configuration. For the N:M=1:4 configuration, 16 activation channels are each input to a corresponding ABUF array register. The AMUX array multiplexers are controlled by a controller (not shown) to select an appropriate ABUF register based, for example, on a weight zero-bit mask or weight metadata associated with the 1:4 fine-grain structured weight sparsity values.
When the NPU 700 is configured for a 2:4 fine-grain structured weight sparsity, the connections between the ABUF, AMUX and MULT arrays are depicted above the N:M=2:4 configuration. For the N:M=2:4 configuration, eight activation channels are each input to a corresponding ABUF array register as indicated. The AMUX array multiplexers are controlled by a controller (not shown) to select an appropriate ABUF register based, for example, on a weight zero-bit mask or weight metadata associated with the 2:4 fine-grain structured weight sparsity values.
When the NPU 700 is configured for a 4:8 fine-grain structured weight sparsity, the connections between the ABUF, AMUX and MULT arrays are depicted above the N:M=4:8 configuration. For the N:M=4:8 configuration, eight activation channels are each input to a corresponding ABUF array register as indicated. More specifically, the two topmost multipliers have access to channels 1-6. The topmost multiplier has access to channels 1-5, and the next multiplier down has access to channels 2-6, which corresponds to the NPU configuration depicted in
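The mode parameters discussed above may be summarized as follows; the dictionary and function names are illustrative, and only the M/N ratios (4× for 1:4 and 2:8, 2× for 2:4 and 4:8) come from the description herein:

```python
# Illustrative parameters only (summarizing the modes discussed above, not the
# exact wiring of the figures): per-mode group size M, nonzeros N, and the
# dense-work / sparse-work ratio M/N that bounds the attainable speed-up.
SPARSITY_MODES = {
    "1:4": {"n": 1, "m": 4},
    "2:4": {"n": 2, "m": 4},
    "2:8": {"n": 2, "m": 8},
    "4:8": {"n": 4, "m": 8},
}

def speedup(mode: str) -> float:
    cfg = SPARSITY_MODES[mode]
    return cfg["m"] / cfg["n"]      # 4x for 1:4 and 2:8, 2x for 2:4 and 4:8

if __name__ == "__main__":
    for mode in SPARSITY_MODES:
        print(mode, f"{speedup(mode):.0f}x")
```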
Table 1 sets forth hardware costs and speed up benefits for the four different NPU weight-only (W-Only) structured-sparsity core architectures disclosed herein. The W-Only 1:4 sparse NPU logic design depicted in
Table 1 also shows the approximate computational speed up provided by each of the different NPU logic designs disclosed herein for a random weight sparsity mode of approximately 80% sparsity.
[Table 1 values for the 1:4, 2:4, 2:8 and 4:8 W-Only sparse NPU architectures: 4, 3, 7, 5; 4, 2, 4, 2; speed up: 4×, 2×, 4×, 2×]
The left side of
The left side of
The reduction in ABUF size described herein is applicable to all four NPU weight-only (W-Only) structured-sparsity core architectures disclosed herein. Table 2 sets forth hardware costs and speed up benefits for these four architectures. The hardware costs and benefits are also shown for reduced ABUF sizes, that is, for ABUFs that have been reconfigured from one depth dimension to two depth dimensions, as indicated in the “→2D” columns immediately to the right of the different N:M sparsity modes.
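The two-row arrangement may be sketched as follows, under the explicit assumption (made only for this illustration, and not stated as a constraint of the disclosure) that each row of four registers holds the activation matching exactly one nonzero weight of a 2:8 group, so that each of the two output multiplexers selects one activation:

```python
# Sketch under an explicit assumption (made only for this illustration): the 2:8
# group is arranged so each row of four holds exactly one nonzero weight, letting
# two output multiplexers each pick one activation from their row.
def two_row_select(activations8, row0_idx: int, row1_idx: int):
    """Split eight activations into two rows of four and select one from each row."""
    row0, row1 = activations8[:4], activations8[4:]
    return row0[row0_idx], row1[row1_idx]     # outputs of the two output multiplexers

if __name__ == "__main__":
    acts = [float(i) for i in range(8)]
    w = [0.0, 0.7, 0.0, 0.0, 0.0, 0.0, -0.3, 0.0]         # 2:8, one nonzero per row
    a0, a1 = two_row_select(acts, row0_idx=1, row1_idx=2)  # metadata-driven selects
    print(w[1] * a0 + w[6] * a1)                           # 0.7*1 + (-0.3)*6 = -1.1
```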
The interface 1040 may be configured to include a wireless interface that is configured to transmit data to or receive data from, for example, a wireless communication network using an RF signal. The wireless interface 1040 may include, for example, an antenna. The electronic system 1000 also may be used in a communication interface protocol of a communication system, such as, but not limited to, Code Division Multiple Access (CDMA), Global System for Mobile Communications (GSM), North American Digital Communications (NADC), Extended Time Division Multiple Access (E-TDMA), Wideband CDMA (WCDMA), CDMA2000, Wi-Fi, Municipal Wi-Fi (Muni Wi-Fi), Bluetooth, Digital Enhanced Cordless Telecommunications (DECT), Wireless Universal Serial Bus (Wireless USB), Fast low-latency access with seamless handoff Orthogonal Frequency Division Multiplexing (Flash-OFDM), IEEE 802.20, General Packet Radio Service (GPRS), iBurst, Wireless Broadband (WiBro), WiMAX, WiMAX-Advanced, Universal Mobile Telecommunication Service—Time Division Duplex (UMTS-TDD), High Speed Packet Access (HSPA), Evolution Data Optimized (EVDO), Long Term Evolution—Advanced (LTE-Advanced), Multichannel Multipoint Distribution Service (MMDS), Fifth-Generation Wireless (5G), Sixth-Generation Wireless (6G), and so forth.
Embodiments of the subject matter and the operations described in this specification may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification may be implemented as one or more computer programs, i.e., one or more modules of computer-program instructions, encoded on computer-storage medium for execution by, or to control the operation of data-processing apparatus. Alternatively or additionally, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer-storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial-access memory array or device, or a combination thereof. Moreover, while a computer-storage medium is not a propagated signal, a computer-storage medium may be a source or destination of computer-program instructions encoded in an artificially-generated propagated signal. The computer-storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices). Additionally, the operations described in this specification may be implemented as operations performed by a data-processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
While this specification may contain many specific implementation details, the implementation details should not be construed as limitations on the scope of any claimed subject matter, but rather be construed as descriptions of features specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described herein. Other embodiments are within the scope of the following claims. In some cases, the actions set forth in the claims may be performed in a different order and still achieve desirable results. Additionally, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
As will be recognized by those skilled in the art, the innovative concepts described herein may be modified and varied over a wide range of applications. Accordingly, the scope of claimed subject matter should not be limited to any of the specific exemplary teachings discussed above, but is instead defined by the following claims.
This application claims the priority benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application Serial Nos. 63/408,827, filed on Sep. 21, 2022, and 63/408,828, filed Sep. 21, 2022, the disclosures of which are incorporated herein by reference in their entirety.