The present disclosure relates generally to a programmable spatial array that can rapidly perform different types of matrix decomposition.
This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it may be understood that these statements are to be read in this light, and not as admissions of prior art.
Integrated circuit devices are found in numerous electronic devices, many of which may perform machine learning or use wireless communication. A type of computation known as a matrix decomposition is widely used in wireless communication, machine learning, and other areas. For instance, multiple-input multiple-output (MIMO) wireless communication in 5G wireless systems, multivariate linear regressions in machine learning, systems of linear equations, matrix inversions and determinant calculations, and many others involve performing matrix decompositions. Different types of matrix decompositions include LU decomposition, QR decomposition, and Cholesky decomposition.
Matrix decompositions are more complicated than matrix multiplication. The latter may generally use multiplication and addition operations and may have little or no data dependency among operations. Matrix decompositions, on the other hand, may have many data dependencies. This may cause one operation to have to wait for the result of another operation to be ready, which makes it difficult to handle data in parallel. Moreover, matrix decomposition usually has arithmetic operations other than multiplication, such as division and square root. As a consequence, an integrated circuit that performs matrix decompositions may use specialized circuitry that is quite complex and may support just one type of matrix decomposition.
Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:
One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “including” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “some embodiments,” “embodiments,” “one embodiment,” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Furthermore, the phrase A “based on” B is intended to mean that A is at least partially based on B. Moreover, the term “or” is intended to be inclusive (e.g., logical OR) and not exclusive (e.g., logical XOR). In other words, the phrase A “or” B is intended to mean A, B, or both A and B. Moreover, this disclosure describes various data structures, such as instructions for an instruction set architecture. These are described as having certain domains (e.g., fields) and corresponding numbers of bits. However, it should be understood that these domains and sizes in bits are meant as examples and are not intended to be exclusive. Indeed, the data structures (e.g., instructions) of this disclosure may take any suitable form.
An integrated circuit, such as an application specific integrated circuit (ASIC) or a programmable logic device (PLD) like a field programmable gate array (FPGA), may be part of an electronic device that performs wireless communications, machine learning, or many other tasks. These tasks may involve performing matrix decompositions. Indeed, matrix decomposition is widely used in wireless communication, machine learning, and other areas. For instance, multiple-input multiple-output (MIMO) wireless communication in 5G wireless systems, multivariate linear regressions in machine learning, systems of linear equations, matrix inversions and determinant calculations, and many others involve performing matrix decompositions. Different types of matrix decompositions include LU decomposition, QR decomposition, and Cholesky decomposition.
In contrast to single-purpose architectures that may support only one type of matrix decomposition, this disclosure provides a programmable spatial array processor that can be programmed to compute a variety of different types of matrix decompositions. The programmable spatial array processor has a two-dimensional upper triangular processing element (PE) array that acts as a high-throughput engine. Each PE executes under instructions that provide the programmability to support different modes.
As noted above, matrix decompositions are more complicated than matrix multiplication. The latter may generally use multiplication and addition operations and may have little or no data dependency among operations. Matrix decompositions, on the other hand, may have many data dependencies. This may cause one operation to have to wait for the result of another operation to be ready, which makes it difficult to handle data in parallel. Moreover, matrix decomposition usually has arithmetic operations other than multiplication, such as division and square root.
The programmable spatial array processor of this disclosure may use a control scheme that mitigates the challenges of data dependency among the various PEs in solving matrix decompositions. To this end, an Instruction Share and Propagation (ISP) scheme may control all of the PEs efficiently. Instructions may be shared by certain PEs and propagated through them. This may substantially reduce the size or complexity of the instruction memory. Indeed, instructions may flow through the array in a systolic-like way, just like the data flow. All non-diagonal PEs may share the same instructions. This may (a) reduce the number of instruction memories from roughly N²/2 (one per PE of the triangular array) to 2 (e.g., one shared by the diagonal PEs and one shared by the non-diagonal PEs) and (b) allow instructions to transfer between adjacent PEs so that a long control path may be avoided. Furthermore, the programmability of the programmable spatial array processor may enable a fast switch between two different types of matrix operation. The array of the programmable spatial array processor may simply be fed with new instructions for the new matrix operation. Additional reset or reconfiguration time may be avoided, enabling transitions to computing different types of matrix decomposition to occur rapidly and seamlessly.
In addition to matrix decompositions, the programmable spatial array processor may also support widely used matrix operations like back substitution, matrix-vector multiplication, multiplication of a matrix by its transpose (A^T·A), and so on. Its programmability even enables customized functions. What is more, the programmable spatial array processor may have a triangular arrangement that, compared to a square array, may cut hardware resource usage nearly in half.
With this in mind,
Designers may implement their high-level designs using design software 14, such as a version of Intel® Quartus® Prime by INTEL CORPORATION. The design software 14 may use a compiler 16 to convert the high-level program into a lower-level description. The compiler 16 may provide machine-readable instructions representative of the high-level program to a host 18 and the integrated circuit device 12. The host 18 may include any suitable processing circuitry and may receive a host program 22 which may be implemented by the kernel programs 20. To implement the host program 22, the host 18 may communicate instructions from the host program 22 to the integrated circuit device 12 via a communications link 24, which may be, for example, direct memory access (DMA) communications or peripheral component interconnect express (PCIe) communications. While the techniques described above refer to the application of a high-level program, in some embodiments, a designer may use the design software 14 to generate and/or to specify a low-level program, such as the low-level hardware description languages described above. Further, in some embodiments, the system 10 may be implemented without a separate host program 22. Moreover, in some embodiments, the techniques described herein may be implemented in circuitry as hardened IP that is not programmed into a programmable logic device. Thus, embodiments described herein are intended to be illustrative and not limiting.
In some embodiments, the kernel programs 20 may enable configuration of a programmable spatial array processor 26 on the integrated circuit device 12. Indeed, the programmable spatial array processor 26 may represent a circuit design of the kernel program 20 that is configured onto the integrated circuit device 12 (e.g., formed in soft logic). In some embodiments, the programmable spatial array processor 26 may be partially or fully formed in hardened circuitry (e.g., application-specific circuitry of the integrated circuit 12 that is not configurable as programmable logic). The host 18 may use the communication link 24 to cause the programmable spatial array processor 26 to decompose matrices according to any suitable matrix decomposition type. For example, the programmable spatial array processor 26 may be used to perform matrix decomposition to detect or transmit a signal for multiple-input multiple-output (MIMO) communication via antennas 28.
The programmable spatial array processor 26 may be a component included in a data processing system 40, as shown in
In one example, the data processing system 40 may be part of a data center that processes a variety of different requests. For instance, the data processing system 40 may receive a data processing request via the network interface 46 to perform encryption, decryption, machine learning, video processing, voice recognition, image recognition, data compression, database search ranking, bioinformatics, network security pattern identification, spatial navigation, digital signal processing, or some other specialized task. Some or all of the components of the data processing system 40 may be virtual machine components running on physical circuitry (e.g., managed by one or more hypervisors or virtual machine managers). Whether physical components or virtual machine components, the various components of the data processing system 40 may be located in the same location or different locations (e.g., on different boards, in different rooms, at different geographic locations). Indeed, the data processing system 40 may be accessible via a computing service provider (CSP) that may provide an interface to customers to use the data processing system 40 (e.g., to run programs and/or perform acceleration tasks) in a cloud computing environment.
The input data 68 may take any suitable form, including a matrix or vector format with throughput of one matrix row (column) per clock cycle. A block of the input data 68 may contain a batch of matrices to utilize the pipeline capability of the PE array 76 and improve average throughput. Any suitable quantity of matrices or vectors may be used in a batch (e.g., 2, 3, 4, 5, 6, 7, 8, 16, 32, 64, 100, 128, 200, 256, 500, 512, 1000, 1024, or more or fewer). For instance, 32 consecutive matrices may form a batch; in this case, the batch size is 32.
For example, as shown in
The core part of the programmable spatial array processor 26 is the two-dimensional processing element (PE) array 76. As shown in
The M PEs 112 mainly perform multiplication and accumulation (MAC) operations, and they form the upper triangular part of a square N-by-N array, where N may be any suitable number. The M PEs 112 may be considered an internal processing element type of the processing element array 76, since they are bordered on the left and right by the D PEs 110 and the V PEs 114. Multiplication and accumulation (MAC) operations are abundant in matrix operations. The V PEs 114, located at the rightmost column, handle vector-related operations like matrix-vector multiplication. The V PEs 114 may have the same or a similar internal hardware structure as the M PEs 112. The main difference between the V PEs 114 and the M PEs 112 is that they run under different instructions (with different behaviors). The D PEs 110 may include more compute resources than the M PEs 112, since the diagonal elements may perform more complicated computations than non-diagonal elements in most matrix decomposition cases. As discussed further below, the D PEs 110 may include some MAC units and other math function units (such as inverse square root), or may include units that perform certain specific operations.
The PE array 76 structure may achieve a relatively high operating clock frequency, since each PE 110, 112, or 114 may only connect with adjacent PEs 110, 112, or 114. This means that there may be no long routing path, or that the routing paths between PEs 110, 112, and 114 may be sufficiently similar so as to have similar (e.g., equal) latencies. This structure may also scale up relatively easily to a large array size.
Example architectures of the PEs 110, 112, and 114 will be described below. It should be appreciated that these are intended to be illustrative and not exhaustive. Indeed, the PEs 110, 112, and 114 may take any suitable form and have any suitable architectures.
Multiply-accumulate (M) PE 112 Architecture. One example architecture of an M PE 112 appears in
The ALU 164 may perform arithmetic operations such as add, multiply, multiply-add, multiply-accumulate, and so on. It may be implemented in complex form (named CMAC or CALU) to support complex number arithmetic that is widely used in wireless communication systems. The inputs of the ALU 164 can have multiple sources, such as input ports, the register file (RF) 172, or the data queue 174. The input and output interfaces shown in
The data queue 174 is used to buffer the upper input data, since the left input data may arrive later than the upper input data. One way to handle this delay gap is to feed the input data in a staggered way, as shown in
It can be observed that the data queue method shown in
Diagonal (D) PE 110 Architecture. Since the D PEs 110 may handle more complicated calculations than an M PE 112, the D PEs 110 may have more functional units. In an example, shown in
In the example architecture of the D PE 110 shown in
Other operations like square root and division can be calculated using the Isqrt result (e.g., √x = x·Isqrt(x) and 1/x = Isqrt(x)·Isqrt(x)).
Multiple issues in a D PE 110 can work in a pipelined manner to achieve high throughput. Take Cholesky decomposition, for example. The process includes inverse square root (Isqrt) from the Isqrt slot 196 and multiplications in the issue slot 198, which use the result from the Isqrt slot 196. Using this pipeline scheme, the two issue slots 196 and 198 can work in parallel. An example is shown in
As previously discussed with respect to
Accordingly, a scheme referred to as Instruction Share and Propagation (ISP) may overcome some of the challenges mentioned above (e.g., avoiding such high fan-out and high memory utilization problems). The ISP design is made possible because the M PEs 112, and likewise the D PEs 110, each generally execute the same or similar programs, with only a time offset and slight code differences. For instance, in a Cholesky decomposition procedure, every M PE 112 may execute the same first instruction but at a different start time, and almost the same remaining instructions, except that some of them may be ignored, as shown in
As shown in
As can be seen, the start time of instruction execution of each M PE 112 is different. As such, the delay of instruction arrival to each M PE 112 will be different and varies among functions. For example, the instruction delay between two adjacent M PEs 112 in one row may be 1 or 2 cycles (or more, as desired). The instruction delay between two adjacent rows of M PEs 112 could be many more cycles. As shown in
There may also be a Time to Live (TTL) domain in each instruction indicating whether this instruction should be executed, as shown in
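As a rough illustration, the following MATLAB-style sketch models an instruction propagating along a chain of PEs, with each PE executing the instruction only while the TTL remains positive and decrementing it before forwarding. The field names (op, ttl) and the exact decrement ordering are illustrative assumptions, not the disclosed instruction format.

% Illustrative sketch (not the disclosed hardware behavior): an
% instruction propagates through a chain of PEs; each PE executes it
% only while the TTL is positive, then decrements the TTL and
% forwards the instruction to its neighbor.
function ttl_propagation_sketch(instr, num_pes)
    for pe = 1:num_pes
        if instr.ttl > 0
            fprintf('PE %d executes %s\n', pe, instr.op);
        else
            fprintf('PE %d treats %s as NOP\n', pe, instr.op);
        end
        instr.ttl = instr.ttl - 1;  % decrement before forwarding
    end
end

For example, ttl_propagation_sketch(struct('op','mac','ttl',3), 5) would have PEs 1 through 3 execute the instruction while PEs 4 and 5 ignore it.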
Thus, input data of length N, in the form of one row or column of a matrix, may be fed into the N FIFOs 310, and data read from the N FIFOs 310 may be sent to the PE array 76 as one matrix row or column. The write and read control blocks 314, 316, and 318 are used to generate FIFO access signals (e.g., 330, 332, 334, 336, 338, and 340). Some specific data, like an identity matrix, can also be generated by the read control block 318. The memory 320 may store the FIFO access patterns of each operation (e.g., each type of matrix decomposition). The memory 320 may store read patterns. Table 1 provides one example of a read pattern.
Table 2 illustrates one example instruction structure for the instructions of Table 1.
A read control block 360 is used to make sure that the outputs of all of the FIFOs 356 are aligned. For example, the write control block 358 may receive instructions from delay buffer write instruction memory 362 indicating access patterns for the current application and from the instruction decoder 350 indicating, for example, matrix size (size_matrix), batch size (size_batch), and the function that was performed (Function). The write control block 358 may generate write enable (wr_en) signals to write into the FIFOs 356 and a start read (start_rd) signal for the read control block 360. The write control block 358 may trigger some or all of these signals upon receipt of a start of packet (sop) signal corresponding to the input data 352. The read control block 360 may use the start_rd signal from the write control block 358 and instructions from a delay buffer read instruction memory 364 indicating access patterns for the current application. The read control block 360 may also use the instruction decoder 350 indicating, for example, matrix size (size_matrix), batch size (size_batch), and the function that was performed (Function). Monitor circuitry 366 may provide error signals.
Example instructions that may be stored in the delay buffer write instruction memory 362 are shown below in Table 3. One such instruction can serve for a write process for one batch of matrices.
Instructions in the delay buffer read instruction memory 364 may be organized as shown below in Table 4.
The instructions may be described as shown below in Table 5.
In this way, the delay buffer 82 may use the N FIFOs 356 to buffer both matrices and vectors. The input data 352 from the PE array 76 arrive in a staggered pattern, which is different from that of the main buffer 70. The write control block 358 is responsible for writing the data into alignment addresses of all the FIFOs 356. The read control block 360 causes data to be read from the FIFOs 356 and sent to the output port as output data 354 or looped back to the main buffer 70 as the loop-back intermediate data 90.
ISA for an M PE 112. The behavior of each M PE 112 is controlled by the instruction it receives. An instruction contains the arithmetic operation to be performed and the routing selection of each signal.
Assembly language for an M PE 112. To display instructions in a more readable way, the instructions may be visualized in an assembly-like language: Assembly for M PE (ASMMPE). This is an assembly language designed for matrix decomposition using the PE array 76.
Below, Table 10 provides various keywords that may be used by the ASMMPE language.
ISA for a D PE 110. The behavior of each D PE 110 is also controlled by the instruction it receives. An instruction for the D PEs 110 includes five sub-instructions, where each sub-instruction belongs to one issue slot. As may be appreciated, when the D PEs 110 include more or fewer issue slots, there may be correspondingly more or fewer sub-instructions. As mentioned above, multiple issue slots work simultaneously to achieve a pipeline effect. Each issue slot runs under only its corresponding sub-instruction, and time offsets among multiple issue slots may also be specified by the program. Table 11, Table 12, Table 13, and Table 14 show an example instruction structure of the four kinds of issue slot discussed above with reference to
Each issue slot instruction may also include a time-to-live (TTL) domain to indicate whether that instruction should be executed or ignored (e.g., as NOP). For example, the TTL of an instruction for a D PE 110 may have a data structure as described in Table 15.
The programmable spatial array processor 26 may be programmable to perform a wide variety of types of matrix decompositions. This section will describe the following types of matrix decompositions:
The D PE 110 and M PE 112 may have a dataflow as illustrated in
Cholesky decomposition. Cholesky decomposition aims to find a lower triangular matrix L that satisfies L·L^H = A, where A is a given positive definite Hermitian matrix:
A = L·L^H
The procedure of Cholesky decomposition is (R=A):
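Because the procedure itself is not reproduced here, the following MATLAB sketch shows a standard column-by-column Cholesky factorization consistent with the description above. It is a generic textbook formulation under the stated assumption that A is positive definite Hermitian, not the exact program run on the PE array 76.

% Minimal sketch of Cholesky decomposition, L*L' = A, for a positive
% definite Hermitian matrix A. A generic reference formulation, not
% the disclosed PE-array program.
function L = cholesky_sketch(A)
    N = size(A, 1);
    L = zeros(N);
    for j = 1:N
        % Diagonal entry: needs an (inverse) square root, the kind of
        % operation handled by a D PE.
        L(j,j) = sqrt(real(A(j,j)) - sum(abs(L(j,1:j-1)).^2));
        for i = j+1:N
            % Off-diagonal entries: multiply-accumulate work, the kind
            % of operation handled by the M PEs.
            L(i,j) = (A(i,j) - L(i,1:j-1) * L(j,1:j-1)') / L(j,j);
        end
    end
end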
LU decomposition. The programmable spatial array processor 26 can also be used to perform LU decomposition. LU (lower-upper) decomposition factors a matrix A as the product of a lower triangular matrix L and an upper triangular matrix U:
A = L·U
Example Matlab code of LU decomposition is shown below:
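The referenced MATLAB listing is not reproduced here; the sketch below is a standard Doolittle-style LU factorization without pivoting that matches the A = L·U description above, offered as an assumed stand-in for the original code.

% Minimal sketch of LU decomposition without pivoting: A = L*U with
% L unit lower triangular and U upper triangular. A stand-in for the
% elided example listing.
function [L, U] = lu_sketch(A)
    N = size(A, 1);
    L = eye(N);
    U = A;
    for k = 1:N-1
        for i = k+1:N
            L(i,k) = U(i,k) / U(k,k);          % elimination multiplier
            U(i,:) = U(i,:) - L(i,k) * U(k,:); % zero out U(i,k)
        end
    end
end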
Cholesky-based minimum mean square error (MMSE). The programmable spatial array processor 26 can also be used to perform Cholesky-based MMSE. An example procedure for performing Cholesky-based MMSE is provided below:
Description of input signals:
x = (H^H·H + σ²·I)^−1·H^H·Y
To implement it on the PE array 76, the procedure is divided into 4 stages:
Stage 1 (pre-filtering): A = H^H·H, R = A + σ²·I, Z = H^H·Y
Stage 2 (Cholesky decomposition): R = L·L^H
Stage 3 (back substitution and matrix-vector multiplication): V = L^−1, VZ = V·Z
Stage 4 (final result): x = V^H·(VZ)
Reviewing each stage of Cholesky-based MMSE, pre-filtering may take place in the PE array 76 as illustrated by
After pre-filtering, the second stage of Cholesky-based MMSE is Cholesky decomposition. This may take place in the same way described above. After Cholesky decomposition, Cholesky-based MMSE continues with back substitution and V·Z. Back substitution is used to solve V = L^−1, in which L is an upper triangular matrix. V·Z is a matrix-vector multiplication of the matrix V and the vector Z:
V = L^−1
VZ = V·Z
The procedure of back substitution may be described as:
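The procedure is not reproduced here; the following MATLAB sketch shows a conventional back substitution that computes the inverse of a triangular matrix, written for the upper triangular case mentioned above (with the triangular factor denoted U in the code). It is a generic formulation, not the disclosed program.

% Minimal sketch of back substitution computing V = inv(U) for an
% upper triangular matrix U; a generic formulation standing in for
% the elided procedure.
function V = back_substitution_sketch(U)
    N = size(U, 1);
    V = zeros(N);
    for j = 1:N
        V(j,j) = 1 / U(j,j);        % diagonal entries invert directly
        for i = j-1:-1:1
            % Each entry depends on entries already solved below it.
            V(i,j) = -(U(i,i+1:j) * V(i+1:j,j)) / U(i,i);
        end
    end
end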
The fourth stage of Cholesky-based MMSE is to calculate V^H·(VZ):
x = V^H·(VZ)
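For reference, the four stages chain together as in the following MATLAB sketch, which uses built-in operations in place of the PE-array implementation; H (the channel matrix), Y (the received signal), and sigma2 (the noise power σ²) are assumed inputs.

% End-to-end sketch of Cholesky-based MMSE using MATLAB built-ins in
% place of the PE array; H, Y, and sigma2 are assumed inputs.
A  = H' * H;                       % stage 1: pre-filtering
R  = A + sigma2 * eye(size(A,1));
Z  = H' * Y;
L  = chol(R, 'lower');             % stage 2: Cholesky, R = L*L'
V  = inv(L);                       % stage 3: back substitution
VZ = V * Z;                        %          and matrix-vector product
x  = V' * VZ;                      % stage 4: x = V^H * (VZ)
% Expanding: x = inv(L)'*inv(L)*Z = inv(R)*Z = inv(H'*H + sigma2*I)*H'*Y.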
Givens-Rotation QR based MMSE. Givens Rotation based QR decomposition (GR-QRD) uses a series of Givens rotation operations to eliminate the entries below the diagonal, producing an upper triangular matrix R. One Givens rotation can zero the lower element of a 2×1 vector:
The α and β may be calculated as:
α = a/√(a² + b²)
β = b/√(a² + b²)
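As a concrete (hypothetical) numerical check, with a = 3 and b = 4, √(a² + b²) = 5, so α = 0.6 and β = 0.8, and the rotation maps the vector [3; 4] to [5; 0], zeroing the lower element as intended.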
The entire procedure of QR decomposition may be described as:
Rotate the 1st and 2nd row of A to zero A(2,1).
Q1*·A = R1
Then rotate the 1st and 3rd row of A to zero A(3,1).
Q2*·Q1*·A = R2
The last step is to rotate the (N−1)th and Nth rows of A to zero A(N,N−1).
Qm*···Q2*·Q1*·A = R
A = Q·R (Q = Q1·Q2···Qm)
The MATLAB code of above procedure is:
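The referenced MATLAB code is not reproduced here; the sketch below is a generic real-valued Givens-rotation QR routine following the row-pairing order described above (a complex-valued version would use conjugated rotation coefficients). It is an assumed stand-in rather than the original listing.

% Minimal sketch of Givens-rotation QR (real case): zero each
% subdiagonal entry by rotating row pairs, leaving R upper triangular.
% A stand-in for the elided MATLAB listing.
function R = givens_qr_sketch(A)
    N = size(A, 1);
    R = A;
    for j = 1:N-1
        for i = j+1:N
            a = R(j,j);  b = R(i,j);
            d = sqrt(a^2 + b^2);
            alpha = a / d;  beta = b / d;
            G = [alpha, beta; -beta, alpha];  % 2x2 Givens rotation
            R([j i], :) = G * R([j i], :);    % zeroes R(i,j)
        end
    end
end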
In many cases, there is no need to obtain the Q matrix explicitly. For instance, the QRD based MMSE may include the following:
One question about QRD is how to get Q^H if there is no explicit Q calculation. The answer is that when a Givens rotation is performed on H^H·H + σ²·I, it should also be performed on H^H·Y simultaneously, as the following shows:
[Q·R, H^H·Y] →(Givens rotations)→ [R, Q^H·H^H·Y]
To increase the utilization rate of MAC resources and data throughput, GR-QRD may be performed using an interleaved batch mode.
Gram-Schmidt QR decomposition. GS (Gram-Schmidt) QR decomposition is a canonical and widely used matrix decomposition algorithm. The procedure is shown below:
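Since the procedure itself is not reproduced here, the following MATLAB sketch shows the classical Gram-Schmidt formulation of A = Q·R, offered as a textbook stand-in for the elided procedure.

% Minimal sketch of classical Gram-Schmidt QR decomposition, A = Q*R;
% a textbook formulation standing in for the elided procedure.
function [Q, R] = gram_schmidt_sketch(A)
    [m, n] = size(A);
    Q = zeros(m, n);
    R = zeros(n);
    for j = 1:n
        v = A(:,j);
        for i = 1:j-1
            R(i,j) = Q(:,i)' * A(:,j);  % projection onto q_i
            v = v - R(i,j) * Q(:,i);    % remove that component
        end
        R(j,j) = norm(v);
        Q(:,j) = v / R(j,j);            % normalize remaining component
    end
end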
Average throughput estimation. Tables 30 and 31 provide a rough estimate of the average throughput of the matrix decomposition examples discussed above. The parameters are defined as follows: the matrix size is N×N; the size of one batch (the number of matrices in one batch) is LenB; the gap between two consecutive batches is LenG clock cycles; and the delay of the multiply-accumulate operation in each M PE 112 is DAcc.
Various example embodiments, representing a non-limiting set of embodiments that may follow from this disclosure, are provided below.
EXAMPLE EMBODIMENT 1. A system comprising:
EXAMPLE EMBODIMENT 2. The system of example embodiment 1, wherein the array of processing elements has a triangular arrangement.
EXAMPLE EMBODIMENT 3. The system of example embodiment 1, wherein the array of processing elements comprises at least three different types of processing elements.
EXAMPLE EMBODIMENT 4. The system of example embodiment 3, wherein the at least three different types of processing elements comprise:
EXAMPLE EMBODIMENT 5. The system of example embodiment 3, wherein the instruction memory comprises:
EXAMPLE EMBODIMENT 6. The system of example embodiment 1, wherein the array of processing elements comprises:
EXAMPLE EMBODIMENT 7. The system of any of example embodiments 1-6, wherein the one processing element of the row of processing elements and the other processing elements of the row of processing elements do not contain
EXAMPLE EMBODIMENT 8. The system of any of example embodiments 1-6, wherein the first type of matrix decomposition comprises at least one of Cholesky decomposition, LU decomposition, Cholesky-based minimum mean square error (MMSE), Givens-Rotation QR based MMSE, and Gram-Schmidt QR decomposition, and wherein the second type of matrix decomposition comprises a different one of Cholesky decomposition, LU decomposition, Cholesky-based minimum mean square error (MMSE), Givens-Rotation QR based MMSE, and Gram-Schmidt QR decomposition.
EXAMPLE EMBODIMENT 9. The system of any of example embodiments 1-6, comprising host processing circuitry configurable to perform a task that involves performing the first type of matrix decomposition and cause the processing element array to be programmed with the first instructions.
EXAMPLE EMBODIMENT 10. The system of example embodiment 9, comprising a plurality of antennas, wherein the host processing circuitry and the programmable spatial array processing circuitry are configured to perform the first type of matrix decomposition or the second type of matrix decomposition to carry out a multiple-input multiple-output (MIMO) computation.
EXAMPLE EMBODIMENT 11. An article of manufacture comprising one or more tangible, non-transitory, machine readable media comprising instructions that, when executed by processing circuitry, cause the processing circuitry to:
EXAMPLE EMBODIMENT 12. The article of manufacture of example embodiment 11, wherein the instructions cause the processing circuitry to instruct the triangular spatial array of processing elements to perform the first matrix decomposition on a batch of matrices.
EXAMPLE EMBODIMENT 13. The article of manufacture of example embodiment 12, wherein the instructions cause the processing circuitry to:
EXAMPLE EMBODIMENT 14. The article of manufacture of example embodiment 11, wherein the instructions cause the processing circuitry to perform Cholesky decomposition, LU decomposition, Cholesky-based minimum mean square error (MMSE), Givens-Rotation QR based MMSE, or Gram-Schmidt QR decomposition as the first matrix decomposition and perform a different one of Cholesky decomposition, LU decomposition, Cholesky-based minimum mean square error (MMSE), Givens-Rotation QR based MMSE, or Gram-Schmidt QR decomposition as the second matrix decomposition.
EXAMPLE EMBODIMENT 15. The article of manufacture of any of example embodiments 11-14, wherein the instructions cause the processing circuitry to send an instruction having a data structure that includes an indication of a function to be performed and a time-to-live value, wherein the time-to-live value causes that instruction to be ignored once that instruction has propagated through more processing elements of the processing element array than indicated by the time-to-live value.
EXAMPLE EMBODIMENT 16. An integrated circuit comprising programmable spatial array processing circuitry comprising:
EXAMPLE EMBODIMENT 17. The integrated circuit of example embodiment 16, wherein the processing element array performs the first matrix decomposition or the second matrix decomposition on at least two of the batch of matrices at least partly in parallel.
EXAMPLE EMBODIMENT 18. The integrated circuit of example embodiments 16 or 17, wherein the processing element array comprises a processing element having a plurality of issue slots and a plurality of register files, wherein at least two of the plurality of issue slots are programmable to operate in parallel to perform part of the first matrix decomposition or the second matrix decomposition.
EXAMPLE EMBODIMENT 19. The integrated circuit of example embodiments 16 or 17, wherein the processing element array outputs results from the first matrix decomposition or the second matrix decomposition staggered in time such that results corresponding to a first matrix of the batch of matrices overlap in time with results corresponding to a second matrix of the batch of matrices.
EXAMPLE EMBODIMENT 20. The integrated circuit of example embodiment 19, comprising a delay alignment buffer that aligns the results corresponding to the first matrix of the batch of matrices in time and aligns the results corresponding to the second matrix of the batch of matrices in time.
EXAMPLE EMBODIMENT 21. A method comprising:
using a triangular spatial array of processing elements to perform a first type of matrix decomposition; and
using the same triangular spatial array of processing elements to perform a second type of matrix decomposition.
EXAMPLE EMBODIMENT 22. The method of example embodiment 21, wherein using the triangular spatial array of processing elements to perform the first type of matrix decomposition comprises:
EXAMPLE EMBODIMENT 23. The method of example embodiment 21 or 22, wherein the first type of matrix decomposition comprises Cholesky decomposition, LU decomposition, Cholesky-based minimum mean square error (MMSE), Givens-Rotation QR based MMSE, or Gram-Schmidt QR decomposition and the second type of matrix decomposition comprises a different one of Cholesky decomposition, LU decomposition, Cholesky-based minimum mean square error (MMSE), Givens-Rotation QR based MMSE, or Gram-Schmidt QR decomposition.
EXAMPLE EMBODIMENT 24. A system comprising:
EXAMPLE EMBODIMENT 25. The system of example embodiment 24, comprising:
While the embodiments set forth in the present disclosure may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the disclosure is not intended to be limited to the particular forms disclosed. The disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure as defined by the following appended claims. Moreover, the techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
This application is a U.S. national stage filing of PCT Application No. PCT/CN2020/117932, filed Sep. 25, 2020, entitled “Programmable Spatial Array for Matrix Decomposition,” which is incorporated by reference herein in its entirety for all purposes.