This disclosure relates generally to Fast Fourier Transforms (FFT), and more specifically to a self-ordering FFT, which eliminates vector memory access to non-contiguous elements.
Radix-2 Discrete Fourier Transforms (DFT) are one of the most commonly used signal processing algorithms, spanning a plethora of application domains. For radix-2 sizes, that is, for DFT sizes that are powers of 2, the most commonly used implementation is based on the Cooley-Tukey Fast Fourier Transform algorithm. The FFT algorithm for N-point data has a complexity on the order of N*log2(N), in contrast to the order of N^2 complexity needed for the direct DFT. An in-place decimation-in-frequency (DIF) version takes FFT inputs in linear order and produces the outputs in bit-reversed order. Thus, there is a need to undo the bit-reversal (or, equivalently, to bit-reverse the outputs again, since bit-reversal is its own inverse) to retrieve the FFT outputs in their original linear order.
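For reference, the bit-reversal permutation itself can be sketched in a few lines of Python. The snippet below is purely illustrative and is not part of any disclosed embodiment; the function name bit_reverse_indices is chosen here only for this example.

```python
def bit_reverse_indices(n):
    # For a power-of-2 size n, reverse the log2(n) address bits of each index.
    bits = n.bit_length() - 1
    return [int(format(i, '0{}b'.format(bits))[::-1], 2) for i in range(n)]

print(bit_reverse_indices(8))  # [0, 4, 2, 6, 1, 5, 3, 7]
```

Applying this permutation a second time restores the original linear order, which is why re-applying bit-reversal to the DIF-FFT outputs recovers the linearly ordered spectrum.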
Single instruction multiple data (SIMD) based Digital Signal Processor (DSP) architectures are very popular since they provide high computational power at high efficiency. Efficiency is typically quantified in terms of power/FLOP or area/FLOP. SIMD engines derive their efficiency by using wide data buses for efficient memory transfers and by executing the same numerical operation on a parallel set of Arithmetic Units (AUs) (e.g., an operation on vector data driven by a single instruction).
A conventional in-place implementation of a DIF-FFT on a SIMD engine requires hardware support for multiple levels of special source-data and writeback multiplexing. In addition, the outputs are produced in bit-reversed order, so they need to be reordered to enable downstream vectorized operations on the SIMD processor.
To achieve SIMD efficiency, memory accesses are streamlined so that a wide data bus fetches/writes data from contiguous locations in memory, commensurate with the size of the vector data path. For an N-point FFT with a data bus width of W elements, the theoretical number of data accesses is given by (N/W)*log2(N)*2, where the last factor of 2 accounts for the loads and stores.
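For example (the values here are chosen purely for illustration), an N = 1024 point FFT with a bus width of W = 16 elements requires (1024/16)*log2(1024)*2 = 64*10*2 = 1280 vectorized memory accesses across its 10 stages.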
The present invention is illustrated by way of example and is not limited by the accompanying figures, in which like references indicate similar elements. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale.
Embodiments described herein provide for the automatic undoing of the bit-reversed ordering of the FFT by enabling incremental intra-vector permutations at each stage of an FFT, thereby avoiding the need to access a vector memory in sets of non-contiguous elements at any stage of the FFT operation. Bit-reversed ordering is a consequence of the pattern of butterfly operations inherent in each stage of the DIF FFT algorithm. This automatic vector reordering is particularly beneficial when used with SIMD engines (e.g., processors) due to the substantial savings in power consumption and computational overhead obtained by reducing wide-data-width accesses to memory. The term “SOS-FFT” is used to describe a Self-Ordering SIMD based FFT as described in the embodiments of this disclosure. The term “bit-reversed reordering,” as used throughout this disclosure, refers to the reordering of indexed elements within an array or vector of multiple elements. In some embodiments, each element is a complex number. Each complex number may have a data width (e.g., 8-bit, 16-bit or 32-bit) chosen in part based on the data width of a SIMD processor used to implement the FFT operation. In the embodiments described herein, bit-reversed reordering is achieved by enabling specific patterns of permutations of the output vectors at each stage of the FFT, and is accomplished through write back multiplexers (“MUXes”) controlled through a combination of various write back modes, thereby providing a final FFT output with a linear ordering of elements without incurring any vector memory access overhead.
A DIF-FFT has log2(N) stages with N/2 radix-2 butterflies in each stage. A radix-2 butterfly in each stage is defined by the following operations:
y(n1) = x(n1) + x(n2)   [1]
y(n2) = w(n1)*x(n1) − w(n1)*x(n2)   [2]
where x( ) represents the outputs of the previous stage, y( ) represents the outputs of the current stage, and w(n1) represents a TWF (twiddle factor), which is derived from a complex exponential sequence. Assuming an in-place operation (e.g., the output indices are the same as the input indices for any butterfly), the indices n1 and n2 are separated by N/2 in the first stage, N/4 in the second stage, N/8 in the third stage, and so on, until a separation of 1 in the final stage. SIMD cores vectorize the operations of each stage by operating on sets of M contiguous n1 and n2 indices. The term “M” (e.g., M number) as used herein is the number of complex numbers or elements that can be loaded by a source register and also quantifies the number of radix-2 butterflies that can be performed in each clock cycle by a SIMD engine. The term “N” (e.g., N number) is the FFT size. The term “W” is a bus width of the memory used to store and retrieve data from the SIMD processor. The term “L” is equal to W/M and represents the number of M-element vectors that can be fetched from memory for each fetch or store operation. Some specialized hardware handling is required for the final stages of the FFT when the separation between n1 and n2 is less than or equal to M. As will be appreciated, for each stage, N elements need to be read and N elements need to be written. So, given a memory access width of W elements, N/W vectorized reads and N/W vectorized stores are required at each stage.
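The per-stage butterfly pattern described above can be illustrated with a scalar (non-SIMD) reference model. The following Python sketch is provided only as an illustration and is not the SOS-FFT data path; it applies equations [1] and [2] with the n1/n2 separation halving from N/2 down to 1, and it assumes the standard Cooley-Tukey DIF twiddle indexing.

```python
import cmath

def dif_fft_reference(x):
    # Scalar reference DIF-FFT; the output is left in bit-reversed order.
    n = len(x)
    y = list(x)
    sep = n // 2                    # n1/n2 separation: N/2, N/4, ..., 1
    while sep >= 1:
        for start in range(0, n, 2 * sep):
            for k in range(sep):
                n1, n2 = start + k, start + k + sep
                w = cmath.exp(-2j * cmath.pi * k / (2 * sep))  # twiddle factor (TWF)
                a, b = y[n1], y[n2]
                y[n1] = a + b                                  # equation [1]
                y[n2] = w * (a - b)                            # equation [2]
        sep //= 2
    return y
```

For an 8-point input, this reference produces X[0], X[4], X[2], X[6], X[1], X[5], X[3], X[7] in storage order, i.e., the bit-reversed ordering that the SOS-FFT undoes through its per-stage writeback permutations.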
The SOS-FFT relies upon “intra-vector” reordering at each stage of the FFT. For example, the SOS-FFT algorithm reads 2 “vectors” of contiguous data for the butterfly operations and permutes them when writing them back. The write back re-ordering is handled by specialized MUXes on the vector data path within the AU and is simple and cost effective in hardware implementation terms. A key aspect of the SOS-FFT is that memory is always accessed as “contiguous vectors” and the writeback MUXes are always “intra”, implying that the outputs of a vector butterfly operation do not write out to any locations other than those contained within the current output vectors. The SOS-FFT is not an in-place design, in that a “scratch memory” of size N is required in addition to an N-element input/output memory. However, for any design that requires a bit-reversal operation following an in-place FFT, the total memory requirement is the same with the SOS-FFT as it is with an in-place FFT implementation.
After each vectorized butterfly, the output vectors are reordered (locally, within the current output vectors). A scratch memory and the output memory are used to store intermediate results between stages in a toggle fashion, such that the final stage output is written to the output buffer. After executing log2(N) stages of the SOS-FFT, the per-stage intra-vector permutations ensure that the bit-reversal is fully undone in the final output. For a SIMD engine with a vector path capable of supporting M butterflies per cycle, the computational time of the SOS-FFT operation is N*log2(N)/(2*M) clock cycles (i.e., equal to the processing time of the FFT itself, which implies that the reordering operation carries no processing-time overhead).
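As a purely illustrative example, with N = 1024 and M = 8, the SOS-FFT completes in 1024*log2(1024)/(2*8) = 1024*10/16 = 640 clock cycles, which is the same count as the underlying butterfly processing alone.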
If the N bit-reversed indices of the output vector of the DIF-FFT operation are partitioned into K contiguous groups each containing M indices, the difference between any pair of indices in any group is an integer multiple of K. Within each such group, the M sorted (ascending) element indices are ordered in an M-point bit-reversed order. For example, if N=8 elements, and the N elements are partitioned into 2 consecutive groups (e.g., K=2), the 8 indices (indexed elements) [0, 1, 2, 3, 4, 5, 6, 7] are bit reversed by the butterfly operations to generate [0, 4, 2, 6, 1, 5, 3, 7]. If we partition this into 2 consecutive groups, we get the two groups [0, 4, 2, 6] and [1, 5, 3, 7]. The absolute difference between each pair of indices within either group is an integer multiple of K=2. This implies that, for a SIMD-style operation handling butterflies between 2 sets of M contiguous samples, each operation will encounter sample indices within the final group as a part of either of the butterfly outputs within the first log2(M) stages. Final bit-reversal is done as a part of intra-group writeback. This principle is used to design log2(M)+1 special writeback MUXes for the SOS-FFT, controlled by one or more MUX modes including a Straight-Mode, a BR_Straight-Mode, an MbyL-Mode and a BR_MbyL-Mode, described in more detail below.
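The grouping property can be checked directly with a short, purely illustrative Python snippet; the values N = 8 and K = 2 below mirror the example above and are not drawn from any particular embodiment.

```python
N, K = 8, 2
M = N // K
# Bit-reversed index sequence for N = 8: [0, 4, 2, 6, 1, 5, 3, 7]
rev = [int(format(i, '03b')[::-1], 2) for i in range(N)]
groups = [rev[g * M:(g + 1) * M] for g in range(K)]
for group in groups:
    # Every pairwise index difference within a group is a multiple of K.
    assert all((a - b) % K == 0 for a in group for b in group)
print(groups)  # [[0, 4, 2, 6], [1, 5, 3, 7]]
```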
A vector data path fed from the cache 14 includes a first line buffer 20, which loads a full line of data of width W from the cache 14. A first MUX 22 multiplexes a vector of complex numbers or elements of width M from the first line buffer 20, to load a first storage (S1) 24, in accordance with a MUX mode (e.g., Straight-Mode). In some embodiments, a type conversion 26 converts data from the first MUX 22 to a format required by the first storage 24 (e.g., converting 8-bit, 16-bit or 32-bit data).
Another vector data path from the cache 14 includes a second line buffer 30, which loads a full line of data of width W from the cache 14. A second MUX 32 multiplexes a vector of complex numbers or elements of width M from the second line buffer 30, to load a second storage (S2) 34, in accordance with a MUX mode (e.g., Straight-Mode). In some embodiments, a type conversion 36 converts data from the second MUX 32 to a format required by the second storage 34 (e.g., converting 8-bit, 16-bit or 32-bit data). In an embodiment, the first storage 24 and the second storage 34 are both source register memories. In another embodiment, the first storage 24 and the second storage 34 are both cache memories. References to “source register” as applied to S1 24 and S2 34 throughout this disclosure should be considered to also apply to a cached memory implementation for S1 24 and S2 34 in an alternate embodiment.
A DIF butterfly 40 performs a butterfly transformation on at least one pair of elements read from S1 24 and S2 34 in accordance with equations [1] and [2] described above, wherein x(n1) and x(n2) correspond to the outputs of S1 24 and S2 34, and y(n1) and y(n2) correspond to V1 46 and V2 48. The butterfly 40 uses a TWF generated by a Special Arithmetic Unit (SAU) TWF generator 44, which is controlled by a TWF mode 42. The SAU-TWF 44 includes a numerically controlled oscillator unit that can generate complex exponential sequences (e.g., TWFs). A pair of output vectors V1 46 and V2 48 are generated by the DIF Bfly 40. A vector multiplexer V MUX 50 permutes V1 46 and V2 48 in accordance with a write back MUX mode (e.g., Straight-Mode, BR_Straight-Mode, MbyL-Mode or BR_MbyL-Mode), to undo the bit reversal inherent in the butterfly operation performed by the DIF Bfly 40. The V MUX 50 thereby generates re-ordered versions of the vectors V1 46 and V2 48, stored in A 52 and B 54 respectively, in accordance with the different writeback modes referenced above. In one embodiment, the vectors from A 52 and B 54 are written back into the cache 14 for processing by a subsequent stage of the FFT, or stored as the result of the last stage of the FFT. In another embodiment, the respective outputs of A 52 and B 54 are converted by type conversions 56 and 58, respectively, prior to being stored in the cache 14. The operation of type converters 56 and 58 is similar to that of the type converters 26 and 36 previously described. In some embodiments, data stored in the cache 14 is subsequently transferred to the TCM 12. The TCM 12 may include the scratch memory, output memory, vector memory and the like. As a consequence of the cumulative writeback MUX operations, the SOS-FFT generates a final FFT output wherein the elements are in linear order, thereby undoing the bit-reversal ordering inherent to the DIF-FFT algorithm.
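One way to model the vectorized butterfly and the V MUX 50 writeback step in software, purely as an illustrative sketch, is to treat the 2*M butterfly outputs as a single pool and apply a mode-dependent intra-vector permutation before splitting the result into the A and B writeback vectors. In the Python sketch below, the permutation table perm is a placeholder argument standing in for whichever writeback mode applies; the actual Straight-Mode, BR_Straight-Mode, MbyL-Mode and BR_MbyL-Mode patterns are defined elsewhere in this disclosure and are not reproduced here.

```python
def vectorized_butterfly_writeback(s1, s2, twiddles, perm):
    # s1, s2: M-element input vectors from S1 and S2 (complex values).
    # twiddles: M twiddle factors (TWFs) for this vector of butterflies.
    # perm: 2*M-entry intra-vector permutation standing in for the writeback mode.
    m = len(s1)
    v1 = [s1[i] + s2[i] for i in range(m)]                   # equation [1]
    v2 = [twiddles[i] * (s1[i] - s2[i]) for i in range(m)]   # equation [2]
    combined = v1 + v2                      # the 2*M outputs of the vector butterfly
    permuted = [combined[p] for p in perm]  # reorder; outputs never leave this pair
    return permuted[:m], permuted[m:]       # A and B writeback vectors
```

With perm set to the identity (list(range(2*M))), this reduces to the plain vectorized butterfly; the writeback modes replace the identity with the intra-vector patterns that cumulatively undo the bit-reversal.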
As will be appreciated, at least some of the embodiments as disclosed include at least the following. In one embodiment, a method for self-ordering Fast Fourier Transform (FFT) for Single Instruction Multiple Data engines comprises performing a butterfly operation on a first input vector and a second input vector to generate a first output vector and a second output vector, wherein the first input vector, the second input vector, the first output vector and the second output vector are each comprised of complex numbers, and a first order of the complex numbers of the first output vector is non-linear and a second order of the complex numbers of the second output vector is non-linear. A combination of complex numbers is reordered and exchanged between the first output vector and the second output vector to partially linearize the first order of the first output vector and to partially linearize the second order of the second output vector.
Alternative embodiments of the method for self-ordering Fast Fourier Transform (FFT) for Single Instruction Multiple Data engines include one of the following features, or any combination thereof. The first output vector is written back to a first storage and the second output vector is written back to a second storage. A first data type of the first output vector is converted before writing back to the first storage and a second data type of the second output vector is converted before writing back to the second storage. The first order and the second order are both linear in a final stage of the FFT, the butterfly operation performed for each of a plurality of stages of the FFT. The butterfly operation is performed on each one of a plurality of stages of a Decimation-In-Frequency FFT. At least one complex number of the second input vector is modified with a twiddle factor. The first output vector generated by the butterfly operation comprises adding each one of the complex numbers of the first input vector to a corresponding one of the complex numbers of the second input vector. The second output vector generated by the butterfly operation comprises subtracting each one of the complex numbers of the second input vector multiplied by a twiddle factor from a corresponding one of the complex numbers of the first input vector multiplied by the twiddle factor. The first source register is loaded with complex numbers of the first input vector received from a first multiplexer, the first multiplexer configured to multiplex a subset of a line of complex numbers received from a line buffer. A data type of the complex numbers of the first input vector is converted before loading the first source register.
In another embodiment, a method for self-ordering Fast Fourier Transform (FFT) for Single Instruction Multiple Data engines comprises transforming an N number of elements comprising first input elements and second input elements with an FFT comprising a plurality of stages, wherein the plurality of stages comprises at least one first stage, at least one second stage and a final stage, and wherein the N number is greater than an M number of a subset of the N number of elements loadable by each of a first storage and a second storage. For each stage, a butterfly operation is performed on a first input vector and a second input vector to generate a first output vector and a second output vector, wherein the first input vector is comprised of the first input elements, the second input vector is comprised of the second input elements, the first output vector is comprised of first output elements and the second output vector is comprised of second output elements, and a first order of the first output elements is non-linear and a second order of the second output elements is non-linear. A combination of elements is reordered and exchanged between the first output vector and the second output vector to partially linearize the first order of the first output vector and to partially linearize the second order of the second output vector.
Alternative embodiments of the method for self-ordering Fast Fourier Transform (FFT) for Single Instruction Multiple Data engines include one of the following features, or any combination thereof. The FFT is a Decimation-In-Frequency FFT. The plurality of stages comprises a first stage, the first output vector and the second output vector each partially linearized by a multiplexing mode comprising a Straight-Mode and written back to the respective first storage and second storage with an MbyL-Mode. The plurality of stages comprises a second stage, the first output vector and the second output vector each partially linearized by a multiplexing mode comprising a Straight-Mode and written back to the respective first storage and second storage with the Straight-Mode. The plurality of stages comprises a last stage, the first output vector and the second output vector each linearized by a multiplexing mode comprising a Straight-Mode and written back to the respective first storage and second storage with a BR_Straight-Mode.
In another embodiment, a method for self-ordering Fast Fourier Transform (FFT) for Single Instruction Multiple Data engines comprises transforming an N number of elements comprising first input elements and second input elements with an FFT comprising a plurality of stages, wherein the plurality of stages comprises at least one first stage and a final stage, and wherein the N number is less than or equal to an M number of a subset of the N number of elements loadable by each of a first storage and a second storage. For each stage, a butterfly operation is performed on a first input vector and a second input vector to generate a first output vector and a second output vector, wherein the first input vector is comprised of the first input elements, the second input vector is comprised of the second input elements, the first output vector is comprised of first output elements and the second output vector is comprised of second output elements, and a first order of the first output elements is non-linear and a second order of the second output elements is non-linear. A combination of elements is reordered and exchanged between the first output vector and the second output vector to partially linearize the first order of the first output vector and to partially linearize the second order of the second output vector.
Alternative embodiments of the method for self-ordering Fast Fourier Transform (FFT) for Single Instruction Multiple Data engines include one of the following features, or any combination thereof. The FFT is a Decimation-In-Frequency FFT. The plurality of stages comprises a first stage, the first output vector and the second output vector each partially linearized by a multiplexing mode comprising a Straight-Mode and written back to the respective first storage and second storage with an MbyL-Mode. The plurality of stages comprises a last stage and the N number is greater than 4, the first output vector and the second output vector each linearized by a multiplexing mode comprising a Straight-Mode, and written back to the respective first storage and second storage with a BR_MbyL-Mode. The plurality of stages comprises a last stage and the N number is less than or equal to 4, the first output vector and the second output vector each linearized by a multiplexing mode comprising a Straight-Mode, and written back to the respective first storage and second storage with an MbyL-Mode.
Although the invention is described herein with reference to specific embodiments, various modifications and changes can be made without departing from the scope of the present invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention. Any benefits, advantages, or solutions to problems that are described herein with regard to specific embodiments are not intended to be construed as a critical, required, or essential feature or element of any or all the claims.
Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements.